RADIANS (NoSQL query)
APPLIES TO: NoSQL
Returns the corresponding angle in radians for an angle specified in degrees.
Syntax
RADIANS(<numeric_expr>)
Arguments
numeric_expr: A numeric expression.
Return types
Returns a numeric expression.
Examples
The following example returns the radians for various degree values.
SELECT VALUE {
degrees90ToRadians: RADIANS(90),
degrees180ToRadians: RADIANS(180),
degrees270ToRadians: RADIANS(270),
degrees360ToRadians: RADIANS(360)
}
[
{
"degrees90ToRadians": 1.5707963267948966,
"degrees180ToRadians": 3.141592653589793,
"degrees270ToRadians": 4.71238898038469,
"degrees360ToRadians": 6.283185307179586
}
]
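The documented outputs can be cross-checked independently of the query engine, since the conversion is just radians = degrees × π / 180. A small Python sketch (using the standard library, unrelated to the NoSQL API itself):

```python
import math

# Cross-check the documented RADIANS outputs against math.radians;
# both compute radians = degrees * pi / 180.
expected = {
    90: 1.5707963267948966,
    180: 3.141592653589793,
    270: 4.71238898038469,
    360: 6.283185307179586,
}
for degrees, documented in expected.items():
    assert math.isclose(math.radians(degrees), documented, rel_tol=1e-15)
    print(degrees, "->", math.radians(degrees))
```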
Remarks
• This function doesn't use the index.
What to use for mobile apps? - Flutter vs React Native
Mobile apps are becoming more and more popular. Many companies want to have a piece of that pie. Instead of developing separate apps for Android and iOS - which is time-consuming - they are more often reaching for cross-platform frameworks - which allow building applications for both platforms from a single codebase. Google and Facebook released open-source tools to help with that - Flutter and React Native.
What are the benefits of using cross-platform frameworks?
Before cross-platform frameworks, we had to build a different application for each system. Most of the time this required developers who knew those systems, or a separate team for each one. Developing and maintaining multiple codebases was very expensive. By using cross-platform frameworks, we can have a single codebase for Android and iOS (and sometimes even for web and desktop - more on that later in this blog post). It simplifies the work, lowers development costs, and helps keep a consistent UI across different platforms. But which framework is better? What should I choose for my project? I’ll try to answer those questions in this article.
Flutter - native mobile apps with Material Design
Flutter is a toolkit for building native applications for mobile, web and desktop. The first stable version was released in 2018 and has been rapidly gaining popularity since then. At the time of writing this article, only Flutter for mobile apps is production-ready. The tool for building applications for the web is in beta and desktop is in alpha. You can check our opinion about it in Technology Radar.
React Native - mobile apps with native look and feel
React Native is a framework released by Facebook in 2015. It’s used for developing native mobile apps for Android and iOS. Third-party libraries can be used to also build applications for web and desktop. It was also included in our Technology Radar.
Which framework is better for my project - Flutter or React Native?
Both frameworks are backed by huge companies and a large number of developers, they help build cross-platform applications, are open source and very similar. Deciding on using one platform or another may not be easy. Let’s jump into details - we have some important questions to answer!
What language do they use?
React Native uses the most popular programming language today - JavaScript. That’s a big advantage, since a lot of developers already know JavaScript. It works well with TypeScript, too.
On the other hand, Flutter uses a language created by Google - Dart. It’s not as common as JavaScript, but the syntax is similar. For a new developer, it means learning not only the framework but also the language it uses.
Which framework is more popular?
React Native is more mature than its younger competitor, Flutter. The repositories for both tools have a similar number of stars on GitHub (Flutter - 86.1k, React Native - 84.6k), but React Native has a significantly larger number of developers contributing to the project (more than 2k). Popularity is definitely in React Native’s favor - there are a lot of resources to learn from, a lot of people eager to help, and there is a high chance that someone has already encountered the problems you may face.
Flutter, which was released in 2018, is gaining popularity rapidly, though. In the latest Stack Overflow annual Developer Survey, it took third place among the most loved frameworks (here). Flutter is trending right now, and if that continues it could surpass React Native soon.
[Chart: "Most loved frameworks, languages and tools" - a horizontal bar chart showing the percentage of developers who are developing with the language or technology and have expressed interest in continuing to develop with it.]
React Native is battle-tested and used in thousands of mobile apps - like Facebook, Instagram, Skype, Uber and many, many more. Flutter’s showcase is not as impressive - some of the most popular applications are Google Ads, Stadia, Reflectly and Alibaba.
Which is easier to install?
The installation of both frameworks is straightforward. The whole process of installation and setting up the development environment is described very well in the documentation for macOS, Windows, and Linux. If you are new to mobile development, React Native offers an easy way of starting by using Expo CLI which comes with some limitations - like bigger app size and lack of support for some APIs (you can read more about it here).
Which framework has better documentation?
Flutter’s documentation is more comprehensive than React Native’s. It contains tons of guides, samples, tutorials, hints, and solutions for common problems. Everything is divided nicely into easy to navigate categories. The API reference shows a lot of examples, instructions, and videos for some widgets, which makes it easier to understand.
Which framework provides smoother development experience?
For both tools, developing applications on virtual and physical devices doesn’t cause any problems, as each provides guides in its documentation. They both come with utilities that help developers in their work - like hot reload and a CLI. The CLI can generate, run and build a project. Hot reload saves a lot of time by showing the changes you make almost instantly on a device, without the need to recompile the whole app. Both toolkits are supported in most of the major IDEs through plugins and additional tools.
Which has more features?
Flutter has a lot more features and components built-in and available out-of-the-box. This is great because you can be sure that everything is supported by the Flutter team and shouldn’t have any problems with them. On the other hand, the number of components in React Native is smaller and relies heavily on third-party libraries. While using popular packages is pretty safe, using others can come with some risks (like sudden abandonment by its developer, lack of support or security and compatibility issues). With React Native you will find yourself jumping a lot between the official and third-party documentations.
Which framework is faster?
Flutter’s Dart code is compiled to native machine code. React Native uses something called the JavaScript bridge - the JavaScript code is compiled to native at runtime. In theory, Flutter should be faster. The problem is that there are not many performance tests made on more complex applications, and a lot depends on the device itself. On some devices, Flutter is closer to native performance; on others, React Native is.
What are available UI components?
React Native uses native components for UI rendering. It means that apps will look and feel more like native applications. Flutter takes a different approach - it uses the same UI on Android and iOS. This gives more control over the styling and ensures that on every device the app will look the same. Flutter also offers “Cupertino Widgets” to fit the iOS design, but it’s optional. React Native has fewer components than Flutter, so it will require using more third-party modules.
Which community is bigger - Flutter or React Native?
Community is a huge benefit of React Native. It’s very active and has 2 times more questions on Stack Overflow than Flutter. Thanks to that it’s possible that the problems you may encounter during the development were already encountered by other programmers. It also means that you will be able to find more libraries for React Native than for Flutter.
When it comes to packages Flutter has Pub and React Native has NPM (which is a registry of JavaScript packages in general).
How to test mobile apps?
Testing in Flutter works like a charm. The documentation covers testing thoroughly, with recipes and examples for unit, widget, and integration testing. It’s different in React Native - for testing you will have to use external tools (Jest is included by default), and the documentation doesn’t say much about testing. The good thing is that you can use one of the many JavaScript testing frameworks out there, and there are also tools designed specifically for testing React Native apps.
Native mobile app built in which framework will be easier to deploy?
The size of an app tends to be slightly bigger for Flutter than for React Native. React Native is more lightweight because it doesn’t have as many built-in features as Flutter. Apps built with either are still bigger than pure native mobile apps, but the developers on both teams are constantly working on reducing the size.
But what about deploying? Flutter’s documentation provides great step-by-step walkthroughs of releasing apps to the Apple App Store and Google Play Store. Besides, there is also a helpful guide for configuring automatic deployments. React Native comes up short here - the documentation only shows how to publish on the Google Play Store, but not on the App Store.
What to choose in 2020 for mobile apps development - Flutter or React Native?
React Native is certainly a mature and battle-tested framework. A great community and a large number of apps working in production prove that it’s a safe bet when it comes to developing mobile apps. Reliance on third-party libraries gives freedom of choice, but can also increase development time and cause unexpected problems. The documentation is one of its weakest points compared to Flutter’s.
Flutter comes packed with a lot of features and components that are not available in React Native. Many times you will not even have to reach for external libraries. Its documentation is comprehensive and provides many guides and examples. The biggest problem of Flutter is its age. It doesn’t have as many companies using it as React Native and the community is not as big, but it has been gaining popularity quite rapidly since its release in 2018 (and current trends show that it’s not going to stop soon). I can see that in the future it may come close or even surpass React Native.
Konrad Jaguszewski
Front-end Developer
I'm a self-taught web developer. Since childhood I was amazed by computers. I love creating and learning new stuff, be that programming, cooking, playing guitar or doing crafts.
Linear Switches in Gaming: Precision and Response Time
by sophiajames
Introduction
In the world of competitive gaming, where split-second decisions can make the difference between victory and defeat, having the right equipment is paramount. Among the many components of a gaming setup, the choice of mechanical keyboard switches is crucial. Linear switches, known for their smooth and consistent keystrokes, have gained popularity among gamers for their precision and lightning-fast response times. In this article, we will delve into the world of linear switches, exploring how they enhance gaming performance by offering precision and rapid response times.
Understanding Linear Switches
Before delving into their advantages, it’s essential to understand what linear switches are. Linear switches are a type of mechanical keyboard switch characterized by a consistent keystroke from top to bottom, without tactile bumps or audible clicks. When you press a key with a linear switch, you’ll experience a smooth and uninterrupted motion until the key registers.
Precision in Gaming
Precision in gaming is all about making accurate and deliberate movements. Linear switches play a pivotal role in achieving this precision due to their consistent actuation force and keystroke travel. Here’s how they contribute:
• Consistency: Linear switches actuate consistently, meaning you don’t have to apply varying levels of force to each keypress. This uniformity ensures that every input you make is reliable and predictable, leading to more precise movements in games.
• No Tactile Bumps: Unlike tactile switches that provide tactile feedback upon actuation, linear switches lack this feature. While some gamers prefer tactile feedback for typing, it can be distracting in gaming situations where rapid, precise inputs are necessary. Linear switches eliminate this distraction, allowing you to focus solely on your gameplay.
Response Time in Gaming
Response time is a critical factor in gaming, as it directly affects how quickly your actions translate into in-game movements. Linear switches contribute to lightning-fast response times through the following mechanisms:
• Faster Actuation: Linear switches typically have a shorter actuation distance compared to other switch types. This means that the key registers as pressed with a shorter keystroke, reducing the time between your intent to press a key and the game recognizing the action. Gamers can react more swiftly to in-game events, which is crucial in competitive environments.
• Reduced Debounce Time: Debounce time is the brief delay that occurs when a key is pressed and released, preventing unintended double presses. Linear switches have minimal debounce time, allowing for rapid successive keypresses without any delay, ideal for spamming abilities or executing complex combos.
• Quick Reset: After a keypress, linear switches reset quickly to their initial position, ready for the next input. This swift reset ensures that you’re always prepared to make the next move without any delay.
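The debounce idea above can be sketched as a simple filter: accept a press event only if enough time has passed since the last accepted press. This is an illustration only - the window length and event timings below are invented for the example, not taken from any particular switch or firmware:

```python
def debounce(press_times_ms, window_ms=5):
    """Keep only presses separated from the last accepted press by >= window_ms."""
    accepted = []
    for t in press_times_ms:
        if not accepted or t - accepted[-1] >= window_ms:
            accepted.append(t)
    return accepted

# Contact bounce produces spurious events 1-2 ms after a real press;
# a 5 ms window collapses each burst into a single keypress.
events = [0, 1, 2, 100, 101, 250]
print(debounce(events))  # -> [0, 100, 250]
```

A shorter debounce window means the next legitimate keypress registers sooner, which is why linear switches with minimal debounce time feel more responsive.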
Conclusion
In the world of competitive gaming, where precision and response time are paramount, linear switches have carved out a niche for themselves. Their consistency, absence of tactile feedback, and lightning-fast response times make them a favorite choice among professional gamers and enthusiasts alike. When paired with the right gaming peripherals and skills, linear switches can provide that extra edge needed to secure victory in the most intense gaming competitions. Whether you’re a casual gamer or aspiring esports champion, linear switches are worth considering as a valuable addition to your gaming arsenal.
EPiServer 7 custom routing problem
Vote:
Hi I created a custom episerver route in Global.asax
RouteTable.Routes.MapContentRoute("UserProfileRoute", "user/{uid}/{languageBranch}/{node}/{partial}/{action}", new
{
uid = "",
languageBranch = "en",
node = "2424",
partial = UrlParameter.Optional,
action = UrlParameter.Optional
});
The idea of this route is that when I open the following link "www.mywebsite.com/user/123123" it should open the UserProfile EPiServer page with id "2424",
but with the above-mentioned url it takes me to the start page instead, which has id 3.
What am I doing wrong in this scenario?
The second problem I am having is that when I change the language in the url, it never changes the language of the page, e.g. "www.mywebsite.com/user/123123/fr" still stays in English.
#65422
Jan 29, 2013 16:31
Vote:
Same issue here.
#65426
Jan 29, 2013 17:46
Vote:
When you use MapContentRoute it will treat the parts of the url that are inside {} as segment (something that is implementing EPiServer.Web.Routing.Segments.ISegment). There are some predefined keys that are handled by default, e.g. {node} that is a content instance, {language} that handles language segment. {uid} however will not be recognized and thereby it will be handled as a ParameterSegment.
So in your case you should use {language} instead of {languageBranch}.
A parameter segment works in the way that during incoming routing the value of the segment will be placed in RequestContext.RouteData.Values with the parameter name as key, in your case "uid". During creation of outgoing links the parameter will output value if the key (e.g "uid") is part of RouteValues dictionary.
You can also have your own custom ISegment implementation (instead of ParameterSegment) that handles your segment, Then you need to use the overload of MapContentRoute that takes a ContentParameters as parameter and in property SegmentMappings add your custom segment with your url pattern as key as e.g. contentParameters.SegmentMappings.Add("uid", customSegmentInstance)
#65430
Jan 29, 2013 19:15
Vote:
So what would be the best way to direct the url to a specific EPiServer page? Writing a custom ISegment implementation for the {node} segment?
#65468
Jan 30, 2013 14:24
Vote:
I do not think any of the built-in segments will do what you want, so you need to create your own segment like UserIdSegment (it should be fairly simple; there is a base class SegmentBase that you can use). You could e.g. pass in the ContentReference of the page in the constructor of your segment, and in RouteDataMatch check if the incoming segment matches a user id and, if so, set RoutedContentLink. Something like:
public class UserIdSegment : SegmentBase
{
private ContentReference _linkToProfilePage;
public UserIdSegment(string name, ContentReference contentLink) : base(name)
{
_linkToProfilePage = contentLink;
}
public override string GetVirtualPathSegment(System.Web.Routing.RequestContext requestContext, System.Web.Routing.RouteValueDictionary values)
{
return null;
}
public override bool RouteDataMatch(SegmentContext context)
{
var segmentPair = context.GetNextValue(context.RemainingPath);
if (IsValidUserId(segmentPair.Next))
{
context.RemainingPath = segmentPair.Remaining;
context.RoutedContentLink = _linkToProfilePage;
return true;
}
return false;
}
private bool IsValidUserId(string p)
{
// User id validation logic goes here, for example:
int userId;
return int.TryParse(p, out userId);
}
}
Then you can register your route with something like:
var segment = new UserIdSegment("uid", new ContentReference(2424));
var routingParameters = new MapContentRouteParameters()
{
SegmentMappings = new Dictionary<string, ISegment>()
};
routingParameters.SegmentMappings.Add("uid", segment);
routes.MapContentRoute(
name: "userprofiles",
url: "user/{uid}/{language}/{action}",
defaults: new { action = "index" },
parameters: routingParameters);
#65487
Jan 30, 2013 16:47
Vote:
Works like a charm! Thank you!
#65494
Jan 30, 2013 18:19
Vote:
In case someone reads this in the future, I wrote an article on the topic of custom routing featuring, amongst other things, a custom segment inspired by Johan's above.
#69887
Apr 09, 2013 10:18
Vote:
Maybe I'm missing something here, but I've set up a custom route using a custom segment, and the route parameter I'm really concerned about is always null in my controller. For example, if I use the example above, the 'uid' parameter will be null in the page controller for the profile page. I can see it fine if I pass it as a querystring, but that defeats the purpose. What am I missing?
#72000
Jun 05, 2013 0:45
Vote:
Chris, I got the same problem but changed the code for RouteDataMatch to this,
----------
public override bool RouteDataMatch(SegmentContext context)
{
var segmentPair = context.GetNextValue(context.RemainingPath);
var userId = segmentPair.Next;
if (IsValidUserId(userId))
{
context.RemainingPath = segmentPair.Remaining;
context.RoutedContentLink = _linkToProfilePage;
context.RouteData.Values.Add("uid", userId);
return true;
}
return false;
}
----------
e.g. adding the value to the RouteData object. Don't know if this is the correct way to do it but it works :)
/Viktor
#73538
Jul 26, 2013 11:14
This thread is locked and should be used for reference only. Please use the Episerver CMS 7 and earlier versions forum to open new discussions.
Tracking Heap Allocation Requests
This topic applies to native C++ only, in the Express and the Pro, Premium, and Ultimate editions. It does not apply to Visual Basic, C#, F#, or Web Developer projects.
Although pinpointing the source file name and line number at which an assert or reporting macro executes is often very useful in locating the cause of a problem, the same is not as likely to be true of heap allocation functions. While macros can be inserted at many appropriate points in an application's logic tree, an allocation is often buried in a special routine that is called from many different places at many different times. The question is usually not what line of code made a bad allocation, but rather which one of the thousands of allocations made by that line of code was bad and why.
The simplest way to identify the specific heap allocation call that went bad is to take advantage of the unique allocation request number associated with each block in the debug heap. When information about a block is reported by one of the dump functions, this allocation request number is enclosed in braces (for example, "{36}").
Once you know the allocation request number of an improperly allocated block, you can pass this number to _CrtSetBreakAlloc to create a breakpoint. Execution will break just before allocating the block, and you can backtrack to determine what routine was responsible for the bad call. To avoid recompiling, you can accomplish the same thing in the debugger by setting _crtBreakAlloc to the allocation request number you are interested in.
A somewhat more complicated approach is to create Debug versions of your own allocation routines, comparable to the _dbg versions of the heap allocation functions. You can then pass source file and line number arguments through to the underlying heap allocation routines, and you will immediately be able to see where a bad allocation originated.
For example, suppose your application contains a commonly used routine similar to the following:
int addNewRecord(struct RecStruct * prevRecord,
int recType, int recAccess)
{
// ...code omitted through actual allocation...
if ((newRec = malloc(recSize)) == NULL)
// ... rest of routine omitted too ...
}
In a header file, you could add code such as the following:
#ifdef _DEBUG
#define addNewRecord(p, t, a) \
addNewRecord(p, t, a, __FILE__, __LINE__)
#endif
Next, you could change the allocation in your record-creation routine as follows:
int addNewRecord(struct RecStruct *prevRecord,
int recType, int recAccess
#ifdef _DEBUG
, const char *srcFile, int srcLine
#endif
)
{
/* ... code omitted through actual allocation ... */
if ((newRec = _malloc_dbg(recSize, _NORMAL_BLOCK,
srcFile, srcLine)) == NULL)
/* ... rest of routine omitted too ... */
}
Now the source file name and line number where addNewRecord was called will be stored in each resulting block allocated in the debug heap and will be reported when that block is examined.
Phalcon How to do SELECT not in with SUB QUERY in query manager?
I'd like to know how to use SELECT NOT IN as a subquery in Phalcon.
For example, I know I can use the following to do notIn with an array of values:
return User::query()
    ->where(" gender!=:gender: ", array('gender' => $gender))
    ->andWhere(" verify=1 ")
    ->notInWhere('user_id', "SELECT user_id FROM user_bannned WHERE user_id=:user_id:")
    ->order(" last_visit DESC ")
    ->limit($limit)
    ->execute();
Problem is
"SELECT user_id FROM user_bannned WHERE user_id=:user_id:"
how do i do this subquery with model query manager ?
ANY ideas ? or workarounds ?
Just like in mysql.
Just like in mysql.
What do you mean by this? SELECT subqueries don't work in models.
hasOne, hasMany , phql https://docs.phalconphp.com/en/latest/reference/phql.html
Can you explain in detail how to do this? Because I want something like the below done:
User::query()
    ->andWhere(" gender!=:gender: ", array('gender' => $gender))
    ->andWhere(" verified=1 ")
    ->inWhere('mode', ['active','pending'])
    ->notInWhere('user_id', "(SELECT user_from AS user_id FROM XYZ\Models\UsersBlock WHERE user_to=:user_id:) UNION (SELECT user_to AS user_id FROM XYZ\Models\UsersBlock WHERE user_from=:user_id:)")
    ->bind('user_id', $this->user->user_id)
    ->order(" last_visit DESC ")
    ->limit($limit)
Import the Openfire schema mysql windows
Hello,
I’m trying to accomplish this step in the install guide:
Import the schema file from the resources/database directory of the installation folder:
Windows: type openfire_mysql.sql | mysql [databaseName];
I’m not sure where to issue this command. I’ve tried from the Windows command line in both the resources/database openfire directory and from the MySQL bin directory. I’ve also tried running the command from the MySQL server CLI, and I’ve tried adding the MySQL directory to the PATH environment variable.
When I enter the following command:
"type openfire_mysql.sql | mysql openfire;"
I get the following error:
" mysql is not recognized as an internal or external command, operable program or batch file"
I know this error indicates that the program being run is not in the path variables so you have to run it from the program directory. I’ve tried that and still get the same error.
What am i doing wrong?
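The "'mysql' is not recognized" error means the shell can't find mysql.exe on the PATH, so either run the command from MySQL's bin directory (referencing the .sql file by its full path), or add that bin directory to PATH. Note also that `type openfire_mysql.sql | mysql [databaseName]` just feeds the file to the client's standard input - `mysql [databaseName] < openfire_mysql.sql` is equivalent. A small POSIX sketch of that equivalence, using `wc` as a stand-in for the `mysql` client so it runs anywhere:

```shell
# Create a stand-in for openfire_mysql.sql
printf 'CREATE TABLE ofUser (username VARCHAR(64));\n' > schema.sql

# Piping the file (what `type file | mysql db` does on Windows) ...
cat schema.sql | wc -l

# ... is equivalent to redirecting stdin (`mysql db < file`)
wc -l < schema.sql
```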
Class: RuboCop::Cop::Style::StabbyLambdaParentheses
Inherits:
Cop
• Object
show all
Includes:
ConfigurableEnforcedStyle
Defined in:
lib/rubocop/cop/style/stabby_lambda_parentheses.rb
Overview
Check for parentheses around stabby lambda arguments. There are two different styles. Defaults to `require_parentheses`.
Examples:
EnforcedStyle: require_parentheses (default)
# bad
->a,b,c { a + b + c }
# good
->(a,b,c) { a + b + c}
EnforcedStyle: require_no_parentheses
# bad
->(a,b,c) { a + b + c }
# good
->a,b,c { a + b + c}
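Both styles parse identically - the cop is purely stylistic. This can be confirmed with plain Ruby, no RuboCop required:

```ruby
# Parenthesized and unparenthesized stabby lambda arguments behave the same;
# the cop only enforces a consistent surface style.
with_parens    = ->(a, b, c) { a + b + c }
without_parens = ->a, b, c { a + b + c }

puts with_parens.call(1, 2, 3)
puts without_parens.call(1, 2, 3)
```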
Constant Summary collapse
MSG_REQUIRE =
'Wrap stabby lambda arguments with parentheses.'
MSG_NO_REQUIRE =
'Do not wrap stabby lambda arguments ' \
'with parentheses.'
Constants included from Util
Util::LITERAL_REGEX
Instance Attribute Summary
Attributes inherited from Cop
#config, #corrections, #offenses, #processed_source
Instance Method Summary collapse
Methods included from ConfigurableEnforcedStyle
#alternative_style, #alternative_styles, #ambiguous_style_detected, #correct_style_detected, #detected_style, #detected_style=, #no_acceptable_style!, #no_acceptable_style?, #opposite_style_detected, #style, #style_detected, #style_parameter_name, #supported_styles, #unexpected_style_detected
Methods inherited from Cop
#add_offense, all, autocorrect_incompatible_with, badge, #config_to_allow_offenses, #config_to_allow_offenses=, #cop_config, cop_name, #cop_name, #correct, department, #duplicate_location?, #excluded_file?, #find_location, #highlights, inherited, #initialize, #join_force?, lint?, match?, #messages, non_rails, #parse, qualified_cop_name, #relevant_file?, #target_rails_version, #target_ruby_version
Methods included from NodePattern::Macros
#def_node_matcher, #def_node_search, #node_search, #node_search_all, #node_search_body, #node_search_first
Methods included from AST::Sexp
#s
Methods included from AutocorrectLogic
#autocorrect?, #autocorrect_enabled?, #autocorrect_requested?, #support_autocorrect?
Methods included from IgnoredNode
#ignore_node, #ignored_node?, #part_of_ignored_node?
Methods included from Util
begins_its_line?, comment_line?, double_quotes_required?, escape_string, first_part_of_call_chain, interpret_string_escapes, line_range, needs_escaping?, on_node, parentheses?, same_line?, to_string_literal, to_supported_styles, tokens, trim_string_interporation_escape_character
Methods included from PathUtil
absolute?, chdir, hidden_dir?, hidden_file_in_not_hidden_dir?, match_path?, pwd, relative_path, reset_pwd, smart_path
Constructor Details
This class inherits a constructor from RuboCop::Cop::Cop
Instance Method Details
#autocorrect(node) ⇒ Object
# File 'lib/rubocop/cop/style/stabby_lambda_parentheses.rb', line 36
def autocorrect(node)
if style == :require_parentheses
missing_parentheses_corrector(node)
elsif style == :require_no_parentheses
unwanted_parentheses_corrector(node)
end
end
#on_send(node) ⇒ Object
# File 'lib/rubocop/cop/style/stabby_lambda_parentheses.rb', line 28
def on_send(node)
return unless stabby_lambda_with_args?(node)
return unless redundant_parentheses?(node) ||
missing_parentheses?(node)
add_offense(node.block_node.arguments)
end
Article
Posted 22 Oct 2018
Creating Web API in ASP.NET Core 2.0
28 Sep 2019, CPOL, 13 min read
In this guide, we'll use WideWorldImporters database to create a Web API.
In this article, we'll cover everything from creating a project, to understanding models, to setting up dependency injection, to running a Web API, and learning about unit tests - how to add them, how they work, and how to run them.
Introduction
Let's create a Web API with the latest version of ASP.NET Core and Entity Framework Core.
In this guide, we'll use WideWorldImporters database to create a Web API.
REST APIs provide at least the following operations:
• GET
• POST
• PUT
• DELETE
There are other operations for REST, but they aren't necessary for this guide.
Those operations allow clients to perform actions through REST API, so our Web API must contain those operations.
WideWorldImporters database contains 4 schemas:
• Application
• Purchasing
• Sales
• Warehouse
In this guide, we'll work with the Warehouse.StockItems table. We'll add code to work with this entity: retrieving stock items, retrieving a stock item by id, and creating, updating and deleting stock items in the database.
The version for this API is 1.
This is the route table for the API:

Verb     Url                              Description
GET      api/v1/Warehouse/StockItem       Retrieves stock items
GET      api/v1/Warehouse/StockItem/id    Retrieves a stock item by id
POST     api/v1/Warehouse/StockItem       Creates a new stock item
PUT      api/v1/Warehouse/StockItem/id    Updates an existing stock item
DELETE   api/v1/Warehouse/StockItem/id    Deletes an existing stock item
Keep these routes in mind, because the API must implement all of them.
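As a quick sanity check of the table, the URLs a client would call can be built from the verb and an optional id. The base address below is a hypothetical local development host, not something defined by the article:

```python
# Hypothetical local development address for the API.
BASE = "http://localhost:5000/api/v1/Warehouse/StockItem"

def stock_item_route(verb, stock_item_id=None):
    """Build a (verb, url) pair matching the route table above."""
    url = BASE if stock_item_id is None else f"{BASE}/{stock_item_id}"
    return verb, url

print(stock_item_route("GET"))        # list stock items
print(stock_item_route("GET", 1))     # get stock item 1
print(stock_item_route("POST"))       # create a stock item
print(stock_item_route("PUT", 1))     # update stock item 1
print(stock_item_route("DELETE", 1))  # delete stock item 1
```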
Prerequisites
Software
Skills
• C#
• ORM (Object Relational Mapping)
• TDD (Test Driven Development)
• RESTful services
Using the Code
For this guide, the working directory for source code is C:\Projects.
Step 01 - Create Project
Open Visual Studio and follow these steps:
1. Go to File > New > Project
2. Go to Installed > Visual C# > .NET Core
3. Set the name for project as WideWorldImporters.API
4. Click OK
Create Project
In the next window, select API and the latest version for .ASP.NET Core, in this case is 2.1:
Configuration For Api
Once Visual Studio has finished with creation for solution, we'll see this window:
Overview For Api
Step 02 - Install Nuget Packages
In this step, we need to install the following NuGet packages:
• EntityFrameworkCore.SqlServer
• Swashbuckle.AspNetCore
Now we'll install the Microsoft.EntityFrameworkCore.SqlServer package from NuGet. Right-click on the WideWorldImporters.API project:
Manage NuGet Packages
Change to Browse tab and type Microsoft.EntityFrameworkCore.SqlServer:
Install EntityFrameworkCore.SqlServer Package
Next, install Swashbuckle.AspNetCore package:
Install Swashbuckle.AspNetCore Package
The Swashbuckle.AspNetCore package enables the help page for the Web API.
This is the structure of the project.
Now, run the project to check that the solution is ready; press F5 and Visual Studio will show this browser window:
First Run
By default, Visual Studio adds a file named ValuesController in the Controllers directory; remove it from the project.
Step 03 - Add Models
Now, create a directory named Models and add the following files:
• Domain.cs
• Extensions.cs
• Requests.cs
• Responses.cs
Domain.cs will contain all code related to Entity Framework Core.
Extensions.cs will contain the extension methods for DbContext and collections.
Requests.cs will contain the definitions for requests.
Responses.cs will contain the definitions for responses.
Code for Domain.cs file:
C#
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
namespace WideWorldImporters.API.Models
{
#pragma warning disable CS1591
public partial class StockItem
{
public StockItem()
{
}
public StockItem(int? stockItemID)
{
StockItemID = stockItemID;
}
public int? StockItemID { get; set; }
public string StockItemName { get; set; }
public int? SupplierID { get; set; }
public int? ColorID { get; set; }
public int? UnitPackageID { get; set; }
public int? OuterPackageID { get; set; }
public string Brand { get; set; }
public string Size { get; set; }
public int? LeadTimeDays { get; set; }
public int? QuantityPerOuter { get; set; }
public bool? IsChillerStock { get; set; }
public string Barcode { get; set; }
public decimal? TaxRate { get; set; }
public decimal? UnitPrice { get; set; }
public decimal? RecommendedRetailPrice { get; set; }
public decimal? TypicalWeightPerUnit { get; set; }
public string MarketingComments { get; set; }
public string InternalComments { get; set; }
public string CustomFields { get; set; }
public string Tags { get; set; }
public string SearchDetails { get; set; }
public int? LastEditedBy { get; set; }
public DateTime? ValidFrom { get; set; }
public DateTime? ValidTo { get; set; }
}
public class StockItemsConfiguration : IEntityTypeConfiguration<StockItem>
{
public void Configure(EntityTypeBuilder<StockItem> builder)
{
// Set configuration for entity
builder.ToTable("StockItems", "Warehouse");
// Set key for entity
builder.HasKey(p => p.StockItemID);
// Set configuration for columns
builder.Property(p => p.StockItemName).HasColumnType("nvarchar(200)").IsRequired();
builder.Property(p => p.SupplierID).HasColumnType("int").IsRequired();
builder.Property(p => p.ColorID).HasColumnType("int");
builder.Property(p => p.UnitPackageID).HasColumnType("int").IsRequired();
builder.Property(p => p.OuterPackageID).HasColumnType("int").IsRequired();
builder.Property(p => p.Brand).HasColumnType("nvarchar(100)");
builder.Property(p => p.Size).HasColumnType("nvarchar(40)");
builder.Property(p => p.LeadTimeDays).HasColumnType("int").IsRequired();
builder.Property(p => p.QuantityPerOuter).HasColumnType("int").IsRequired();
builder.Property(p => p.IsChillerStock).HasColumnType("bit").IsRequired();
builder.Property(p => p.Barcode).HasColumnType("nvarchar(100)");
builder.Property(p => p.TaxRate).HasColumnType("decimal(18, 3)").IsRequired();
builder.Property(p => p.UnitPrice).HasColumnType("decimal(18, 2)").IsRequired();
builder.Property(p => p.RecommendedRetailPrice).HasColumnType("decimal(18, 2)");
builder.Property(p => p.TypicalWeightPerUnit).HasColumnType("decimal(18, 3)").IsRequired();
builder.Property(p => p.MarketingComments).HasColumnType("nvarchar(max)");
builder.Property(p => p.InternalComments).HasColumnType("nvarchar(max)");
builder.Property(p => p.CustomFields).HasColumnType("nvarchar(max)");
builder.Property(p => p.LastEditedBy).HasColumnType("int").IsRequired();
// Columns with default value
builder
.Property(p => p.StockItemID)
.HasColumnType("int")
.IsRequired()
.HasDefaultValueSql("NEXT VALUE FOR [Sequences].[StockItemID]");
// Computed columns
builder
.Property(p => p.Tags)
.HasColumnType("nvarchar(max)")
.HasComputedColumnSql("json_query([CustomFields],N'$.Tags')");
builder
.Property(p => p.SearchDetails)
.HasColumnType("nvarchar(max)")
.IsRequired()
.HasComputedColumnSql("concat([StockItemName],N' ',[MarketingComments])");
// Columns with generated value on add or update
builder
.Property(p => p.ValidFrom)
.HasColumnType("datetime2")
.IsRequired()
.ValueGeneratedOnAddOrUpdate();
builder
.Property(p => p.ValidTo)
.HasColumnType("datetime2")
.IsRequired()
.ValueGeneratedOnAddOrUpdate();
}
}
public class WideWorldImportersDbContext : DbContext
{
public WideWorldImportersDbContext(DbContextOptions<WideWorldImportersDbContext> options)
: base(options)
{
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
// Apply configurations for entity
modelBuilder
.ApplyConfiguration(new StockItemsConfiguration());
base.OnModelCreating(modelBuilder);
}
public DbSet<StockItem> StockItems { get; set; }
}
#pragma warning restore CS1591
}
Code for Extensions.cs file:
C#
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
namespace WideWorldImporters.API.Models
{
#pragma warning disable CS1591
public static class WideWorldImportersDbContextExtensions
{
public static IQueryable<StockItem> GetStockItems(this WideWorldImportersDbContext dbContext, int pageSize = 10, int pageNumber = 1, int? lastEditedBy = null, int? colorID = null, int? outerPackageID = null, int? supplierID = null, int? unitPackageID = null)
{
// Get query from DbSet
var query = dbContext.StockItems.AsQueryable();
// Filter by: 'LastEditedBy'
if (lastEditedBy.HasValue)
query = query.Where(item => item.LastEditedBy == lastEditedBy);
// Filter by: 'ColorID'
if (colorID.HasValue)
query = query.Where(item => item.ColorID == colorID);
// Filter by: 'OuterPackageID'
if (outerPackageID.HasValue)
query = query.Where(item => item.OuterPackageID == outerPackageID);
// Filter by: 'SupplierID'
if (supplierID.HasValue)
query = query.Where(item => item.SupplierID == supplierID);
// Filter by: 'UnitPackageID'
if (unitPackageID.HasValue)
query = query.Where(item => item.UnitPackageID == unitPackageID);
return query;
}
public static async Task<StockItem> GetStockItemsAsync(this WideWorldImportersDbContext dbContext, StockItem entity)
=> await dbContext.StockItems.FirstOrDefaultAsync(item => item.StockItemID == entity.StockItemID);
public static async Task<StockItem> GetStockItemsByStockItemNameAsync(this WideWorldImportersDbContext dbContext, StockItem entity)
=> await dbContext.StockItems.FirstOrDefaultAsync(item => item.StockItemName == entity.StockItemName);
}
public static class IQueryableExtensions
{
public static IQueryable<TModel> Paging<TModel>(this IQueryable<TModel> query, int pageSize = 0, int pageNumber = 0) where TModel : class
=> pageSize > 0 && pageNumber > 0 ? query.Skip((pageNumber - 1) * pageSize).Take(pageSize) : query;
}
#pragma warning restore CS1591
}
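The Paging extension above is plain LINQ, so its skip/take arithmetic can be checked without a database. A minimal sketch (the PagingDemo helper is ours, for illustration; it mirrors the same formula over an array):

```csharp
using System.Linq;

public static class PagingDemo
{
    // Mirrors the Paging extension: skip the previous pages, take one page,
    // and fall back to the full source when paging values aren't positive
    public static int[] Page(int[] source, int pageSize, int pageNumber)
        => pageSize > 0 && pageNumber > 0
            ? source.Skip((pageNumber - 1) * pageSize).Take(pageSize).ToArray()
            : source;
}
```

For example, with 25 items and a page size of 10, page 3 skips 20 items and returns the remaining 5, so the total page count is 3.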
Code for Requests.cs file:
C#
using System;
using System.ComponentModel.DataAnnotations;
namespace WideWorldImporters.API.Models
{
#pragma warning disable CS1591
public class PostStockItemsRequest
{
[Key]
public int? StockItemID { get; set; }
[Required]
[StringLength(200)]
public string StockItemName { get; set; }
[Required]
public int? SupplierID { get; set; }
public int? ColorID { get; set; }
[Required]
public int? UnitPackageID { get; set; }
[Required]
public int? OuterPackageID { get; set; }
[StringLength(100)]
public string Brand { get; set; }
[StringLength(40)]
public string Size { get; set; }
[Required]
public int? LeadTimeDays { get; set; }
[Required]
public int? QuantityPerOuter { get; set; }
[Required]
public bool? IsChillerStock { get; set; }
[StringLength(100)]
public string Barcode { get; set; }
[Required]
public decimal? TaxRate { get; set; }
[Required]
public decimal? UnitPrice { get; set; }
public decimal? RecommendedRetailPrice { get; set; }
[Required]
public decimal? TypicalWeightPerUnit { get; set; }
public string MarketingComments { get; set; }
public string InternalComments { get; set; }
public string CustomFields { get; set; }
public string Tags { get; set; }
[Required]
public string SearchDetails { get; set; }
[Required]
public int? LastEditedBy { get; set; }
public DateTime? ValidFrom { get; set; }
public DateTime? ValidTo { get; set; }
}
public class PutStockItemsRequest
{
[Required]
[StringLength(200)]
public string StockItemName { get; set; }
[Required]
public int? SupplierID { get; set; }
public int? ColorID { get; set; }
[Required]
public decimal? UnitPrice { get; set; }
}
public static class Extensions
{
public static StockItem ToEntity(this PostStockItemsRequest request)
=> new StockItem
{
StockItemID = request.StockItemID,
StockItemName = request.StockItemName,
SupplierID = request.SupplierID,
ColorID = request.ColorID,
UnitPackageID = request.UnitPackageID,
OuterPackageID = request.OuterPackageID,
Brand = request.Brand,
Size = request.Size,
LeadTimeDays = request.LeadTimeDays,
QuantityPerOuter = request.QuantityPerOuter,
IsChillerStock = request.IsChillerStock,
Barcode = request.Barcode,
TaxRate = request.TaxRate,
UnitPrice = request.UnitPrice,
RecommendedRetailPrice = request.RecommendedRetailPrice,
TypicalWeightPerUnit = request.TypicalWeightPerUnit,
MarketingComments = request.MarketingComments,
InternalComments = request.InternalComments,
CustomFields = request.CustomFields,
Tags = request.Tags,
SearchDetails = request.SearchDetails,
LastEditedBy = request.LastEditedBy,
ValidFrom = request.ValidFrom,
ValidTo = request.ValidTo
};
}
#pragma warning restore CS1591
}
Code for Responses.cs file:
C#
using System.Collections.Generic;
using System.Net;
using Microsoft.AspNetCore.Mvc;
namespace WideWorldImporters.API.Models
{
#pragma warning disable CS1591
public interface IResponse
{
string Message { get; set; }
bool DidError { get; set; }
string ErrorMessage { get; set; }
}
public interface ISingleResponse<TModel> : IResponse
{
TModel Model { get; set; }
}
public interface IListResponse<TModel> : IResponse
{
IEnumerable<TModel> Model { get; set; }
}
public interface IPagedResponse<TModel> : IListResponse<TModel>
{
int ItemsCount { get; set; }
double PageCount { get; }
}
public class Response : IResponse
{
public string Message { get; set; }
public bool DidError { get; set; }
public string ErrorMessage { get; set; }
}
public class SingleResponse<TModel> : ISingleResponse<TModel>
{
public string Message { get; set; }
public bool DidError { get; set; }
public string ErrorMessage { get; set; }
public TModel Model { get; set; }
}
public class ListResponse<TModel> : IListResponse<TModel>
{
public string Message { get; set; }
public bool DidError { get; set; }
public string ErrorMessage { get; set; }
public IEnumerable<TModel> Model { get; set; }
}
public class PagedResponse<TModel> : IPagedResponse<TModel>
{
public string Message { get; set; }
public bool DidError { get; set; }
public string ErrorMessage { get; set; }
public IEnumerable<TModel> Model { get; set; }
public int PageSize { get; set; }
public int PageNumber { get; set; }
public int ItemsCount { get; set; }
public double PageCount
    // Ceiling avoids reporting an extra page when ItemsCount divides evenly by PageSize
    => PageSize > 0 ? System.Math.Ceiling((double)ItemsCount / PageSize) : 1;
}
public static class ResponseExtensions
{
public static IActionResult ToHttpResponse(this IResponse response)
{
var status = response.DidError ? HttpStatusCode.InternalServerError : HttpStatusCode.OK;
return new ObjectResult(response)
{
StatusCode = (int)status
};
}
public static IActionResult ToHttpResponse<TModel>(this ISingleResponse<TModel> response)
{
var status = HttpStatusCode.OK;
if (response.DidError)
status = HttpStatusCode.InternalServerError;
else if (response.Model == null)
status = HttpStatusCode.NotFound;
return new ObjectResult(response)
{
StatusCode = (int)status
};
}
public static IActionResult ToHttpResponse<TModel>(this IListResponse<TModel> response)
{
var status = HttpStatusCode.OK;
if (response.DidError)
status = HttpStatusCode.InternalServerError;
else if (response.Model == null)
status = HttpStatusCode.NoContent;
return new ObjectResult(response)
{
StatusCode = (int)status
};
}
}
#pragma warning restore CS1591
}
Understanding Models
DOMAIN
The StockItem class is the representation of the Warehouse.StockItems table.
The StockItemsConfiguration class contains the mapping for the StockItem class.
The WideWorldImportersDbContext class is the link between the database and the C# code; this class handles queries, commits changes to the database, and more.
EXTENSIONS
WideWorldImportersDbContextExtensions contains extension methods for the DbContext instance: one method to retrieve stock items, another to retrieve a stock item by id, and a last one to retrieve a stock item by name.
IQueryableExtensions contains extension methods for IQueryable, to add a paging feature.
REQUESTS
We have the following definitions:
• PostStockItemsRequest
• PutStockItemsRequest
PostStockItemsRequest represents the model to create a new stock item; it contains all properties required to save it in the database.
PutStockItemsRequest represents the model to update an existing stock item; in this case it contains only 4 properties: StockItemName, SupplierID, ColorID and UnitPrice. This class doesn't contain a StockItemID property because the id comes from the route for the controller's action.
Request models don't need to contain all the properties of the entities; we don't need to expose the full definition in a request or response, and it's a good practice to limit the data by using models with only the necessary properties.
The Extensions class contains an extension method for PostStockItemsRequest that returns an instance of the StockItem class built from the request model.
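The data annotations on these request models ([Required], [StringLength]) are what ASP.NET Core model binding checks before the controller action runs; they can also be exercised directly with the Validator class from System.ComponentModel.DataAnnotations. A minimal sketch with a trimmed-down request model (MiniStockItemRequest is ours, for illustration):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class MiniStockItemRequest
{
    [Required]
    [StringLength(200)]
    public string StockItemName { get; set; }

    [Required]
    public int? SupplierID { get; set; }
}

public static class RequestValidation
{
    // Returns true when all data annotations on the model are satisfied
    public static bool IsValid(object model)
        => Validator.TryValidateObject(
            model,
            new ValidationContext(model),
            new List<ValidationResult>(),
            validateAllProperties: true);
}
```

An empty request fails validation because both properties are [Required]; a populated one passes. This is the same rule set that makes ModelState.IsValid false in the controller.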
RESPONSES
These are the interfaces:
• IResponse
• ISingleResponse<TModel>
• IListResponse<TModel>
• IPagedResponse<TModel>
Each of these interfaces has an implementation. Why do we need these definitions, if it's simpler to return objects without wrapping them in these models? Keep in mind that this Web API will provide operations for clients with or without a UI, and it's easier to have properties to send a message, carry a model, or report that an error occurred; in addition, we set the Http status code in the response to describe the result of the request.
These classes are generic because, this way, we save time defining responses in the future; this Web API only returns a response for a single entity, a list and a paged list.
ISingleResponse represents a response for a single entity.
IListResponse represents a response with a list, for example all shipments for an existing order, without paging.
IPagedResponse represents a response with pagination, for example all orders in a date range.
The ResponseExtensions class contains extension methods to convert a response into an Http response; these methods return InternalServerError (500) if an error occurred, OK (200) on success, NotFound (404) if an entity doesn't exist in the database, and NoContent (204) for list responses without a model.
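The decision in ToHttpResponse for a single response can be summarized as a small pure function. This sketch mirrors that logic with plain status codes instead of ObjectResult (the StatusRules name is ours, for illustration):

```csharp
public static class StatusRules
{
    // Mirrors ToHttpResponse<TModel> for ISingleResponse<TModel>:
    // error -> 500, missing model -> 404, otherwise -> 200
    public static int ForSingleResponse(bool didError, object model)
    {
        if (didError)
            return 500;

        return model == null ? 404 : 200;
    }
}
```

Note that the error check comes first: a response that both failed and has no model still reports 500, not 404.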
Step 04 - Add Controller
Now, inside of Controllers directory, add a code file with name WarehouseController.cs and add this code:
C#
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;
using WideWorldImporters.API.Models;
namespace WideWorldImporters.API.Controllers
{
#pragma warning disable CS1591
[ApiController]
[Route("api/v1/[controller]")]
public class WarehouseController : ControllerBase
{
protected readonly ILogger Logger;
protected readonly WideWorldImportersDbContext DbContext;
public WarehouseController(ILogger<WarehouseController> logger, WideWorldImportersDbContext dbContext)
{
Logger = logger;
DbContext = dbContext;
}
#pragma warning restore CS1591
// GET
// api/v1/Warehouse/StockItem
/// <summary>
/// Retrieves stock items
/// </summary>
/// <param name="pageSize">Page size</param>
/// <param name="pageNumber">Page number</param>
/// <param name="lastEditedBy">Last edit by (user id)</param>
/// <param name="colorID">Color id</param>
/// <param name="outerPackageID">Outer package id</param>
/// <param name="supplierID">Supplier id</param>
/// <param name="unitPackageID">Unit package id</param>
/// <returns>A response with stock items list</returns>
/// <response code="200">Returns the stock items list</response>
/// <response code="500">If there was an internal server error</response>
[HttpGet("StockItem")]
[ProducesResponseType(200)]
[ProducesResponseType(500)]
public async Task<IActionResult> GetStockItemsAsync(int pageSize = 10, int pageNumber = 1, int? lastEditedBy = null, int? colorID = null, int? outerPackageID = null, int? supplierID = null, int? unitPackageID = null)
{
Logger?.LogDebug("'{0}' has been invoked", nameof(GetStockItemsAsync));
var response = new PagedResponse<StockItem>();
try
{
// Get the "proposed" query from repository
var query = DbContext.GetStockItems();
// Set paging values
response.PageSize = pageSize;
response.PageNumber = pageNumber;
// Get the total rows
response.ItemsCount = await query.CountAsync();
// Get the specific page from database
response.Model = await query.Paging(pageSize, pageNumber).ToListAsync();
response.Message = string.Format("Page {0} of {1}, Total of products: {2}.", pageNumber, response.PageCount, response.ItemsCount);
Logger?.LogInformation("The stock items have been retrieved successfully.");
}
catch (Exception ex)
{
response.DidError = true;
response.ErrorMessage = "There was an internal error, please contact to technical support.";
Logger?.LogCritical("There was an error on '{0}' invocation: {1}", nameof(GetStockItemsAsync), ex);
}
return response.ToHttpResponse();
}
// GET
// api/v1/Warehouse/StockItem/5
/// <summary>
/// Retrieves a stock item by ID
/// </summary>
/// <param name="id">Stock item id</param>
/// <returns>A response with stock item</returns>
/// <response code="200">Returns the stock item</response>
/// <response code="404">If the stock item does not exist</response>
/// <response code="500">If there was an internal server error</response>
[HttpGet("StockItem/{id}")]
[ProducesResponseType(200)]
[ProducesResponseType(404)]
[ProducesResponseType(500)]
public async Task<IActionResult> GetStockItemAsync(int id)
{
Logger?.LogDebug("'{0}' has been invoked", nameof(GetStockItemAsync));
var response = new SingleResponse<StockItem>();
try
{
// Get the stock item by id
response.Model = await DbContext.GetStockItemsAsync(new StockItem(id));
}
catch (Exception ex)
{
response.DidError = true;
response.ErrorMessage = "There was an internal error, please contact to technical support.";
Logger?.LogCritical("There was an error on '{0}' invocation: {1}", nameof(GetStockItemAsync), ex);
}
return response.ToHttpResponse();
}
// POST
// api/v1/Warehouse/StockItem/
/// <summary>
/// Creates a new stock item
/// </summary>
/// <param name="request">Request model</param>
/// <returns>A response with new stock item</returns>
/// <response code="200">Returns the created stock item</response>
/// <response code="201">A response for the creation of the stock item</response>
/// <response code="400">For bad request</response>
/// <response code="500">If there was an internal server error</response>
[HttpPost("StockItem")]
[ProducesResponseType(200)]
[ProducesResponseType(201)]
[ProducesResponseType(400)]
[ProducesResponseType(500)]
public async Task<IActionResult> PostStockItemAsync([FromBody]PostStockItemsRequest request)
{
Logger?.LogDebug("'{0}' has been invoked", nameof(PostStockItemAsync));
var response = new SingleResponse<StockItem>();
try
{
var existingEntity = await DbContext
.GetStockItemsByStockItemNameAsync(new StockItem { StockItemName = request.StockItemName });
if (existingEntity != null)
ModelState.AddModelError("StockItemName", "Stock item name already exists");
if (!ModelState.IsValid)
return BadRequest();
// Create entity from request model
var entity = request.ToEntity();
// Add entity to repository
DbContext.Add(entity);
// Save entity in database
await DbContext.SaveChangesAsync();
// Set the entity to response model
response.Model = entity;
}
catch (Exception ex)
{
response.DidError = true;
response.ErrorMessage = "There was an internal error, please contact to technical support.";
Logger?.LogCritical("There was an error on '{0}' invocation: {1}", nameof(PostStockItemAsync), ex);
}
return response.ToHttpResponse();
}
// PUT
// api/v1/Warehouse/StockItem/5
/// <summary>
/// Updates an existing stock item
/// </summary>
/// <param name="id">Stock item ID</param>
/// <param name="request">Request model</param>
/// <returns>A response as update stock item result</returns>
/// <response code="200">If stock item was updated successfully</response>
/// <response code="400">For bad request</response>
/// <response code="404">If the stock item does not exist</response>
/// <response code="500">If there was an internal server error</response>
[HttpPut("StockItem/{id}")]
[ProducesResponseType(200)]
[ProducesResponseType(400)]
[ProducesResponseType(404)]
[ProducesResponseType(500)]
public async Task<IActionResult> PutStockItemAsync(int id, [FromBody]PutStockItemsRequest request)
{
Logger?.LogDebug("'{0}' has been invoked", nameof(PutStockItemAsync));
var response = new Response();
try
{
// Get stock item by id
var entity = await DbContext.GetStockItemsAsync(new StockItem(id));
// Validate if entity exists
if (entity == null)
return NotFound();
// Set changes to entity
entity.StockItemName = request.StockItemName;
entity.SupplierID = request.SupplierID;
entity.ColorID = request.ColorID;
entity.UnitPrice = request.UnitPrice;
// Update entity in repository
DbContext.Update(entity);
// Save entity in database
await DbContext.SaveChangesAsync();
}
catch (Exception ex)
{
response.DidError = true;
response.ErrorMessage = "There was an internal error, please contact to technical support.";
Logger?.LogCritical("There was an error on '{0}' invocation: {1}", nameof(PutStockItemAsync), ex);
}
return response.ToHttpResponse();
}
// DELETE
// api/v1/Warehouse/StockItem/5
/// <summary>
/// Deletes an existing stock item
/// </summary>
/// <param name="id">Stock item ID</param>
/// <returns>A response as delete stock item result</returns>
/// <response code="200">If stock item was deleted successfully</response>
/// <response code="404">If the stock item does not exist</response>
/// <response code="500">If there was an internal server error</response>
[HttpDelete("StockItem/{id}")]
[ProducesResponseType(200)]
[ProducesResponseType(404)]
[ProducesResponseType(500)]
public async Task<IActionResult> DeleteStockItemAsync(int id)
{
Logger?.LogDebug("'{0}' has been invoked", nameof(DeleteStockItemAsync));
var response = new Response();
try
{
// Get stock item by id
var entity = await DbContext.GetStockItemsAsync(new StockItem(id));
// Validate if entity exists
if (entity == null)
return NotFound();
// Remove entity from repository
DbContext.Remove(entity);
// Delete entity in database
await DbContext.SaveChangesAsync();
}
catch (Exception ex)
{
response.DidError = true;
response.ErrorMessage = "There was an internal error, please contact to technical support.";
Logger?.LogCritical("There was an error on '{0}' invocation: {1}", nameof(DeleteStockItemAsync), ex);
}
return response.ToHttpResponse();
}
}
}
The process for all controller actions is:
1. Log the method invocation.
2. Create the response instance according to the action (paged, list or single).
3. Access the database through the DbContext instance.
4. If the database invocation fails, set the DidError property to true and set the ErrorMessage property to: There was an internal error, please contact to technical support. It isn't recommended to expose error details in the response; it's better to save the full exception details in a log file.
5. Return the result as an Http response.
Keep in mind that all method names end with the Async suffix, because all operations are async, but we don't use this suffix in the routes declared in the Http attributes.
Step 05 - Setting Up Dependency Injection
ASP.NET Core enables dependency injection natively; this means we don't need any 3rd party framework to inject dependencies into controllers.
This is a big mind shift coming from Web Forms and ASP.NET MVC: in those technologies, using a framework to inject dependencies was a luxury; in ASP.NET Core, dependency injection is a built-in, fundamental feature.
The project template for ASP.NET Core contains a class named Startup; in this class, we must add the configuration to inject instances of DbContext, services, loggers, etc.
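Conceptually, what the container does for WarehouseController is just constructor injection: the class declares what it needs, and someone else decides which implementation to pass in. A rough framework-free sketch (IClock, SystemClock and ReportService are ours, for illustration):

```csharp
public interface IClock
{
    System.DateTime Now { get; }
}

public class SystemClock : IClock
{
    public System.DateTime Now => System.DateTime.Now;
}

public class ReportService
{
    private readonly IClock _clock;

    // The dependency arrives through the constructor; the container
    // (or a unit test with a fake clock) decides which implementation to use
    public ReportService(IClock clock)
    {
        _clock = clock;
    }

    public string Stamp(string message)
        => _clock.Now.Year + ": " + message;
}
```

Registering services in ConfigureServices is what lets ASP.NET Core build these constructor chains for you, exactly as it builds WarehouseController from its logger and DbContext.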
Modify the code of Startup.cs file to look like this:
C#
using System;
using System.IO;
using System.Reflection;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Swashbuckle.AspNetCore.Swagger;
using WideWorldImporters.API.Controllers;
using WideWorldImporters.API.Models;
namespace WideWorldImporters.API
{
#pragma warning disable CS1591
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
// Add configuration for DbContext
// Use connection string from appsettings.json file
services.AddDbContext<WideWorldImportersDbContext>(builder =>
{
builder.UseSqlServer(Configuration["AppSettings:ConnectionString"]);
});
// Set up dependency injection for controller's logger
services.AddScoped<ILogger, Logger<WarehouseController>>();
// Register the Swagger generator, defining 1 or more Swagger documents
services.AddSwaggerGen(options =>
{
options.SwaggerDoc("v1", new Info { Title = "WideWorldImporters API", Version = "v1" });
// Get xml comments path
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
// Set xml path
options.IncludeXmlComments(xmlPath);
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
app.UseDeveloperExceptionPage();
// Enable middleware to serve generated Swagger as a JSON endpoint.
app.UseSwagger();
// Enable middleware to serve swagger-ui (HTML, JS, CSS, etc.), specifying the Swagger JSON endpoint.
app.UseSwaggerUI(options =>
{
options.SwaggerEndpoint("/swagger/v1/swagger.json", "WideWorldImporters API V1");
});
app.UseMvc();
}
}
#pragma warning restore CS1591
}
The ConfigureServices method specifies how dependencies will be resolved; in this method, we need to set up the DbContext and logging.
The Configure method adds the configuration for the Http request pipeline.
Step 06 - Running Web API
Before running the Web API project, add the connection string in the appsettings.json file:
JSON
{
"Logging": {
"LogLevel": {
"Default": "Warning"
}
},
"AllowedHosts": "*",
"AppSettings": {
"ConnectionString": "server=(local);database=WideWorldImporters;integrated security=yes;"
}
}
In order to show descriptions in the help page, enable the XML documentation file for the Web API project:
1. Right click on Project > Properties
2. Go to Build > Output
3. Enable XML documentation file
4. Save changes
Enable XML Documentation File
Now, press F5 to start debugging the Web API project; if everything is OK, we'll get the following output in the browser:
Get Stock Items In Browser
Also, we can load the help page in another tab:
Help Page
Step 07 - Add Unit Tests
In order to add unit tests for API project, follow these steps:
1. Right click on Solution > Add > New Project
2. Go to Installed > Visual C# > Test > xUnit Test Project (.NET Core)
3. Set the name for project as WideWorldImporters.API.UnitTests
4. Click OK
Add Unit Tests Project
Manage references for WideWorldImporters.API.UnitTests project:
Add Reference To Api Project
Now, add a reference to the WideWorldImporters.API project:
Reference Manager for Unit Tests Project
Once the project has been created, add the following NuGet packages to it:
• Microsoft.AspNetCore.Mvc.Core
• Microsoft.EntityFrameworkCore.InMemory
Remove UnitTest1.cs file.
Save changes and build WideWorldImporters.API.UnitTests project.
Now, we proceed to add the code related to unit tests; these tests will work with an in-memory database.
What is TDD? Testing is a common practice nowadays because, with unit tests, it's easy to test features before publishing; Test Driven Development (TDD) is the way to define unit tests and validate the behavior of code.
Another concept in TDD is AAA: Arrange, Act and Assert. Arrange is the block for the creation of objects, Act is the block that places all invocations of methods, and Assert is the block that validates the results of those invocations.
Since we're working with an in-memory database for the unit tests, we need to create a class to mock the WideWorldImportersDbContext class and add data to perform testing on the WarehouseController operations.
To be clear: these unit tests do not establish a connection with SQL Server.
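The Arrange/Act/Assert shape doesn't depend on any test framework; xUnit only adds the [Fact] attribute and the Assert class on top of it. A minimal framework-free sketch of the pattern, exercising the same skip/take paging logic used by this API (AaaDemo is ours, for illustration):

```csharp
using System.Linq;

public static class AaaDemo
{
    public static bool Paging_ReturnsExpectedPage()
    {
        // Arrange: create the objects under test
        var source = Enumerable.Range(1, 25).AsQueryable();
        var pageSize = 10;
        var pageNumber = 2;

        // Act: invoke the operation being tested
        var page = source.Skip((pageNumber - 1) * pageSize).Take(pageSize).ToArray();

        // Assert: validate the results
        return page.Length == 10 && page.First() == 11 && page.Last() == 20;
    }
}
```

The xUnit tests below follow exactly this structure, with the in-memory DbContext standing in for the IQueryable source in the Arrange block.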
For unit tests, add the following files:
• DbContextMocker.cs
• DbContextExtensions.cs
• WarehouseControllerUnitTest.cs
Code for DbContextMocker.cs file:
C#
using Microsoft.EntityFrameworkCore;
using WideWorldImporters.API.Models;
namespace WideWorldImporters.API.UnitTests
{
public static class DbContextMocker
{
public static WideWorldImportersDbContext GetWideWorldImportersDbContext(string dbName)
{
// Create options for DbContext instance
var options = new DbContextOptionsBuilder<WideWorldImportersDbContext>()
.UseInMemoryDatabase(databaseName: dbName)
.Options;
// Create instance of DbContext
var dbContext = new WideWorldImportersDbContext(options);
// Add entities in memory
dbContext.Seed();
return dbContext;
}
}
}
Code for DbContextExtensions.cs file:
C#
using System;
using WideWorldImporters.API.Models;
namespace WideWorldImporters.API.UnitTests
{
public static class DbContextExtensions
{
public static void Seed(this WideWorldImportersDbContext dbContext)
{
// Add entities for DbContext instance
dbContext.StockItems.Add(new StockItem
{
StockItemID = 1,
StockItemName = "USB missile launcher (Green)",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 25.00m,
RecommendedRetailPrice = 37.38m,
TypicalWeightPerUnit = 0.300m,
MarketingComments = "Complete with 12 projectiles",
CustomFields = "{ \"CountryOfManufacture\": \"China\", \"Tags\": [\"USB Powered\"] }",
Tags = "[\"USB Powered\"]",
SearchDetails = "USB missile launcher (Green) Complete with 12 projectiles",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 2,
StockItemName = "USB rocket launcher (Gray)",
SupplierID = 12,
ColorID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 25.00m,
RecommendedRetailPrice = 37.38m,
TypicalWeightPerUnit = 0.300m,
MarketingComments = "Complete with 12 projectiles",
CustomFields = "{ \"CountryOfManufacture\": \"China\", \"Tags\": [\"USB Powered\"] }",
Tags = "[\"USB Powered\"]",
SearchDetails = "USB rocket launcher (Gray) Complete with 12 projectiles",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 3,
StockItemName = "Office cube periscope (Black)",
SupplierID = 12,
ColorID = 3,
UnitPackageID = 7,
OuterPackageID = 6,
LeadTimeDays = 14,
QuantityPerOuter = 10,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 18.50m,
RecommendedRetailPrice = 27.66m,
TypicalWeightPerUnit = 0.250m,
MarketingComments = "Need to see over your cubicle wall? This is just what's needed.",
CustomFields = "{ \"CountryOfManufacture\": \"China\", \"Tags\": [] }",
Tags = "[]",
SearchDetails = "Office cube periscope (Black) Need to see over your cubicle wall? This is just what's needed.",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 4,
StockItemName = "USB food flash drive - sushi roll",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - sushi roll ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 5,
StockItemName = "USB food flash drive - hamburger",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"16GB\",\"USB Powered\"] }",
Tags = "[\"16GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - hamburger ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 6,
StockItemName = "USB food flash drive - hot dog",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - hot dog ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 7,
StockItemName = "USB food flash drive - pizza slice",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"16GB\",\"USB Powered\"] }",
Tags = "[\"16GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - pizza slice ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 8,
StockItemName = "USB food flash drive - dim sum 10 drive variety pack",
SupplierID = 12,
UnitPackageID = 9,
OuterPackageID = 9,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 240.00m,
RecommendedRetailPrice = 358.80m,
TypicalWeightPerUnit = 0.500m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - dim sum 10 drive variety pack ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 9,
StockItemName = "USB food flash drive - banana",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"16GB\",\"USB Powered\"] }",
Tags = "[\"16GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - banana ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 10,
StockItemName = "USB food flash drive - chocolate bar",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - chocolate bar ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 11,
StockItemName = "USB food flash drive - cookie",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"16GB\",\"USB Powered\"] }",
Tags = "[\"16GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - cookie ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.StockItems.Add(new StockItem
{
StockItemID = 12,
StockItemName = "USB food flash drive - donut",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB food flash drive - donut ",
LastEditedBy = 1,
ValidFrom = new DateTime(2016, 5, 31, 23, 11, 0),
ValidTo = new DateTime(9999, 12, 31, 23, 59, 59)
});
dbContext.SaveChanges();
}
}
}
Code for WarehouseControllerUnitTest.cs file:
C#
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using WideWorldImporters.API.Controllers;
using WideWorldImporters.API.Models;
using Xunit;
namespace WideWorldImporters.API.UnitTests
{
public class WarehouseControllerUnitTest
{
[Fact]
public async Task TestGetStockItemsAsync()
{
// Arrange
var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestGetStockItemsAsync));
var controller = new WarehouseController(null, dbContext);
// Act
var response = await controller.GetStockItemsAsync() as ObjectResult;
var value = response.Value as IPagedResponse<StockItem>;
dbContext.Dispose();
// Assert
Assert.False(value.DidError);
}
[Fact]
public async Task TestGetStockItemAsync()
{
// Arrange
var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestGetStockItemAsync));
var controller = new WarehouseController(null, dbContext);
var id = 1;
// Act
var response = await controller.GetStockItemAsync(id) as ObjectResult;
var value = response.Value as ISingleResponse<StockItem>;
dbContext.Dispose();
// Assert
Assert.False(value.DidError);
}
[Fact]
public async Task TestPostStockItemAsync()
{
// Arrange
var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestPostStockItemAsync));
var controller = new WarehouseController(null, dbContext);
var requestModel = new PostStockItemsRequest
{
StockItemID = 100,
StockItemName = "USB anime flash drive - Goku",
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB anime flash drive - Goku",
LastEditedBy = 1,
ValidFrom = DateTime.Now,
ValidTo = DateTime.Now.AddYears(5)
};
// Act
var response = await controller.PostStockItemAsync(requestModel) as ObjectResult;
var value = response.Value as ISingleResponse<StockItem>;
dbContext.Dispose();
// Assert
Assert.False(value.DidError);
}
[Fact]
public async Task TestPutStockItemAsync()
{
// Arrange
var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestPutStockItemAsync));
var controller = new WarehouseController(null, dbContext);
var id = 12;
var requestModel = new PutStockItemsRequest
{
StockItemName = "USB food flash drive (Update)",
SupplierID = 12,
ColorID = 3
};
// Act
var response = await controller.PutStockItemAsync(id, requestModel) as ObjectResult;
var value = response.Value as IResponse;
dbContext.Dispose();
// Assert
Assert.False(value.DidError);
}
[Fact]
public async Task TestDeleteStockItemAsync()
{
// Arrange
var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestDeleteStockItemAsync));
var controller = new WarehouseController(null, dbContext);
var id = 5;
// Act
var response = await controller.DeleteStockItemAsync(id) as ObjectResult;
var value = response.Value as IResponse;
dbContext.Dispose();
// Assert
Assert.False(value.DidError);
}
}
}
As we can see, WarehouseControllerUnitTest contains all of the unit tests for the Web API. These are the methods:
Method Description
TestGetStockItemsAsync Retrieves the stock items
TestGetStockItemAsync Retrieves an existing stock item by ID
TestPostStockItemAsync Creates a new stock item
TestPutStockItemAsync Updates an existing stock item
TestDeleteStockItemAsync Deletes an existing stock item
How Do Unit Tests Work?
DbContextMocker creates an instance of WideWorldImportersDbContext using an in-memory database; the dbName parameter sets the name of the in-memory database. It then invokes the Seed method, which adds entities to the WideWorldImportersDbContext instance so the tests have data to work with.
The DbContextExtensions class contains the Seed extension method.
The WarehouseControllerUnitTest class contains all of the tests for the WarehouseController class.
Keep in mind that each test uses a different in-memory database: inside each test method, the database name is derived from the name of the test method using the nameof operator.
At this level (unit tests), we only need to check the operations of the repositories; there is no need to work with a SQL database (relations, transactions, etc.).
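The core of DbContextMocker is EF Core's in-memory provider. A minimal sketch of that pattern (assuming, as is conventional, that WideWorldImportersDbContext exposes a constructor taking DbContextOptions) looks like this:
C#
using Microsoft.EntityFrameworkCore;
using WideWorldImporters.API.Models;
namespace WideWorldImporters.API.UnitTests
{
    public static class InMemoryDbContextFactory
    {
        public static WideWorldImportersDbContext Create(string dbName)
        {
            // Each distinct dbName yields an isolated in-memory store,
            // which is why the tests pass nameof(TestMethod) as the name
            var options = new DbContextOptionsBuilder<WideWorldImportersDbContext>()
                .UseInMemoryDatabase(databaseName: dbName)
                .Options;
            return new WideWorldImportersDbContext(options);
        }
    }
}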
The process for unit tests is:
1. Create an instance of WideWorldImportersDbContext
2. Create an instance of controller
3. Invoke controller's method
4. Get value from controller's invocation
5. Dispose WideWorldImportersDbContext instance
6. Validate response
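Mapped onto code, these six steps form the skeleton that every test in WarehouseControllerUnitTest follows (the test name here is only illustrative):
C#
[Fact]
public async Task TestSomethingAsync()
{
    // 1. Create an instance of WideWorldImportersDbContext (in-memory)
    var dbContext = DbContextMocker.GetWideWorldImportersDbContext(nameof(TestSomethingAsync));
    // 2. Create an instance of the controller (the logger isn't required in tests)
    var controller = new WarehouseController(null, dbContext);
    // 3. Invoke the controller's method
    var response = await controller.GetStockItemsAsync() as ObjectResult;
    // 4. Get the value from the controller's invocation
    var value = response.Value as IPagedResponse<StockItem>;
    // 5. Dispose the WideWorldImportersDbContext instance
    dbContext.Dispose();
    // 6. Validate the response
    Assert.False(value.DidError);
}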
Running Unit Tests
Save all changes and build WideWorldImporters.API.UnitTests project.
Now, check tests in test explorer:
Test Explorer For Unit Tests
Run all tests using the test explorer. If you get any error, check the error message, review the code and repeat the process.
Step 08 - Add Integration Tests
In order to add integration tests for API project, follow these steps:
1. Right click on Solution > Add > New Project
2. Go to Installed > Visual C# > Test > xUnit Test Project (.NET Core)
3. Set the name for project as WideWorldImporters.API.IntegrationTests
4. Click OK
Add Integration Tests Project
Manage references for WideWorldImporters.API.IntegrationTests project:
Add Reference To Api Project
Now add a reference for WideWorldImporters.API project:
Reference Manager For Integration Tests Project
Once we have created the project, add the following NuGet packages for project:
• Microsoft.AspNetCore.Mvc
• Microsoft.AspNetCore.Mvc.Core
• Microsoft.AspNetCore.Diagnostics
• Microsoft.AspNetCore.TestHost
• Microsoft.Extensions.Configuration.Json
Remove UnitTest1.cs file.
Save changes and build WideWorldImporters.API.IntegrationTests project.
What is the difference between unit tests and integration tests? For unit tests, we simulate all dependencies of the Web API project; for integration tests, we run a process that simulates the execution of the Web API itself, meaning real HTTP requests.
Now we proceed to add code related for integration tests.
For this project, the integration tests will perform HTTP requests, and each HTTP request will perform operations against an existing database in a SQL Server instance. We'll work with a local instance of SQL Server; this can change according to your working environment, i.e., the scope you define for the integration tests.
Code for TestFixture.cs file:
C#
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Reflection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.ApplicationParts;
using Microsoft.AspNetCore.Mvc.Controllers;
using Microsoft.AspNetCore.Mvc.ViewComponents;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
namespace WideWorldImporters.API.IntegrationTests
{
public class TestFixture<TStartup> : IDisposable
{
public static string GetProjectPath(string projectRelativePath, Assembly startupAssembly)
{
var projectName = startupAssembly.GetName().Name;
var applicationBasePath = AppContext.BaseDirectory;
var directoryInfo = new DirectoryInfo(applicationBasePath);
do
{
directoryInfo = directoryInfo.Parent;
var projectDirectoryInfo = new DirectoryInfo(Path.Combine(directoryInfo.FullName, projectRelativePath));
if (projectDirectoryInfo.Exists)
if (new FileInfo(Path.Combine(projectDirectoryInfo.FullName, projectName, $"{projectName}.csproj")).Exists)
return Path.Combine(projectDirectoryInfo.FullName, projectName);
}
while (directoryInfo.Parent != null);
throw new Exception($"Project root could not be located using the application root {applicationBasePath}.");
}
private TestServer Server;
public TestFixture()
: this(Path.Combine(""))
{
}
public HttpClient Client { get; }
public void Dispose()
{
Client.Dispose();
Server.Dispose();
}
protected virtual void InitializeServices(IServiceCollection services)
{
var startupAssembly = typeof(TStartup).GetTypeInfo().Assembly;
var manager = new ApplicationPartManager
{
ApplicationParts =
{
new AssemblyPart(startupAssembly)
},
FeatureProviders =
{
new ControllerFeatureProvider(),
new ViewComponentFeatureProvider()
}
};
services.AddSingleton(manager);
}
protected TestFixture(string relativeTargetProjectParentDir)
{
var startupAssembly = typeof(TStartup).GetTypeInfo().Assembly;
var contentRoot = GetProjectPath(relativeTargetProjectParentDir, startupAssembly);
var configurationBuilder = new ConfigurationBuilder()
.SetBasePath(contentRoot)
.AddJsonFile("appsettings.json");
var webHostBuilder = new WebHostBuilder()
.UseContentRoot(contentRoot)
.ConfigureServices(InitializeServices)
.UseConfiguration(configurationBuilder.Build())
.UseEnvironment("Development")
.UseStartup(typeof(TStartup));
// Create instance of test server
Server = new TestServer(webHostBuilder);
// Add configuration for client
Client = Server.CreateClient();
Client.BaseAddress = new Uri("http://localhost:5001");
Client.DefaultRequestHeaders.Accept.Clear();
Client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
}
}
}
Code for ContentHelper.cs file:
C#
using System.Net.Http;
using System.Text;
using Newtonsoft.Json;
namespace WideWorldImporters.API.IntegrationTests
{
public static class ContentHelper
{
public static StringContent GetStringContent(object obj)
=> new StringContent(JsonConvert.SerializeObject(obj), Encoding.Default, "application/json");
}
}
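For instance, the body of a POST request is serialized like this (the anonymous object below is only an illustration):
C#
var body = new
{
    StockItemName = "Sample item",
    SupplierID = 12
};
// Produces StringContent with content type "application/json"
var content = ContentHelper.GetStringContent(body);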
Code for WarehouseTests.cs file:
C#
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
using WideWorldImporters.API.Models;
using Xunit;
namespace WideWorldImporters.API.IntegrationTests
{
public class WarehouseTests : IClassFixture<TestFixture<Startup>>
{
private HttpClient Client;
public WarehouseTests(TestFixture<Startup> fixture)
{
Client = fixture.Client;
}
[Fact]
public async Task TestGetStockItemsAsync()
{
// Arrange
var request = new
{
Url = "/api/v1/Warehouse/StockItem"
};
// Act
var response = await Client.GetAsync(request.Url);
// Assert
response.EnsureSuccessStatusCode();
}
[Fact]
public async Task TestGetStockItemAsync()
{
// Arrange
var request = new
{
Url = "/api/v1/Warehouse/StockItem/1"
};
// Act
var response = await Client.GetAsync(request.Url);
// Assert
response.EnsureSuccessStatusCode();
}
[Fact]
public async Task TestPostStockItemAsync()
{
// Arrange
var request = new
{
Url = "/api/v1/Warehouse/StockItem",
Body = new
{
StockItemName = string.Format("USB anime flash drive - Vegeta {0}", Guid.NewGuid()),
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 15.000m,
UnitPrice = 32.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"Japan\", \"Tags\": [\"32GB\",\"USB Powered\"] }",
Tags = "[\"32GB\",\"USB Powered\"]",
SearchDetails = "USB anime flash drive - Vegeta",
LastEditedBy = 1,
ValidFrom = DateTime.Now,
ValidTo = DateTime.Now.AddYears(5)
}
};
// Act
var response = await Client.PostAsync(request.Url, ContentHelper.GetStringContent(request.Body));
var value = await response.Content.ReadAsStringAsync();
// Assert
response.EnsureSuccessStatusCode();
}
[Fact]
public async Task TestPutStockItemAsync()
{
// Arrange
var request = new
{
Url = "/api/v1/Warehouse/StockItem/1",
Body = new
{
StockItemName = string.Format("USB anime flash drive - Vegeta {0}", Guid.NewGuid()),
SupplierID = 12,
Color = 3,
UnitPrice = 39.00m
}
};
// Act
var response = await Client.PutAsync(request.Url, ContentHelper.GetStringContent(request.Body));
// Assert
response.EnsureSuccessStatusCode();
}
[Fact]
public async Task TestDeleteStockItemAsync()
{
// Arrange
var postRequest = new
{
Url = "/api/v1/Warehouse/StockItem",
Body = new
{
StockItemName = string.Format("Product to delete {0}", Guid.NewGuid()),
SupplierID = 12,
UnitPackageID = 7,
OuterPackageID = 7,
LeadTimeDays = 14,
QuantityPerOuter = 1,
IsChillerStock = false,
TaxRate = 10.000m,
UnitPrice = 10.00m,
RecommendedRetailPrice = 47.84m,
TypicalWeightPerUnit = 0.050m,
CustomFields = "{ \"CountryOfManufacture\": \"USA\", \"Tags\": [\"Sample\"] }",
Tags = "[\"Sample\"]",
SearchDetails = "Product to delete",
LastEditedBy = 1,
ValidFrom = DateTime.Now,
ValidTo = DateTime.Now.AddYears(5)
}
};
// Act
var postResponse = await Client.PostAsync(postRequest.Url, ContentHelper.GetStringContent(postRequest.Body));
var jsonFromPostResponse = await postResponse.Content.ReadAsStringAsync();
var singleResponse = JsonConvert.DeserializeObject<SingleResponse<StockItem>>(jsonFromPostResponse);
var deleteResponse = await Client.DeleteAsync(string.Format("/api/v1/Warehouse/StockItem/{0}", singleResponse.Model.StockItemID));
// Assert
postResponse.EnsureSuccessStatusCode();
Assert.False(singleResponse.DidError);
deleteResponse.EnsureSuccessStatusCode();
}
}
}
As we can see, WarehouseTests contains all of the integration tests for the Web API. These are the methods:
Method Description
TestGetStockItemsAsync Retrieves the stock items
TestGetStockItemAsync Retrieves an existing stock item by ID
TestPostStockItemAsync Creates a new stock item
TestPutStockItemAsync Updates an existing stock item
TestDeleteStockItemAsync Deletes an existing stock item
How Do Integration Tests Work?
The TestFixture<TStartup> class provides an HTTP client for the Web API; it uses the project's Startup class as the reference to apply the configuration to the client.
The WarehouseTests class contains all of the methods that send HTTP requests to the Web API; the base address for the HTTP client is http://localhost:5001, as set in TestFixture.
The ContentHelper class contains a helper method to create StringContent from a request model, serialized as JSON; this applies to POST and PUT requests.
The process for integration tests is:
1. The HTTP client is created in the class constructor
2. Define the request: url and request model (if applies)
3. Send the request
4. Get the value from response
5. Ensure response has success status
Running Integration Tests
Save all changes and build WideWorldImporters.API.IntegrationTests project, test explorer will show all tests in project:
Test Explorer For Integration Tests
Keep in mind: to execute the integration tests, you need a running instance of SQL Server; the connection string in the appsettings.json file will be used to establish the connection to SQL Server.
Now run all integration tests, the test explorer looks like the following image:
Execution Of Integration Tests
If you get any error executing the integration tests, check the error message, review the code and repeat the process.
Code Challenge
At this point, you have the skills to extend the API. Take this as a challenge and add the following tests (unit and integration):
Test Description
Get stock items by parameters Make a request for stock items searching by lastEditedBy, colorID, outerPackageID, supplierID, unitPackageID parameters.
Get a non existing stock item Get a stock item using a non existing ID and check Web API returns NotFound (404) status.
Add a stock item with existing name Add a stock item with an existing name and check Web API returns BadRequest (400) status.
Add a stock item without required fields Add a stock item without required fields and check Web API returns BadRequest (400) status.
Update a non existing stock item Update a stock item using a non existing ID and check Web API returns NotFound (404) status.
Update an existing stock item without required fields Update an existing stock item without required fields and check Web API returns BadRequest (400) status.
Delete a non existing stock item Delete a stock item using a non existing ID and check Web API returns NotFound (404) status.
Delete a stock item with orders Delete a stock item that has existing orders and check that the Web API rejects the operation (e.g., returns BadRequest (400) status).
Follow the convention used in unit and integration tests to complete this challenge.
Good luck!
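As a starting point, here is a sketch of the "Get a non existing stock item" case as an integration test, following the conventions used in WarehouseTests (the ID 0 is assumed not to exist in the database, and the Web API is assumed to answer with NotFound):
C#
[Fact]
public async Task TestGetNonExistingStockItemAsync()
{
    // Arrange: an ID that is assumed not to exist in the database
    var request = new
    {
        Url = "/api/v1/Warehouse/StockItem/0"
    };
    // Act
    var response = await Client.GetAsync(request.Url);
    // Assert: expect NotFound (404) status
    Assert.Equal(System.Net.HttpStatusCode.NotFound, response.StatusCode);
}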
Code Improvements
• Explain how to use command line for .NET Core
• Add Security (Authentication and authorization) for API
• Split models definitions in files
• Refactor models outside of the Web API project
• Anything else? Let me know in the comments :)
Points of Interest
• In this article, we're working with Entity Framework Core.
• Entity Framework Core has in memory database.
• We can adjust all repositories to expose specific operations, in some cases, we don't want to have GetAll, Add, Update or Delete operations.
• Unit tests perform testing for Assemblies.
• Integration tests perform testing for Http requests.
• All tests have been created with xUnit framework.
History
• October 22nd, 2018: Initial version
• November 22nd, 2018: Removing Repository pattern
• December 11th, 2018: Addition of Help Page for Web API
License
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
About the Author
HHerzl
Software Developer
El Salvador
CatFactory Creator.
Full Stack Developer with Experience in C#, Entity Framework Core, ASP.NET Core and Angular.
COMBINATORIAL_BLAS 1.3
ParFriends.h
1 #ifndef _PAR_FRIENDS_H_
2 #define _PAR_FRIENDS_H_
3
4 #include "mpi.h"
5 #include <iostream>
6 #include <cstdarg>
7 #include "SpParMat.h"
8 #include "SpParHelper.h"
9 #include "MPIType.h"
10 #include "Friends.h"
11 #include "OptBuf.h"
12
13
14 using namespace std;
15
16 template <class IT, class NT, class DER>
17 class SpParMat;
18
19 /*************************************************************************************************/
20 /**************************** FRIEND FUNCTIONS FOR PARALLEL CLASSES ******************************/
21 /*************************************************************************************************/
22
23
27 template <typename IT, typename NT>
29 {
30 if(vecs.size() < 1)
31 {
32 SpParHelper::Print("Warning: Nothing to concatenate, returning empty ");
33 return FullyDistVec<IT,NT>();
34 }
35 else if (vecs.size() < 2)
36 {
37 return vecs[0]; // single vector: return it
38
39 }
40 else
41 {
42 typename vector< FullyDistVec<IT,NT> >::iterator it = vecs.begin();
43 shared_ptr<CommGrid> commGridPtr = it->getcommgrid();
44 MPI_Comm World = commGridPtr->GetWorld();
45
46 IT nglen = it->TotalLength(); // new global length
47 IT cumloclen = it->MyLocLength(); // existing cumulative local lengths
48 ++it;
49 for(; it != vecs.end(); ++it)
50 {
51 if(*(commGridPtr) != *(it->getcommgrid()))
52 {
53 SpParHelper::Print("Grids are not comparable for Concatenate\n");
54 MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
55 }
56 nglen += it->TotalLength();
57 cumloclen += it->MyLocLength();
58 }
59 FullyDistVec<IT,NT> ConCat (commGridPtr, nglen, NT());
60 int nprocs = commGridPtr->GetSize();
61
62 vector< vector< NT > > data(nprocs);
63 vector< vector< IT > > inds(nprocs);
64 IT gloffset = 0;
65 for(it = vecs.begin(); it != vecs.end(); ++it)
66 {
67 IT loclen = it->LocArrSize();
68 for(IT i=0; i < loclen; ++i)
69 {
70 IT locind;
71 IT loffset = it->LengthUntil();
72 int owner = ConCat.Owner(gloffset+loffset+i, locind);
73 data[owner].push_back(it->arr[i]);
74 inds[owner].push_back(locind);
75 }
76 gloffset += it->TotalLength();
77 }
78
79 int * sendcnt = new int[nprocs];
80 int * sdispls = new int[nprocs];
81 for(int i=0; i<nprocs; ++i)
82 sendcnt[i] = (int) data[i].size();
83
84 int * rdispls = new int[nprocs];
85 int * recvcnt = new int[nprocs];
86 MPI_Alltoall(sendcnt, 1, MPI_INT, recvcnt, 1, MPI_INT, World); // share the request counts
87 sdispls[0] = 0;
88 rdispls[0] = 0;
89 for(int i=0; i<nprocs-1; ++i)
90 {
91 sdispls[i+1] = sdispls[i] + sendcnt[i];
92 rdispls[i+1] = rdispls[i] + recvcnt[i];
93 }
94 IT totrecv = accumulate(recvcnt,recvcnt+nprocs,static_cast<IT>(0));
95 NT * senddatabuf = new NT[cumloclen];
96 for(int i=0; i<nprocs; ++i)
97 {
98 copy(data[i].begin(), data[i].end(), senddatabuf+sdispls[i]);
99 vector<NT>().swap(data[i]); // delete data vectors
100 }
101 NT * recvdatabuf = new NT[totrecv];
102 MPI_Alltoallv(senddatabuf, sendcnt, sdispls, MPIType<NT>(), recvdatabuf, recvcnt, rdispls, MPIType<NT>(), World); // send data
103 delete [] senddatabuf;
104
105 IT * sendindsbuf = new IT[cumloclen];
106 for(int i=0; i<nprocs; ++i)
107 {
108 copy(inds[i].begin(), inds[i].end(), sendindsbuf+sdispls[i]);
109 vector<IT>().swap(inds[i]); // delete inds vectors
110 }
111 IT * recvindsbuf = new IT[totrecv];
112 MPI_Alltoallv(sendindsbuf, sendcnt, sdispls, MPIType<IT>(), recvindsbuf, recvcnt, rdispls, MPIType<IT>(), World); // send new inds
113 DeleteAll(sendindsbuf, sendcnt, sdispls);
114
115 for(int i=0; i<nprocs; ++i)
116 {
117 for(int j = rdispls[i]; j < rdispls[i] + recvcnt[i]; ++j)
118 {
119 ConCat.arr[recvindsbuf[j]] = recvdatabuf[j];
120 }
121 }
122 DeleteAll(recvindsbuf, recvcnt, rdispls);
123 return ConCat;
124 }
125 }
126
127 template <typename MATRIXA, typename MATRIXB>
128 bool CheckSpGEMMCompliance(const MATRIXA & A, const MATRIXB & B)
129 {
130 if(A.getncol() != B.getnrow())
131 {
132 ostringstream outs;
133 outs << "Can not multiply, dimensions does not match"<< endl;
134 outs << A.getncol() << " != " << B.getnrow() << endl;
135 SpParHelper::Print(outs.str());
136 MPI_Abort(MPI_COMM_WORLD, DIMMISMATCH);
137 return false;
138 }
139 if((void*) &A == (void*) &B)
140 {
141 ostringstream outs;
142 outs << "Can not multiply, inputs alias (make a temporary copy of one of them first)"<< endl;
143 SpParHelper::Print(outs.str());
144 MPI_Abort(MPI_COMM_WORLD, MATRIXALIAS);
145 return false;
146 }
147 return true;
148 }
149
150
159 template <typename SR, typename NUO, typename UDERO, typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB>
161 (SpParMat<IU,NU1,UDERA> & A, SpParMat<IU,NU2,UDERB> & B, bool clearA = false, bool clearB = false )
162
163 {
164 if(!CheckSpGEMMCompliance(A,B) )
165 {
166 return SpParMat< IU,NUO,UDERO >();
167 }
168
169 int stages, dummy; // last two parameters of ProductGrid are ignored for Synch multiplication
170 shared_ptr<CommGrid> GridC = ProductGrid((A.commGrid).get(), (B.commGrid).get(), stages, dummy, dummy);
171 IU C_m = A.spSeq->getnrow();
172 IU C_n = B.spSeq->getncol();
173
174 UDERA * A1seq = new UDERA();
175 UDERA * A2seq = new UDERA();
176 UDERB * B1seq = new UDERB();
177 UDERB * B2seq = new UDERB();
178 (A.spSeq)->Split( *A1seq, *A2seq);
179 const_cast< UDERB* >(B.spSeq)->Transpose();
180 (B.spSeq)->Split( *B1seq, *B2seq);
181 MPI_Barrier(GridC->GetWorld());
182
183 IU ** ARecvSizes = SpHelper::allocate2D<IU>(UDERA::esscount, stages);
184 IU ** BRecvSizes = SpHelper::allocate2D<IU>(UDERB::esscount, stages);
185
186 SpParHelper::GetSetSizes( *A1seq, ARecvSizes, (A.commGrid)->GetRowWorld());
187 SpParHelper::GetSetSizes( *B1seq, BRecvSizes, (B.commGrid)->GetColWorld());
188
189 // Remotely fetched matrices are stored as pointers
190 UDERA * ARecv;
191 UDERB * BRecv;
192 vector< SpTuples<IU,NUO> *> tomerge;
193
194 int Aself = (A.commGrid)->GetRankInProcRow();
195 int Bself = (B.commGrid)->GetRankInProcCol();
196
197 for(int i = 0; i < stages; ++i)
198 {
199 vector<IU> ess;
200 if(i == Aself)
201 {
202 ARecv = A1seq; // shallow-copy
203 }
204 else
205 {
206 ess.resize(UDERA::esscount);
207 for(int j=0; j< UDERA::esscount; ++j)
208 {
209 ess[j] = ARecvSizes[j][i]; // essentials of the ith matrix in this row
210 }
211 ARecv = new UDERA(); // first, create the object
212 }
213 SpParHelper::BCastMatrix(GridC->GetRowWorld(), *ARecv, ess, i); // then, receive its elements
214 ess.clear();
215 if(i == Bself)
216 {
217 BRecv = B1seq; // shallow-copy
218 }
219 else
220 {
221 ess.resize(UDERB::esscount);
222 for(int j=0; j< UDERB::esscount; ++j)
223 {
224 ess[j] = BRecvSizes[j][i];
225 }
226 BRecv = new UDERB();
227 }
228 SpParHelper::BCastMatrix(GridC->GetColWorld(), *BRecv, ess, i); // then, receive its elements
229 SpTuples<IU,NUO> * C_cont = MultiplyReturnTuples<SR, NUO>
230 (*ARecv, *BRecv, // parameters themselves
231 false, true, // transpose information (B is transposed)
232 i != Aself, // 'delete A' condition
233 i != Bself); // 'delete B' condition
234 if(!C_cont->isZero())
235 tomerge.push_back(C_cont);
236 else
237 delete C_cont;
238 }
239 if(clearA) delete A1seq;
240 if(clearB) delete B1seq;
241
242 // Set the new dimensions
243 SpParHelper::GetSetSizes( *A2seq, ARecvSizes, (A.commGrid)->GetRowWorld());
244 SpParHelper::GetSetSizes( *B2seq, BRecvSizes, (B.commGrid)->GetColWorld());
245
246 // Start the second round
247 for(int i = 0; i < stages; ++i)
248 {
249 vector<IU> ess;
250 if(i == Aself)
251 {
252 ARecv = A2seq; // shallow-copy
253 }
254 else
255 {
256 ess.resize(UDERA::esscount);
257 for(int j=0; j< UDERA::esscount; ++j)
258 {
259 ess[j] = ARecvSizes[j][i]; // essentials of the ith matrix in this row
260 }
261 ARecv = new UDERA(); // first, create the object
262 }
263
264 SpParHelper::BCastMatrix(GridC->GetRowWorld(), *ARecv, ess, i); // then, receive its elements
265 ess.clear();
266
267 if(i == Bself)
268 {
269 BRecv = B2seq; // shallow-copy
270 }
271 else
272 {
273 ess.resize(UDERB::esscount);
274 for(int j=0; j< UDERB::esscount; ++j)
275 {
276 ess[j] = BRecvSizes[j][i];
277 }
278 BRecv = new UDERB();
279 }
280 SpParHelper::BCastMatrix(GridC->GetColWorld(), *BRecv, ess, i); // then, receive its elements
281
282 SpTuples<IU,NUO> * C_cont = MultiplyReturnTuples<SR, NUO>
283 (*ARecv, *BRecv, // parameters themselves
284 false, true, // transpose information (B is transposed)
285 i != Aself, // 'delete A' condition
286 i != Bself); // 'delete B' condition
287 if(!C_cont->isZero())
288 tomerge.push_back(C_cont);
289 else
290 delete C_cont;
291 }
292 SpHelper::deallocate2D(ARecvSizes, UDERA::esscount);
293 SpHelper::deallocate2D(BRecvSizes, UDERB::esscount);
294 if(clearA)
295 {
296 delete A2seq;
297 delete A.spSeq;
298 A.spSeq = NULL;
299 }
300 else
301 {
302 (A.spSeq)->Merge(*A1seq, *A2seq);
303 delete A1seq;
304 delete A2seq;
305 }
306 if(clearB)
307 {
308 delete B2seq;
309 delete B.spSeq;
310 B.spSeq = NULL;
311 }
312 else
313 {
314 (B.spSeq)->Merge(*B1seq, *B2seq);
315 delete B1seq;
316 delete B2seq;
317 const_cast< UDERB* >(B.spSeq)->Transpose(); // transpose back to original
318 }
319
320 UDERO * C = new UDERO(MergeAll<SR>(tomerge, C_m, C_n,true), false);
321 // First get the result in SpTuples, then convert to UDER
322 return SpParMat<IU,NUO,UDERO> (C, GridC); // return the result object
323 }
324
325
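// Usage sketch (illustrative, not part of the library): assuming the double-buffered
// SUMMA SpGEMM above is named Mult_AnXBn_DoubleBuff, a caller on a square process grid
// selects the scalar operations through an explicit semiring template argument, e.g.:
//
//     typedef PlusTimesSRing<double, double> PTDD;   // conventional (+,*) semiring
//     SpParMat< int64_t, double, SpDCCols<int64_t,double> > A(fullWorld), B(fullWorld);
//     // ... load A and B so that A.getncol() == B.getnrow() ...
//     SpParMat< int64_t, double, SpDCCols<int64_t,double> > C =
//         Mult_AnXBn_DoubleBuff< PTDD, double, SpDCCols<int64_t,double> >(A, B);
//
// PlusTimesSRing and SpDCCols are standard CombBLAS types; the matrix names and the
// fullWorld grid are placeholders. Passing clearA/clearB as true frees the inputs as
// soon as their halves are no longer needed, trading the originals for peak-memory savings.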
template <typename SR, typename NUO, typename UDERO, typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB>
SpParMat<IU,NUO,UDERO> Mult_AnXBn_Synch
        (SpParMat<IU,NU1,UDERA> & A, SpParMat<IU,NU2,UDERB> & B, bool clearA = false, bool clearB = false )

{
    if(!CheckSpGEMMCompliance(A,B) )
    {
        return SpParMat< IU,NUO,UDERO >();
    }
    int stages, dummy;  // last two parameters of ProductGrid are ignored for Synch multiplication
    shared_ptr<CommGrid> GridC = ProductGrid((A.commGrid).get(), (B.commGrid).get(), stages, dummy, dummy);
    IU C_m = A.spSeq->getnrow();
    IU C_n = B.spSeq->getncol();

    const_cast< UDERB* >(B.spSeq)->Transpose();
    MPI_Barrier(GridC->GetWorld());

    IU ** ARecvSizes = SpHelper::allocate2D<IU>(UDERA::esscount, stages);
    IU ** BRecvSizes = SpHelper::allocate2D<IU>(UDERB::esscount, stages);

    SpParHelper::GetSetSizes( *(A.spSeq), ARecvSizes, (A.commGrid)->GetRowWorld());
    SpParHelper::GetSetSizes( *(B.spSeq), BRecvSizes, (B.commGrid)->GetColWorld());

    // Remotely fetched matrices are stored as pointers
    UDERA * ARecv;
    UDERB * BRecv;
    vector< SpTuples<IU,NUO> *> tomerge;

    int Aself = (A.commGrid)->GetRankInProcRow();
    int Bself = (B.commGrid)->GetRankInProcCol();

    for(int i = 0; i < stages; ++i)
    {
        vector<IU> ess;
        if(i == Aself)
        {
            ARecv = A.spSeq;    // shallow-copy
        }
        else
        {
            ess.resize(UDERA::esscount);
            for(int j=0; j< UDERA::esscount; ++j)
            {
                ess[j] = ARecvSizes[j][i];  // essentials of the ith matrix in this row
            }
            ARecv = new UDERA();    // first, create the object
        }

        SpParHelper::BCastMatrix(GridC->GetRowWorld(), *ARecv, ess, i); // then, receive its elements
        ess.clear();

        if(i == Bself)
        {
            BRecv = B.spSeq;    // shallow-copy
        }
        else
        {
            ess.resize(UDERB::esscount);
            for(int j=0; j< UDERB::esscount; ++j)
            {
                ess[j] = BRecvSizes[j][i];
            }
            BRecv = new UDERB();
        }
        SpParHelper::BCastMatrix(GridC->GetColWorld(), *BRecv, ess, i); // then, receive its elements

        SpTuples<IU,NUO> * C_cont = MultiplyReturnTuples<SR, NUO>
                        (*ARecv, *BRecv,    // parameters themselves
                        false, true,        // transpose information (B is transposed)
                        i != Aself,         // 'delete A' condition
                        i != Bself);        // 'delete B' condition

        if(!C_cont->isZero())
            tomerge.push_back(C_cont);
        else
            delete C_cont;  // avoid leaking empty per-stage results

#ifndef NDEBUG
        ostringstream outs;
        outs << i << "th SUMMA iteration" << endl;
        SpParHelper::Print(outs.str());
#endif
    }
    if(clearA && A.spSeq != NULL)
    {
        delete A.spSeq;
        A.spSeq = NULL;
    }
    if(clearB && B.spSeq != NULL)
    {
        delete B.spSeq;
        B.spSeq = NULL;
    }

    SpHelper::deallocate2D(ARecvSizes, UDERA::esscount);
    SpHelper::deallocate2D(BRecvSizes, UDERB::esscount);

    UDERO * C = new UDERO(MergeAll<SR>(tomerge, C_m, C_n, true), false);
    // First get the result in SpTuples, then convert to UDER
    // the last parameter to MergeAll deletes tomerge arrays

    if(!clearB)
        const_cast< UDERB* >(B.spSeq)->Transpose(); // transpose back to original

    return SpParMat<IU,NUO,UDERO> (C, GridC);   // return the result object
}

template <typename MATRIX, typename VECTOR>
void CheckSpMVCompliance(const MATRIX & A, const VECTOR & x)
{
    if(A.getncol() != x.TotalLength())
    {
        ostringstream outs;
        outs << "Cannot multiply, dimensions do not match" << endl;
        outs << A.getncol() << " != " << x.TotalLength() << endl;
        SpParHelper::Print(outs.str());
        MPI_Abort(MPI_COMM_WORLD, DIMMISMATCH);
    }
    if(! ( *(A.getcommgrid()) == *(x.getcommgrid())) )
    {
        cout << "Grids are not comparable for SpMV" << endl;
        MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
    }
}


template <typename SR, typename IU, typename NUM, typename UDER>
FullyDistSpVec<IU,typename promote_trait<NUM,IU>::T_promote> SpMV
    (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,IU> & x, bool indexisvalue, OptBuf<int32_t, typename promote_trait<NUM,IU>::T_promote > & optbuf);

template <typename SR, typename IU, typename NUM, typename UDER>
FullyDistSpVec<IU,typename promote_trait<NUM,IU>::T_promote> SpMV
    (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,IU> & x, bool indexisvalue)
{
    typedef typename promote_trait<NUM,IU>::T_promote T_promote;
    OptBuf<int32_t, T_promote > optbuf;
    return SpMV<SR>(A, x, indexisvalue, optbuf);
}

template<typename IU, typename NV>
void TransposeVector(MPI_Comm & World, const FullyDistSpVec<IU,NV> & x, int32_t & trxlocnz, IU & lenuntil, int32_t * & trxinds, NV * & trxnums, bool indexisvalue)
{
    int32_t xlocnz = (int32_t) x.getlocnnz();
    int32_t roffst = (int32_t) x.RowLenUntil();  // since trxinds is int32_t
    int32_t roffset;
    IU luntil = x.LengthUntil();
    int diagneigh = x.commGrid->GetComplementRank();

    MPI_Status status;
    MPI_Sendrecv(&roffst, 1, MPIType<int32_t>(), diagneigh, TROST, &roffset, 1, MPIType<int32_t>(), diagneigh, TROST, World, &status);
    MPI_Sendrecv(&xlocnz, 1, MPIType<int32_t>(), diagneigh, TRNNZ, &trxlocnz, 1, MPIType<int32_t>(), diagneigh, TRNNZ, World, &status);
    MPI_Sendrecv(&luntil, 1, MPIType<IU>(), diagneigh, TRLUT, &lenuntil, 1, MPIType<IU>(), diagneigh, TRLUT, World, &status);

    // ABAB: Important observation is that local indices (given by x.ind) are 32-bit addressable
    // Copy them to 32-bit integers and transfer those to save 50% of off-node bandwidth
    trxinds = new int32_t[trxlocnz];
    int32_t * temp_xind = new int32_t[xlocnz];
    for(int i=0; i< xlocnz; ++i)    temp_xind[i] = (int32_t) x.ind[i];
    MPI_Sendrecv(temp_xind, xlocnz, MPIType<int32_t>(), diagneigh, TRI, trxinds, trxlocnz, MPIType<int32_t>(), diagneigh, TRI, World, &status);
    delete [] temp_xind;
    if(!indexisvalue)
    {
        trxnums = new NV[trxlocnz];
        MPI_Sendrecv(const_cast<NV*>(SpHelper::p2a(x.num)), xlocnz, MPIType<NV>(), diagneigh, TRX, trxnums, trxlocnz, MPIType<NV>(), diagneigh, TRX, World, &status);
    }
    transform(trxinds, trxinds+trxlocnz, trxinds, bind2nd(plus<int32_t>(), roffset));   // fullydist indexing (p pieces) -> matrix indexing (sqrt(p) pieces)
}

template<typename IU, typename NV>
void AllGatherVector(MPI_Comm & ColWorld, int trxlocnz, IU lenuntil, int32_t * & trxinds, NV * & trxnums,
                     int32_t * & indacc, NV * & numacc, int & accnz, bool indexisvalue)
{
    int colneighs, colrank;
    MPI_Comm_size(ColWorld, &colneighs);
    MPI_Comm_rank(ColWorld, &colrank);
    int * colnz = new int[colneighs];
    colnz[colrank] = trxlocnz;
    MPI_Allgather(MPI_IN_PLACE, 1, MPI_INT, colnz, 1, MPI_INT, ColWorld);
    int * dpls = new int[colneighs]();  // displacements (zero initialized pid)
    std::partial_sum(colnz, colnz+colneighs-1, dpls+1);
    accnz = std::accumulate(colnz, colnz+colneighs, 0);
    indacc = new int32_t[accnz];
    numacc = new NV[accnz];

    // ABAB: Future issues here, colnz is of type int (MPI limitation)
    // What if the aggregate vector size along the processor row/column is not 32-bit addressable?
    // This will happen when n/sqrt(p) > 2^31
    // Currently we can solve a small problem (scale 32) with 4096 processors
    // For a medium problem (scale 35), we'll need 32K processors which gives sqrt(p) ~ 180
    // 2^35 / 180 ~ 2^29 / 3, which is not an issue!

#ifdef TIMING
    double t0=MPI_Wtime();
#endif
    MPI_Allgatherv(trxinds, trxlocnz, MPIType<int32_t>(), indacc, colnz, dpls, MPIType<int32_t>(), ColWorld);

    delete [] trxinds;
    if(indexisvalue)
    {
        IU lenuntilcol;
        if(colrank == 0)    lenuntilcol = lenuntil;
        MPI_Bcast(&lenuntilcol, 1, MPIType<IU>(), 0, ColWorld);
        for(int i=0; i< accnz; ++i) // fill numerical values from indices
        {
            numacc[i] = indacc[i] + lenuntilcol;
        }
    }
    else
    {
        MPI_Allgatherv(trxnums, trxlocnz, MPIType<NV>(), numacc, colnz, dpls, MPIType<NV>(), ColWorld);
        delete [] trxnums;
    }
#ifdef TIMING
    double t1=MPI_Wtime();
    cblas_allgathertime += (t1-t0);
#endif
    DeleteAll(colnz, dpls);
}



template<typename SR, typename IVT, typename OVT, typename IU, typename NUM, typename UDER>
void LocalSpMV(const SpParMat<IU,NUM,UDER> & A, int rowneighs, OptBuf<int32_t, OVT > & optbuf, int32_t * & indacc, IVT * & numacc,
               int32_t * & sendindbuf, OVT * & sendnumbuf, int * & sdispls, int * sendcnt, int accnz, bool indexisvalue)
{
    if(optbuf.totmax > 0)   // graph500 optimization enabled
    {
        if(A.spSeq->getnsplit() > 0)
        {
            // optbuf.{inds/nums/dspls} and sendcnt are all pre-allocated and only filled by dcsc_gespmv_threaded
            dcsc_gespmv_threaded_setbuffers<SR> (*(A.spSeq), indacc, numacc, accnz, optbuf.inds, optbuf.nums, sendcnt, optbuf.dspls, rowneighs);
        }
        else
        {
            dcsc_gespmv<SR> (*(A.spSeq), indacc, numacc, accnz, optbuf.inds, optbuf.nums, sendcnt, optbuf.dspls, rowneighs, indexisvalue);
        }
        DeleteAll(indacc, numacc);
    }
    else
    {
        if(A.spSeq->getnsplit() > 0)
        {
            // sendindbuf/sendnumbuf/sdispls are all allocated and filled by dcsc_gespmv_threaded
            int totalsent = dcsc_gespmv_threaded<SR> (*(A.spSeq), indacc, numacc, accnz, sendindbuf, sendnumbuf, sdispls, rowneighs);

            DeleteAll(indacc, numacc);
            for(int i=0; i<rowneighs-1; ++i)
                sendcnt[i] = sdispls[i+1] - sdispls[i];
            sendcnt[rowneighs-1] = totalsent - sdispls[rowneighs-1];
        }
        else
        {
            // serial SpMV with sparse vector
            vector< int32_t > indy;
            vector< OVT > numy;

            dcsc_gespmv<SR>(*(A.spSeq), indacc, numacc, accnz, indy, numy); // actual multiplication
            DeleteAll(indacc, numacc);

            int32_t bufsize = indy.size();  // as compact as possible
            sendindbuf = new int32_t[bufsize];
            sendnumbuf = new OVT[bufsize];
            int32_t perproc = A.getlocalrows() / rowneighs;

            int k = 0;  // index to buffer
            for(int i=0; i<rowneighs; ++i)
            {
                int32_t end_this = (i==rowneighs-1) ? A.getlocalrows() : (i+1)*perproc;
                while(k < bufsize && indy[k] < end_this)
                {
                    sendindbuf[k] = indy[k] - i*perproc;
                    sendnumbuf[k] = numy[k];
                    ++sendcnt[i];
                    ++k;
                }
            }
            sdispls = new int[rowneighs]();
            partial_sum(sendcnt, sendcnt+rowneighs-1, sdispls+1);
        }
    }
}

template <typename SR, typename IU, typename OVT>
void MergeContributions(FullyDistSpVec<IU,OVT> & y, int * & recvcnt, int * & rdispls, int32_t * & recvindbuf, OVT * & recvnumbuf, int rowneighs)
{
    // free memory of y, in case it was aliased
    vector<IU>().swap(y.ind);
    vector<OVT>().swap(y.num);

#ifndef HEAPMERGE
    IU ysize = y.MyLocLength();  // my local length is only O(n/p)
    bool * isthere = new bool[ysize];
    vector< pair<IU,OVT> > ts_pairs;
    fill_n(isthere, ysize, false);

    // We don't need to keep a "merger" because the minimum will always come from the processor
    // with the smallest rank; so a linear sweep over the received buffer is enough
    for(int i=0; i<rowneighs; ++i)
    {
        for(int j=0; j< recvcnt[i]; ++j)
        {
            int32_t index = recvindbuf[rdispls[i] + j];
            if(!isthere[index])
                ts_pairs.push_back(make_pair(index, recvnumbuf[rdispls[i] + j]));
        }
    }
    DeleteAll(recvcnt, rdispls);
    DeleteAll(isthere, recvindbuf, recvnumbuf);
    sort(ts_pairs.begin(), ts_pairs.end());
    int nnzy = ts_pairs.size();
    y.ind.resize(nnzy);
    y.num.resize(nnzy);
    for(int i=0; i< nnzy; ++i)
    {
        y.ind[i] = ts_pairs[i].first;
        y.num[i] = ts_pairs[i].second;
    }
#else
    // Alternative 2: Heap-merge
    int32_t hsize = 0;
    int32_t inf = numeric_limits<int32_t>::min();
    int32_t sup = numeric_limits<int32_t>::max();
    KNHeap< int32_t, int32_t > sHeap(sup, inf);
    int * processed = new int[rowneighs]();
    for(int i=0; i<rowneighs; ++i)
    {
        if(recvcnt[i] > 0)
        {
            // key, proc_id
            sHeap.insert(recvindbuf[rdispls[i]], i);
            ++hsize;
        }
    }
    int32_t key, locv;
    if(hsize > 0)
    {
        sHeap.deleteMin(&key, &locv);
        y.ind.push_back( static_cast<IU>(key));
        y.num.push_back(recvnumbuf[rdispls[locv]]); // nothing is processed yet

        if( (++(processed[locv])) < recvcnt[locv] )
            sHeap.insert(recvindbuf[rdispls[locv]+processed[locv]], locv);
        else
            --hsize;
    }
    while(hsize > 0)
    {
        sHeap.deleteMin(&key, &locv);
        IU deref = rdispls[locv] + processed[locv];
        if(y.ind.back() == static_cast<IU>(key))    // y.ind is surely not empty
        {
            y.num.back() = SR::add(y.num.back(), recvnumbuf[deref]);
            // ABAB: Benchmark actually allows us to be non-deterministic in terms of parent selection
            // We can just skip this addition operator (if it's a max/min select)
        }
        else
        {
            y.ind.push_back(static_cast<IU>(key));
            y.num.push_back(recvnumbuf[deref]);
        }

        if( (++(processed[locv])) < recvcnt[locv] )
            sHeap.insert(recvindbuf[rdispls[locv]+processed[locv]], locv);
        else
            --hsize;
    }
    DeleteAll(recvcnt, rdispls, processed);
    DeleteAll(recvindbuf, recvnumbuf);
#endif
}

template <typename SR, typename IVT, typename OVT, typename IU, typename NUM, typename UDER>
void SpMV (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,IVT> & x, FullyDistSpVec<IU,OVT> & y,
           bool indexisvalue, OptBuf<int32_t, OVT > & optbuf)
{
    CheckSpMVCompliance(A,x);
    optbuf.MarkEmpty();

    MPI_Comm World = x.commGrid->GetWorld();
    MPI_Comm ColWorld = x.commGrid->GetColWorld();
    MPI_Comm RowWorld = x.commGrid->GetRowWorld();

    int accnz;
    int32_t trxlocnz;
    IU lenuntil;
    int32_t *trxinds, *indacc;
    IVT *trxnums, *numacc;
    TransposeVector(World, x, trxlocnz, lenuntil, trxinds, trxnums, indexisvalue);
    AllGatherVector(ColWorld, trxlocnz, lenuntil, trxinds, trxnums, indacc, numacc, accnz, indexisvalue);

    int rowneighs;
    MPI_Comm_size(RowWorld, &rowneighs);
    int * sendcnt = new int[rowneighs]();
    int32_t * sendindbuf;
    OVT * sendnumbuf;
    int * sdispls;
    LocalSpMV<SR>(A, rowneighs, optbuf, indacc, numacc, sendindbuf, sendnumbuf, sdispls, sendcnt, accnz, indexisvalue); // indacc/numacc deallocated, sendindbuf/sendnumbuf/sdispls allocated

    int * rdispls = new int[rowneighs];
    int * recvcnt = new int[rowneighs];
    MPI_Alltoall(sendcnt, 1, MPI_INT, recvcnt, 1, MPI_INT, RowWorld);   // share the request counts

    // receive displacements are exact whereas send displacements have slack
    rdispls[0] = 0;
    for(int i=0; i<rowneighs-1; ++i)
    {
        rdispls[i+1] = rdispls[i] + recvcnt[i];
    }
    int totrecv = accumulate(recvcnt, recvcnt+rowneighs, 0);
    int32_t * recvindbuf = new int32_t[totrecv];
    OVT * recvnumbuf = new OVT[totrecv];

#ifdef TIMING
    double t2=MPI_Wtime();
#endif
    if(optbuf.totmax > 0 )  // graph500 optimization enabled
    {
        MPI_Alltoallv(optbuf.inds, sendcnt, optbuf.dspls, MPIType<int32_t>(), recvindbuf, recvcnt, rdispls, MPIType<int32_t>(), RowWorld);
        MPI_Alltoallv(optbuf.nums, sendcnt, optbuf.dspls, MPIType<OVT>(), recvnumbuf, recvcnt, rdispls, MPIType<OVT>(), RowWorld);

        delete [] sendcnt;
    }
    else
    {
        /* ofstream oput;
        x.commGrid->OpenDebugFile("Send", oput);
        oput << "To displacements: "; copy(sdispls, sdispls+rowneighs, ostream_iterator<int>(oput, " ")); oput << endl;
        oput << "To counts: "; copy(sendcnt, sendcnt+rowneighs, ostream_iterator<int>(oput, " ")); oput << endl;
        for(int i=0; i< rowneighs; ++i)
        {
            oput << "To neighbor: " << i << endl;
            copy(sendindbuf+sdispls[i], sendindbuf+sdispls[i]+sendcnt[i], ostream_iterator<int32_t>(oput, " ")); oput << endl;
            copy(sendnumbuf+sdispls[i], sendnumbuf+sdispls[i]+sendcnt[i], ostream_iterator<OVT>(oput, " ")); oput << endl;
        }
        oput.close(); */

        MPI_Alltoallv(sendindbuf, sendcnt, sdispls, MPIType<int32_t>(), recvindbuf, recvcnt, rdispls, MPIType<int32_t>(), RowWorld);
        MPI_Alltoallv(sendnumbuf, sendcnt, sdispls, MPIType<OVT>(), recvnumbuf, recvcnt, rdispls, MPIType<OVT>(), RowWorld);

        DeleteAll(sendindbuf, sendnumbuf);
        DeleteAll(sendcnt, sdispls);
    }
#ifdef TIMING
    double t3=MPI_Wtime();
    cblas_alltoalltime += (t3-t2);
#endif

    // ofstream output;
    // A.commGrid->OpenDebugFile("Recv", output);
    // copy(recvindbuf, recvindbuf+totrecv, ostream_iterator<IU>(output," ")); output << endl;
    // output.close();

    MergeContributions<SR>(y, recvcnt, rdispls, recvindbuf, recvnumbuf, rowneighs);
}

template <typename SR, typename IVT, typename OVT, typename IU, typename NUM, typename UDER>
void SpMV (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,IVT> & x, FullyDistSpVec<IU,OVT> & y, bool indexisvalue)
{
    OptBuf<int32_t, OVT > optbuf;
    SpMV<SR>(A, x, y, indexisvalue, optbuf);
}


template <typename SR, typename IU, typename NUM, typename UDER>
FullyDistSpVec<IU,typename promote_trait<NUM,IU>::T_promote> SpMV
    (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,IU> & x, bool indexisvalue, OptBuf<int32_t, typename promote_trait<NUM,IU>::T_promote > & optbuf)
{
    typedef typename promote_trait<NUM,IU>::T_promote T_promote;
    FullyDistSpVec<IU, T_promote> y ( x.getcommgrid(), A.getnrow());    // identity doesn't matter for sparse vectors
    SpMV<SR>(A, x, y, indexisvalue, optbuf);
    return y;
}
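// Usage sketch (illustrative): the SpMV overloads above take the semiring as their first
// template argument; a typical frontier expansion in a BFS-style traversal, with
// hypothetical vector names, looks like:
//
//     typedef SelectMaxSRing<bool, int64_t> BFSsring;  // (max, select2nd)-style semiring
//     FullyDistSpVec<int64_t, int64_t> frontier(A.getcommgrid(), A.getncol());
//     // ... seed the frontier ...
//     // indexisvalue == true lets the implementation regenerate the numerical values
//     // from the indices after communication, roughly halving the transferred volume
//     // (the graph500 optimization path through OptBuf).
//     FullyDistSpVec<int64_t, int64_t> next = SpMV<BFSsring>(A, frontier, true);
//
// SelectMaxSRing is a standard CombBLAS semiring; the matrix A and vector names here
// are placeholders for whatever the caller has constructed.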

template <typename SR, typename IU, typename NUM, typename NUV, typename UDER>
FullyDistVec<IU,typename promote_trait<NUM,NUV>::T_promote> SpMV
    (const SpParMat<IU,NUM,UDER> & A, const FullyDistVec<IU,NUV> & x )
{
    typedef typename promote_trait<NUM,NUV>::T_promote T_promote;
    CheckSpMVCompliance(A, x);

    MPI_Comm World = x.commGrid->GetWorld();
    MPI_Comm ColWorld = x.commGrid->GetColWorld();
    MPI_Comm RowWorld = x.commGrid->GetRowWorld();

    int xsize = (int) x.LocArrSize();
    int trxsize = 0;

    int diagneigh = x.commGrid->GetComplementRank();
    MPI_Status status;
    MPI_Sendrecv(&xsize, 1, MPI_INT, diagneigh, TRX, &trxsize, 1, MPI_INT, diagneigh, TRX, World, &status);

    NUV * trxnums = new NUV[trxsize];
    MPI_Sendrecv(const_cast<NUV*>(SpHelper::p2a(x.arr)), xsize, MPIType<NUV>(), diagneigh, TRX, trxnums, trxsize, MPIType<NUV>(), diagneigh, TRX, World, &status);

    int colneighs, colrank;
    MPI_Comm_size(ColWorld, &colneighs);
    MPI_Comm_rank(ColWorld, &colrank);
    int * colsize = new int[colneighs];
    colsize[colrank] = trxsize;
    MPI_Allgather(MPI_IN_PLACE, 1, MPI_INT, colsize, 1, MPI_INT, ColWorld);
    int * dpls = new int[colneighs]();  // displacements (zero initialized pid)
    std::partial_sum(colsize, colsize+colneighs-1, dpls+1);
    int accsize = std::accumulate(colsize, colsize+colneighs, 0);
    NUV * numacc = new NUV[accsize];

    MPI_Allgatherv(trxnums, trxsize, MPIType<NUV>(), numacc, colsize, dpls, MPIType<NUV>(), ColWorld);
    delete [] trxnums;

    // serial SpMV with dense vector
    T_promote id = SR::id();
    IU ysize = A.getlocalrows();
    T_promote * localy = new T_promote[ysize];
    fill_n(localy, ysize, id);
    dcsc_gespmv<SR>(*(A.spSeq), numacc, localy);

    DeleteAll(numacc, colsize, dpls);

    // FullyDistVec<IT,NT>(shared_ptr<CommGrid> grid, IT globallen, NT initval, NT id)
    FullyDistVec<IU, T_promote> y ( x.commGrid, A.getnrow(), id);

    int rowneighs;
    MPI_Comm_size(RowWorld, &rowneighs);

    IU begptr, endptr;
    for(int i=0; i< rowneighs; ++i)
    {
        begptr = y.RowLenUntil(i);
        if(i == rowneighs-1)
        {
            endptr = ysize;
        }
        else
        {
            endptr = y.RowLenUntil(i+1);
        }
        MPI_Reduce(localy+begptr, SpHelper::p2a(y.arr), endptr-begptr, MPIType<T_promote>(), SR::mpi_op(), i, RowWorld);
    }
    delete [] localy;
    return y;
}


template <typename SR, typename IU, typename NUM, typename NUV, typename UDER>
FullyDistSpVec<IU,typename promote_trait<NUM,NUV>::T_promote> SpMV
    (const SpParMat<IU,NUM,UDER> & A, const FullyDistSpVec<IU,NUV> & x )
{
    typedef typename promote_trait<NUM,NUV>::T_promote T_promote;
    CheckSpMVCompliance(A, x);

    MPI_Comm World = x.commGrid->GetWorld();
    MPI_Comm ColWorld = x.commGrid->GetColWorld();
    MPI_Comm RowWorld = x.commGrid->GetRowWorld();

    int xlocnz = (int) x.getlocnnz();
    int trxlocnz = 0;
    int roffst = x.RowLenUntil();
    int offset;

    int diagneigh = x.commGrid->GetComplementRank();
    MPI_Status status;
    MPI_Sendrecv(&xlocnz, 1, MPI_INT, diagneigh, TRX, &trxlocnz, 1, MPI_INT, diagneigh, TRX, World, &status);
    MPI_Sendrecv(&roffst, 1, MPI_INT, diagneigh, TROST, &offset, 1, MPI_INT, diagneigh, TROST, World, &status);

    IU * trxinds = new IU[trxlocnz];
    NUV * trxnums = new NUV[trxlocnz];
    MPI_Sendrecv(const_cast<IU*>(SpHelper::p2a(x.ind)), xlocnz, MPIType<IU>(), diagneigh, TRX, trxinds, trxlocnz, MPIType<IU>(), diagneigh, TRX, World, &status);
    MPI_Sendrecv(const_cast<NUV*>(SpHelper::p2a(x.num)), xlocnz, MPIType<NUV>(), diagneigh, TRX, trxnums, trxlocnz, MPIType<NUV>(), diagneigh, TRX, World, &status);
    transform(trxinds, trxinds+trxlocnz, trxinds, bind2nd(plus<IU>(), offset)); // fullydist indexing (n pieces) -> matrix indexing (sqrt(p) pieces)

    int colneighs, colrank;
    MPI_Comm_size(ColWorld, &colneighs);
    MPI_Comm_rank(ColWorld, &colrank);
    int * colnz = new int[colneighs];
    colnz[colrank] = trxlocnz;
    MPI_Allgather(MPI_IN_PLACE, 1, MPI_INT, colnz, 1, MPI_INT, ColWorld);
    int * dpls = new int[colneighs]();  // displacements (zero initialized pid)
    std::partial_sum(colnz, colnz+colneighs-1, dpls+1);
    int accnz = std::accumulate(colnz, colnz+colneighs, 0);
    IU * indacc = new IU[accnz];
    NUV * numacc = new NUV[accnz];

    // ABAB: Future issues here, colnz is of type int (MPI limitation)
    // What if the aggregate vector size along the processor row/column is not 32-bit addressable?
    MPI_Allgatherv(trxinds, trxlocnz, MPIType<IU>(), indacc, colnz, dpls, MPIType<IU>(), ColWorld);
    MPI_Allgatherv(trxnums, trxlocnz, MPIType<NUV>(), numacc, colnz, dpls, MPIType<NUV>(), ColWorld);
    DeleteAll(trxinds, trxnums);

    // serial SpMV with sparse vector
    vector< int32_t > indy;
    vector< T_promote > numy;

    int32_t * tmpindacc = new int32_t[accnz];
    for(int i=0; i< accnz; ++i)    tmpindacc[i] = indacc[i];
    delete [] indacc;

    dcsc_gespmv<SR>(*(A.spSeq), tmpindacc, numacc, accnz, indy, numy);  // actual multiplication

    DeleteAll(tmpindacc, numacc);
    DeleteAll(colnz, dpls);

    FullyDistSpVec<IU, T_promote> y ( x.commGrid, A.getnrow()); // identity doesn't matter for sparse vectors
    IU yintlen = y.MyRowLength();

    int rowneighs;
    MPI_Comm_size(RowWorld, &rowneighs);
    vector< vector<IU> > sendind(rowneighs);
    vector< vector<T_promote> > sendnum(rowneighs);
    typename vector<int32_t>::size_type outnz = indy.size();
    for(typename vector<IU>::size_type i=0; i< outnz; ++i)
    {
        IU locind;
        int rown = y.OwnerWithinRow(yintlen, static_cast<IU>(indy[i]), locind);
        sendind[rown].push_back(locind);
        sendnum[rown].push_back(numy[i]);
    }

    IU * sendindbuf = new IU[outnz];
    T_promote * sendnumbuf = new T_promote[outnz];
    int * sendcnt = new int[rowneighs];
    int * sdispls = new int[rowneighs];
    for(int i=0; i<rowneighs; ++i)
        sendcnt[i] = sendind[i].size();

    int * rdispls = new int[rowneighs];
    int * recvcnt = new int[rowneighs];
    MPI_Alltoall(sendcnt, 1, MPI_INT, recvcnt, 1, MPI_INT, RowWorld);   // share the request counts

    sdispls[0] = 0;
    rdispls[0] = 0;
    for(int i=0; i<rowneighs-1; ++i)
    {
        sdispls[i+1] = sdispls[i] + sendcnt[i];
        rdispls[i+1] = rdispls[i] + recvcnt[i];
    }
    int totrecv = accumulate(recvcnt, recvcnt+rowneighs, 0);
    IU * recvindbuf = new IU[totrecv];
    T_promote * recvnumbuf = new T_promote[totrecv];

    for(int i=0; i<rowneighs; ++i)
    {
        copy(sendind[i].begin(), sendind[i].end(), sendindbuf+sdispls[i]);
        vector<IU>().swap(sendind[i]);
    }
    for(int i=0; i<rowneighs; ++i)
    {
        copy(sendnum[i].begin(), sendnum[i].end(), sendnumbuf+sdispls[i]);
        vector<T_promote>().swap(sendnum[i]);
    }
    MPI_Alltoallv(sendindbuf, sendcnt, sdispls, MPIType<IU>(), recvindbuf, recvcnt, rdispls, MPIType<IU>(), RowWorld);
    MPI_Alltoallv(sendnumbuf, sendcnt, sdispls, MPIType<T_promote>(), recvnumbuf, recvcnt, rdispls, MPIType<T_promote>(), RowWorld);

    DeleteAll(sendindbuf, sendnumbuf);
    DeleteAll(sendcnt, recvcnt, sdispls, rdispls);

    // define a SPA-like data structure
    IU ysize = y.MyLocLength();
    T_promote * localy = new T_promote[ysize];
    bool * isthere = new bool[ysize];
    vector<IU> nzinds;  // nonzero indices
    fill_n(isthere, ysize, false);

    for(int i=0; i< totrecv; ++i)
    {
        if(!isthere[recvindbuf[i]])
        {
            localy[recvindbuf[i]] = recvnumbuf[i];  // initial assignment
            nzinds.push_back(recvindbuf[i]);
            isthere[recvindbuf[i]] = true;
        }
        else
        {
            localy[recvindbuf[i]] = SR::add(localy[recvindbuf[i]], recvnumbuf[i]);
        }
    }
    DeleteAll(isthere, recvindbuf, recvnumbuf);
    sort(nzinds.begin(), nzinds.end());
    int nnzy = nzinds.size();
    y.ind.resize(nnzy);
    y.num.resize(nnzy);
    for(int i=0; i< nnzy; ++i)
    {
        y.ind[i] = nzinds[i];
        y.num[i] = localy[nzinds[i]];
    }
    delete [] localy;
    return y;
}


template <typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB>
SpParMat<IU,typename promote_trait<NU1,NU2>::T_promote,typename promote_trait<UDERA,UDERB>::T_promote> EWiseMult
    (const SpParMat<IU,NU1,UDERA> & A, const SpParMat<IU,NU2,UDERB> & B , bool exclude)
{
    typedef typename promote_trait<NU1,NU2>::T_promote N_promote;
    typedef typename promote_trait<UDERA,UDERB>::T_promote DER_promote;

    if(*(A.commGrid) == *(B.commGrid))
    {
        DER_promote * result = new DER_promote( EWiseMult(*(A.spSeq),*(B.spSeq), exclude) );
        return SpParMat<IU, N_promote, DER_promote> (result, A.commGrid);
    }
    else
    {
        cout << "Grids are not comparable for elementwise multiplication" << endl;
        MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
        return SpParMat< IU,N_promote,DER_promote >();
    }
}
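// Usage sketch (illustrative): with exclude == false the result keeps the nonzeros of A
// that are also nonzero in B (values multiplied elementwise); with exclude == true it
// keeps A's nonzeros where B is zero, which is how graph traversals prune
// already-visited entries. With hypothetical matrix names:
//
//     SpParMat< int64_t, double, SpDCCols<int64_t,double> > AandB = EWiseMult(A, B, false);
//     SpParMat< int64_t, double, SpDCCols<int64_t,double> > AnotB = EWiseMult(A, B, true);
//
// A and B are placeholders; both must live on the same communication grid, otherwise
// the call aborts with GRIDMISMATCH as shown above.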

template <typename RETT, typename RETDER, typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB, typename _BinaryOperation>
SpParMat<IU,RETT,RETDER> EWiseApply
    (const SpParMat<IU,NU1,UDERA> & A, const SpParMat<IU,NU2,UDERB> & B, _BinaryOperation __binary_op, bool notB, const NU2& defaultBVal)
{
    if(*(A.commGrid) == *(B.commGrid))
    {
        RETDER * result = new RETDER( EWiseApply<RETT>(*(A.spSeq),*(B.spSeq), __binary_op, notB, defaultBVal) );
        return SpParMat<IU, RETT, RETDER> (result, A.commGrid);
    }
    else
    {
        cout << "Grids are not comparable for elementwise apply" << endl;
        MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
        return SpParMat< IU,RETT,RETDER >();
    }
}

template <typename RETT, typename RETDER, typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB, typename _BinaryOperation, typename _BinaryPredicate>
SpParMat<IU,RETT,RETDER> EWiseApply
    (const SpParMat<IU,NU1,UDERA> & A, const SpParMat<IU,NU2,UDERB> & B, _BinaryOperation __binary_op, _BinaryPredicate do_op, bool allowANulls, bool allowBNulls, const NU1& ANullVal, const NU2& BNullVal, const bool allowIntersect, const bool useExtendedBinOp)
{
    if(*(A.commGrid) == *(B.commGrid))
    {
        RETDER * result = new RETDER( EWiseApply<RETT>(*(A.spSeq),*(B.spSeq), __binary_op, do_op, allowANulls, allowBNulls, ANullVal, BNullVal, allowIntersect) );
        return SpParMat<IU, RETT, RETDER> (result, A.commGrid);
    }
    else
    {
        cout << "Grids are not comparable for elementwise apply" << endl;
        MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
        return SpParMat< IU,RETT,RETDER >();
    }
}

// plain adapter
template <typename RETT, typename RETDER, typename IU, typename NU1, typename NU2, typename UDERA, typename UDERB, typename _BinaryOperation, typename _BinaryPredicate>
SpParMat<IU,RETT,RETDER>
EWiseApply (const SpParMat<IU,NU1,UDERA> & A, const SpParMat<IU,NU2,UDERB> & B, _BinaryOperation __binary_op, _BinaryPredicate do_op, bool allowANulls, bool allowBNulls, const NU1& ANullVal, const NU2& BNullVal, const bool allowIntersect = true)
{
    return EWiseApply<RETT, RETDER>(A, B,
                    EWiseExtToPlainAdapter<RETT, NU1, NU2, _BinaryOperation>(__binary_op),
                    EWiseExtToPlainAdapter<bool, NU1, NU2, _BinaryPredicate>(do_op),
                    allowANulls, allowBNulls, ANullVal, BNullVal, allowIntersect, true);
}
// end adapter


template <typename IU, typename NU1, typename NU2>
SpParVec<IU,typename promote_trait<NU1,NU2>::T_promote> EWiseMult
    (const SpParVec<IU,NU1> & V, const DenseParVec<IU,NU2> & W , bool exclude, NU2 zero)
{
    typedef typename promote_trait<NU1,NU2>::T_promote T_promote;

    if(*(V.commGrid) == *(W.commGrid))
    {
        SpParVec< IU, T_promote> Product(V.commGrid);
        Product.length = V.length;
        if(Product.diagonal)
        {
            if(exclude)
            {
                IU size = V.ind.size();
                for(IU i=0; i<size; ++i)
                {
                    if(W.arr.size() <= V.ind[i] || W.arr[V.ind[i]] == zero) // keep only those
                    {
                        Product.ind.push_back(V.ind[i]);
                        Product.num.push_back(V.num[i]);
                    }
                }
            }
            else
            {
                IU size = V.ind.size();
                for(IU i=0; i<size; ++i)
                {
                    if(W.arr.size() > V.ind[i] && W.arr[V.ind[i]] != zero)  // keep only those
                    {
                        Product.ind.push_back(V.ind[i]);
                        Product.num.push_back(V.num[i] * W.arr[V.ind[i]]);
                    }
                }
            }
        }
        return Product;
    }
    else
    {
        cout << "Grids are not comparable for elementwise multiplication" << endl;
        MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
        return SpParVec< IU,T_promote>();
    }
}

template <typename IU, typename NU1, typename NU2>
FullyDistSpVec<IU, typename promote_trait<NU1,NU2>::T_promote> EWiseMult
	(const FullyDistSpVec<IU,NU1> & V, const FullyDistVec<IU,NU2> & W , bool exclude, NU2 zero)
{
	typedef typename promote_trait<NU1,NU2>::T_promote T_promote;

	if(*(V.commGrid) == *(W.commGrid))
	{
		FullyDistSpVec< IU, T_promote> Product(V.commGrid);
		if(V.glen != W.glen)
		{
			cerr << "Vector dimensions don't match for EWiseMult\n";
			MPI_Abort(MPI_COMM_WORLD, DIMMISMATCH);
		}
		else
		{
			Product.glen = V.glen;
			IU size= V.getlocnnz();
			if(exclude)
			{
				#if defined(_OPENMP) && defined(CBLAS_EXPERIMENTAL)	// not faster than serial
				int actual_splits = cblas_splits * 1;	// 1 is the parallel slackness
				vector <IU> tlosizes (actual_splits, 0);
				vector < vector<IU> > tlinds(actual_splits);
				vector < vector<T_promote> > tlnums(actual_splits);
				IU tlsize = size / actual_splits;
				#pragma omp parallel for //schedule(dynamic, 1)
				for(IU t = 0; t < actual_splits; ++t)
				{
					IU tlbegin = t*tlsize;
					IU tlend = (t==actual_splits-1)? size : (t+1)*tlsize;
					for(IU i=tlbegin; i<tlend; ++i)
					{
						if(W.arr[V.ind[i]] == zero)	// keep only those
						{
							tlinds[t].push_back(V.ind[i]);
							tlnums[t].push_back(V.num[i]);
							tlosizes[t]++;
						}
					}
				}
				vector<IU> prefix_sum(actual_splits+1,0);
				partial_sum(tlosizes.begin(), tlosizes.end(), prefix_sum.begin()+1);
				Product.ind.resize(prefix_sum[actual_splits]);
				Product.num.resize(prefix_sum[actual_splits]);

				#pragma omp parallel for //schedule(dynamic, 1)
				for(IU t=0; t< actual_splits; ++t)
				{
					copy(tlinds[t].begin(), tlinds[t].end(), Product.ind.begin()+prefix_sum[t]);
					copy(tlnums[t].begin(), tlnums[t].end(), Product.num.begin()+prefix_sum[t]);
				}
				#else
				for(IU i=0; i<size; ++i)
				{
					if(W.arr[V.ind[i]] == zero)	// keep only those
					{
						Product.ind.push_back(V.ind[i]);
						Product.num.push_back(V.num[i]);
					}
				}
				#endif
			}
			else
			{
				for(IU i=0; i<size; ++i)
				{
					if(W.arr[V.ind[i]] != zero)	// keep only those
					{
						Product.ind.push_back(V.ind[i]);
						Product.num.push_back(V.num[i] * W.arr[V.ind[i]]);
					}
				}
			}
		}
		return Product;
	}
	else
	{
		cout << "Grids are not comparable elementwise multiplication" << endl;
		MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
		return FullyDistSpVec< IU,T_promote>();
	}
}

template <typename RET, typename IU, typename NU1, typename NU2, typename _BinaryOperation, typename _BinaryPredicate>
FullyDistSpVec<IU,RET> EWiseApply
	(const FullyDistSpVec<IU,NU1> & V, const FullyDistVec<IU,NU2> & W , _BinaryOperation _binary_op, _BinaryPredicate _doOp, bool allowVNulls, NU1 Vzero, const bool useExtendedBinOp)
{
	typedef RET T_promote; //typedef typename promote_trait<NU1,NU2>::T_promote T_promote;
	if(*(V.commGrid) == *(W.commGrid))
	{
		FullyDistSpVec< IU, T_promote> Product(V.commGrid);
		FullyDistVec< IU, NU1> DV (V);
		if(V.TotalLength() != W.TotalLength())
		{
			ostringstream outs;
			outs << "Vector dimensions don't match (" << V.TotalLength() << " vs " << W.TotalLength() << ") for EWiseApply (short version)\n";
			SpParHelper::Print(outs.str());
			MPI_Abort(MPI_COMM_WORLD, DIMMISMATCH);
		}
		else
		{
			Product.glen = V.glen;
			IU size= W.LocArrSize();
			IU spsize = V.getlocnnz();
			IU sp_iter = 0;
			if (allowVNulls)
			{
				// iterate over the dense vector
				for(IU i=0; i<size; ++i)
				{
					if(sp_iter < spsize && V.ind[sp_iter] == i)
					{
						if (_doOp(V.num[sp_iter], W.arr[i], false, false))
						{
							Product.ind.push_back(i);
							Product.num.push_back(_binary_op(V.num[sp_iter], W.arr[i], false, false));
						}
						sp_iter++;
					}
					else
					{
						if (_doOp(Vzero, W.arr[i], true, false))
						{
							Product.ind.push_back(i);
							Product.num.push_back(_binary_op(Vzero, W.arr[i], true, false));
						}
					}
				}
			}
			else
			{
				// iterate over the sparse vector
				for(sp_iter = 0; sp_iter < spsize; ++sp_iter)
				{
					if (_doOp(V.num[sp_iter], W.arr[V.ind[sp_iter]], false, false))
					{
						Product.ind.push_back(V.ind[sp_iter]);
						Product.num.push_back(_binary_op(V.num[sp_iter], W.arr[V.ind[sp_iter]], false, false));
					}
				}
			}
		}
		return Product;
	}
	else
	{
		cout << "Grids are not comparable for EWiseApply" << endl;
		MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
		return FullyDistSpVec< IU,T_promote>();
	}
}
1351
template <typename RET, typename IU, typename NU1, typename NU2, typename _BinaryOperation, typename _BinaryPredicate>
FullyDistSpVec<IU,RET> EWiseApply
	(const FullyDistSpVec<IU,NU1> & V, const FullyDistSpVec<IU,NU2> & W , _BinaryOperation _binary_op, _BinaryPredicate _doOp, bool allowVNulls, bool allowWNulls, NU1 Vzero, NU2 Wzero, const bool allowIntersect, const bool useExtendedBinOp)
{
	typedef RET T_promote; // typename promote_trait<NU1,NU2>::T_promote T_promote;
	if(*(V.commGrid) == *(W.commGrid))
	{
		FullyDistSpVec< IU, T_promote> Product(V.commGrid);
		if(V.glen != W.glen)
		{
			ostringstream outs;
			outs << "Vector dimensions don't match (" << V.glen << " vs " << W.glen << ") for EWiseApply (full version)\n";
			SpParHelper::Print(outs.str());
			MPI_Abort(MPI_COMM_WORLD, DIMMISMATCH);
		}
		else
		{
			Product.glen = V.glen;
			typename vector< IU  >::const_iterator indV = V.ind.begin();
			typename vector< NU1 >::const_iterator numV = V.num.begin();
			typename vector< IU  >::const_iterator indW = W.ind.begin();
			typename vector< NU2 >::const_iterator numW = W.num.begin();

			while (indV < V.ind.end() && indW < W.ind.end())
			{
				if (*indV == *indW)
				{
					// overlap
					if (allowIntersect)
					{
						if (_doOp(*numV, *numW, false, false))
						{
							Product.ind.push_back(*indV);
							Product.num.push_back(_binary_op(*numV, *numW, false, false));
						}
					}
					indV++; numV++;
					indW++; numW++;
				}
				else if (*indV < *indW)
				{
					// V has value but W does not
					if (allowWNulls)
					{
						if (_doOp(*numV, Wzero, false, true))
						{
							Product.ind.push_back(*indV);
							Product.num.push_back(_binary_op(*numV, Wzero, false, true));
						}
					}
					indV++; numV++;
				}
				else //(*indV > *indW)
				{
					// W has value but V does not
					if (allowVNulls)
					{
						if (_doOp(Vzero, *numW, true, false))
						{
							Product.ind.push_back(*indW);
							Product.num.push_back(_binary_op(Vzero, *numW, true, false));
						}
					}
					indW++; numW++;
				}
			}
			// clean up
			while (allowWNulls && indV < V.ind.end())
			{
				if (_doOp(*numV, Wzero, false, true))
				{
					Product.ind.push_back(*indV);
					Product.num.push_back(_binary_op(*numV, Wzero, false, true));
				}
				indV++; numV++;
			}
			while (allowVNulls && indW < W.ind.end())
			{
				if (_doOp(Vzero, *numW, true, false))
				{
					Product.ind.push_back(*indW);
					Product.num.push_back(_binary_op(Vzero, *numW, true, false));
				}
				indW++; numW++;
			}
		}
		return Product;
	}
	else
	{
		cout << "Grids are not comparable for EWiseApply" << endl;
		MPI_Abort(MPI_COMM_WORLD, GRIDMISMATCH);
		return FullyDistSpVec< IU,T_promote>();
	}
}
1468
// plain callback versions
template <typename RET, typename IU, typename NU1, typename NU2, typename _BinaryOperation, typename _BinaryPredicate>
FullyDistSpVec<IU,RET> EWiseApply
	(const FullyDistSpVec<IU,NU1> & V, const FullyDistVec<IU,NU2> & W , _BinaryOperation _binary_op, _BinaryPredicate _doOp, bool allowVNulls, NU1 Vzero)
{
	return EWiseApply<RET>(V, W,
			EWiseExtToPlainAdapter<RET, NU1, NU2, _BinaryOperation>(_binary_op),
			EWiseExtToPlainAdapter<bool, NU1, NU2, _BinaryPredicate>(_doOp),
			allowVNulls, Vzero, true);
}

template <typename RET, typename IU, typename NU1, typename NU2, typename _BinaryOperation, typename _BinaryPredicate>
FullyDistSpVec<IU,RET> EWiseApply
	(const FullyDistSpVec<IU,NU1> & V, const FullyDistSpVec<IU,NU2> & W , _BinaryOperation _binary_op, _BinaryPredicate _doOp, bool allowVNulls, bool allowWNulls, NU1 Vzero, NU2 Wzero, const bool allowIntersect = true)
{
	return EWiseApply<RET>(V, W,
			EWiseExtToPlainAdapter<RET, NU1, NU2, _BinaryOperation>(_binary_op),
			EWiseExtToPlainAdapter<bool, NU1, NU2, _BinaryPredicate>(_doOp),
			allowVNulls, allowWNulls, Vzero, Wzero, allowIntersect, true);
}

#endif
Add subscript Text to header of Table in pdf by MATLAB Report Generator
How can I add subscript to the Header of the Table in Matlab Report Generator?
I tried this code. It makes the subscript appear in the UITable, but the text does not get written to the PDF.
import mlreportgen.report.*
import mlreportgen.dom.*
rpt = Report('subscriptTable','pdf');
h = uitable('Data', [1 2; 3 4], 'ColumnName', {'<HTML>&Omega', '<HTML>P<SUB>i'})
headerLabels=h.ColumnName
bodyContent=h.Data
tbl = FormalTable(headerLabels,bodyContent);
tbl.Border = 'solid';
tbl.ColSep = 'solid';
tbl.RowSep = 'solid';
add(rpt,tbl);
close(rpt)
rptview(rpt)
Accepted Answer
Rahul Singhal
Rahul Singhal on 1 Mar 2021
Edited: Rahul Singhal on 1 Mar 2021
Hi Masood,
You can use the DOM mlreportgen.dom.HTML object to include the HTML content, used in the uitable column names, in the DOM FormalTable. Make sure that the input HTML follows the requirements as specified here: https://www.mathworks.com/help/rptgen/ug/prepare-html-code-for-dom-reports.html
Below is an example code:
import mlreportgen.report.*
import mlreportgen.dom.*
rpt = Report('subscriptTable','pdf');
h = uitable('Data', [1 2; 3 4], 'ColumnName', {'<html>Ω</html>', '<html>P<sub>i</sub></html>'})
% Create table header row containing HTML content
tblHeaderRow = TableRow();
for i = 1:length(h.ColumnName)
te = TableEntry();
append(te,HTML(h.ColumnName{i})); % append content using DOM HTML object
append(tblHeaderRow,te);
end
bodyContent=h.Data
tbl = FormalTable(bodyContent); % create table with just body content
append(tbl.Header,tblHeaderRow); % append the header row
tbl.Border = 'solid';
tbl.ColSep = 'solid';
tbl.RowSep = 'solid';
add(rpt,tbl);
close(rpt)
rptview(rpt)
Thanks,
Rahul
More Answers (0)
Fine Tuning Drupal Themes with Patterns, Arg and Types
In this article, we'll discuss how you can leverage various Drupal API functions to achieve more fine-grained theming. We'll cover template preprocessing and alter hooks using path patterns, content types and arg(). The arg() function returns parts of the current Drupal URL path, and we'll combine it with pattern matching for cases where you want to match a pattern in a URL. We'll also take a look at creating a variable for an array of content types.
Template preprocessing is a means to define variables for use within your page. For example, you can define a body class variable. In turn you can use the resulting class in your Sass or CSS. An alter or build hook is something that’s run before the page renders so you can do things such as adding JS or CSS to specific pages or content types.
I’ll explain and demonstrate these hooks and how you can use them to:
• Add a <body> class to a specific page for better theming
• Add Javascript or CSS to specific pages and paths
• Use wildcard path arguments
• URL pattern matching using preg_match
• Create an array of content types to use as a variable in your argument
• Using path arguments as parameters
The functions we discuss here will be added to your theme’s template.php file. Although you can also use these functions in a custom module, you’ll need to specify that the functions are not for admin pages unless that’s your intent, and I’ll cover how to do that.
Getting Started
When using preprocess or alter functions within your theme, you’ll want to be sure template.php exists but if not, you can go ahead and create this file in the root of your theme. So if my theme name is foobar, the path to template.php will be:
/sites/all/themes/foobar/template.php
API functions are prefaced by the machine name of your theme. For example, if you are using hook_page_alter and your theme name is foobar, we’d write it as function foobar_page_alter(). (The machine name is simply the theme’s folder name.)
Custom Body Classes Using a Content Types Array
A body class is a class that’s added to your HTML <body> tag. For example, on a blog page, you might see something like this:
<body class="page-node page-node-blog">
You can leverage that class in your Sass or CSS to fine tune your theming by doing something like this just for blog pages on your site:
.page-node-blog h1 {
// custom Sass here
}
Out of the box, Drupal 7 comes with some pretty good body classes and usually these are fine for most use cases. In addition, Drupal contributed (contrib) themes such as Zen add enhanced and expanded classes.
In our case, we want to add a class to some pages which share some common attributes but may not necessarily derive from the same content type. Let’s say we have two content types that we want to add a specific class to in order to theme those alike but perhaps different from other pages on our website. We can build an array of Drupal content types we want to target and then use that array to add the class. Once we’ve defined the array, we just check to ensure that a given node exists and then pass the array in.
<?php
/**
* Implements template_preprocess_html().
*
* Define custom classes for theming.
*/
function foobar_preprocess_html(&$vars) {
// Build a node types array from our targeted content types.
$foo_types = array(
'landing_page',
'our_services',
);
// Define the node.
$node = menu_get_object();
// Use the array to add a class to those content types.
if (!empty($node) && in_array($node->type, $foo_types)) {
$vars['classes_array'][] = 'page-style-foobar';
}
}
This function preprocesses variables for anything that would typically be before the ending HTML </head> tag which is in Drupal’s core template, html.tpl.php. We add the body class using $vars['classes_array']. This variable gets rendered with <?php print $classes; ?> in the <body> tag. In our case, this class will only render in landing_page and our_services content types. Now we can use .page-style-foobar in our Sass or CSS to style these pages.
URL Pattern Matching
You can also use URL pattern matching for adding useful custom classes. Let’s say we have an “Our Services” landing page and then some sub-pages under that path. The URL architecture might look like this:
example.com/our-services
- example.com/our-services/subpage-1
- example.com/our-services/subpage-2
We’ll add a custom body class to those pages using preg_match, regex, PHP matches and Drupal’s request_uri function. Once again, we’d put this in a foobar_preprocess_html as above.
<?php
function foobar_preprocess_html(&$vars) {
// Define the URL path.
$path = request_uri();
// Add body classes to various pages for better theming.
if (preg_match('|^/our-services((?:/[a-zA-Z0-9_\-]*)*)?|', $path, $matches)) {
$vars['classes_array'][] = 'page-services';
}
}
Now you can use .page-services .some-common-element for theming these “our-services” pages. Obviously this method has pitfalls if your URL structure changes so it may only fit some use cases.
Path Arguments
Another clever way of custom theming specific parts of your site is to partition those off using arg(). Drupal tends to get script and CSS heavy so ideally if you’re adding extra CSS or JS and you don’t need it for every page (which is normally done using your theme’s .info file), you can use path arg() to add these only to pages where they’re needed. For example, if I want to add a custom script to just the Drupal registration page, I can create a path arg() if statement and then add the script within that. The URL path we’ll focus on here is /user/register and you’ll see how the arg() breaks down the URL structure.
We’ll be using hook_page_alter here which can apply alterations to a page before it’s rendered. First we define the theme path and use Drupal’s attached function.
<?php
/**
* Implements hook_page_alter().
*
* Add custom functions such as adding js or css.
*/
function foobar_page_alter(&$page, $form) {
// Define the module path for use below.
$theme_path = drupal_get_path('theme', 'foobar');
if (arg(0) == "user" && arg(1) == "register") {
$foobar_js = array(
'#attached' => array(
'js' => array(
$theme_path . '/js/custom.js' => array(
'group' => JS_THEME,
),
),
),
);
drupal_render($foobar_js);
}
}
In this function, note that we've substituted our theme name for hook so hook_page_alter becomes foobar_page_alter. The arg() number signifies the position in the URL path. Zero is first, one is second and so on. You can get pretty creative with these by adding more parameters. Let's say you wanted to add JS to just the user page but no paths underneath it. You could add a NULL arg() after the initial arg().
if (arg(0) == "user" && arg(1) == NULL) {
// code here
}
In the examples above, we've implemented various functions in our theme's template.php file. You can also use these in a custom module as well and in that case you'd just preface your module name in the function rather than the theme name. When theming from a module, you'll probably want to exclude admin paths since a module can target anywhere in your site. You can do that by checking the current path:
if (!path_is_admin(current_path())) {
// code here
}
Conclusion
As you can see, leveraging the Drupal API Toolbox for theming extends your reach as a developer. The examples above are not definitive but they do give you a feel for what’s possible. As a Drupal Themer, using these has helped me expand my bag of tricks when theming and building a site. Comments? Feedback? Leave them below!
Resources
Kaptain
Additional
Language
Kotlin
Version
1.0.3 (Aug 9, 2021)
Created
Mar 26, 2020
Updated
Aug 9, 2021
Owner
Adriel Café (adrielcafe)
Contributor
Adriel Café (adrielcafe)
Kaptain
Kaptain is a small, dependencyless and easy to use Android library that helps you to navigate between activities spread over several modules.
Usage
Given the following project structure:
/app
MyApplication.kt
/feature-a
FeatureAActivity.kt
/feature-b
FeatureBActivity.kt
/feature-shared
Destination.kt
• app module imports all modules below
• feature-a and feature-b imports only feature-shared
• feature-shared imports nothing
1. Define destinations
First, you must list all possible destinations (Activities) of your app. Create a sealed class that implements the KaptainDestination interface.
Optionally, you can add arguments to your destination using a data class.
sealed class Destination : KaptainDestination {
object FeatureA : Destination()
data class FeatureB(val message: String) : Destination()
}
2. Create a Kaptain instance
Next, create a new Kaptain instance and associate your destinations with the corresponding Activities.
class MyApplication : Application() {
val kaptain = Kaptain {
add<Destination.FeatureA, FeatureAActivity>()
add<Destination.FeatureB, FeatureBActivity>()
}
}
Ideally, you should inject this instance as a singleton using a DI library. Check the sample app for an example using Koin.
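For reference, a Koin setup could look roughly like the sketch below. This is my own hypothetical wiring, not code from the sample app: the module name `appModule` and the exact structure are assumptions, and the only Kaptain-specific call is the builder already shown above.

```kotlin
// Hypothetical sketch -- see the sample app for the real wiring.
val appModule = module {
    // Provide one shared Kaptain instance for the whole app.
    single {
        Kaptain {
            add<Destination.FeatureA, FeatureAActivity>()
            add<Destination.FeatureB, FeatureBActivity>()
        }
    }
}

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        startKoin {
            androidContext(this@MyApplication)
            modules(appModule)
        }
    }
}

// In an Activity you could then retrieve it lazily:
// private val kaptain: Kaptain by inject()
```

Injecting it this way keeps a single routing table alive for the process, so dynamic-feature modules can register their destinations on the same instance.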
3. Navigate between activities
Now you can navigate to any Activity, from any module:
class FeatureAActivity : AppCompatActivity() {
fun goToFeatureB() {
kaptain.navigate(
activity = this,
destination = Destination.FeatureB(message = "Ahoy!"),
requestCode = 0x123 // Optional
)
}
}
4. Retrieve a destination content
After arriving at your destination, you can retrieve its content:
class FeatureBActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_feature_b)
val importantMessage = kaptain.fromIntent<Destination.FeatureB>(this)?.message
}
}
Dynamic feature modules
Kaptain works great with dynamic features!
1. Add destinations on demand
You can add/remove destinations at any time:
kaptain.add<Destination.DynamicFeatureA, DynamicFeatureAActivity>()
kaptain.remove<Destination.DynamicFeatureB>()
2. Make sure the destination exists before navigating
if (kaptain.has<Destination.DynamicFeatureA>()) {
kaptain.navigate(this, Destination.DynamicFeatureA)
}
Import to your project
1. Add the JitPack repository in your root build.gradle at the end of repositories:
allprojects {
repositories {
maven { url 'https://jitpack.io' }
}
}
2. Next, add the library to your module:
dependencies {
implementation "com.github.adrielcafe.kaptain:kaptain:$currentVersion"
}
Current version:
Ask Ubuntu is a question and answer site for Ubuntu users and developers.
How can I read man pages in my mother tongue?
I'd also like to contribute some translations.
• Where should I go?
• Is there a community supported by Canonical?
1. Install the package named language-pack-<two letter language code>
e.g. sudo apt-get install language-pack-es for Spanish
2. Install the package named manpages-<two letter language code>
e.g. apt-get install manpages-es for Spanish man pages.
3. Set your LANG environment variable to <language>_<country>.<encoding>,
e.g. LANG=es_ES.UTF-8
4. Run man
List of language codes.
share|improve this answer
Thanks for the nice answer. I've tried, but there is no ko yet. ko is korean. Who did translate your language? Canonical? – Benjamin Feb 7 '11 at 0:31
You're right, it looks like Korean hasn't been supported since 2005. Try asking here: answers.launchpad.net/ubuntu/+source/manpages-ko/+addquestion or join up to contribute here: translations.launchpad.net/ubuntu/maverick/+translations – Mikel Feb 7 '11 at 6:28
First part of question already answered. This will answer second part of question.
Look at this page and locate you language and open the page for your language team from the link.
https://translations.launchpad.net/+groups/ubuntu-translators
If your language is not in the list refer https://wiki.ubuntu.com/Translations/KnowledgeBase/StartingTeam on how to start one team
These are translated by community of translation volunteers.
Thanks guys. They were so helpful. – Benjamin Feb 9 '11 at 11:19
Data Link Layer.
The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking. It corresponds to, or is part of the link layer of the TCP/IP reference model.
The Data Link Layer includes protocols such as ATM · SDLC · HDLC · ARP · CSLIP · SLIP · PLIP · IEEE 802.3 · Frame Relay · ITU-T G.hn DLL · PPP · X.25.
The main function of this layer is to transfer data between adjacent network nodes in a wide area network or between nodes on the same local area network segment. Within the semantics of the OSI network architecture, the Data Link Layer protocols respond to service requests from the Network Layer and they perform their function by issuing service requests to the Physical Layer.
The Data Link Layer is concerned with local delivery of frames between devices on the same LAN. Data Link frames, as these protocol data units are called, do not cross the boundaries of a local network. Inter-network routing and global addressing are higher layer functions, allowing Data Link protocols to focus on local delivery, addressing, and media arbitration. In this way, the Data Link layer is analogous to a neighborhood traffic cop; it endeavors to arbitrate between parties contending for access to a medium.
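To make "local delivery and addressing" concrete, here is a small Python sketch of my own (not part of the original post) that splits an Ethernet II frame header into its Data Link Layer fields — the destination MAC, source MAC and EtherType that the layer uses for delivery on the local segment:

```python
import struct

def _mac(raw):
    """Format 6 raw bytes as the usual colon-separated MAC string."""
    return ":".join(f"{b:02x}" for b in raw)

def parse_ethernet_header(frame):
    """Split the 14-byte Ethernet II header into Data Link Layer fields."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {"dst": _mac(dst), "src": _mac(src), "ethertype": hex(ethertype)}

# A broadcast frame carrying an IPv4 packet (EtherType 0x0800):
frame = bytes.fromhex("ff ff ff ff ff ff 00 11 22 33 44 55 08 00") + b"payload"
print(parse_ethernet_header(frame))
# {'dst': 'ff:ff:ff:ff:ff:ff', 'src': '00:11:22:33:44:55', 'ethertype': '0x800'}
```

Everything after those 14 bytes is payload handed up to the Network Layer — which is exactly the boundary this layer draws.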
In some networks, such as IEEE 802 local area networks, the Data Link Layer is described in more detail with Media Access Control (MAC) and Logical Link Control (LLC) sublayers; this means that the IEEE 802.2 LLC protocol can be used with all of the IEEE 802 MAC layers, such as Ethernet, token ring, IEEE 802.11, etc., as well as with some non-802 MAC layers such as FDDI. Other Data Link Layer protocols, such as HDLC, are specified to include both sublayers, although some other protocols, such as Cisco HDLC, use HDLC’s low-level framing as a MAC layer in combination with a different LLC layer. In the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) Local area network using existing home wiring (power lines, phone lines and coaxial cables), the Data Link Layer is divided into three sub-layers (Application Protocol Convergence, Logical Link Control and Medium Access Control).
By: Alexander Ólafsson.
This entry was posted in Week 5. Bookmark the permalink.
One Response to Data Link Layer.
1. mbnielsen says:
Did you find any relevant references?
Wednesday, 15 October 2014
How to convert binary to hexadecimal and back again (very simple shortcut)
Hexadecimal is a base 16 numbering system used in computers. If you are using the basic calculator on a Windows computer, you can swap the view to programmer from standard and easily translate between binary, decimal and hex. That being said however, there may be times when you are not allowed to use the calculator (like when taking a certification exam) so you should know this shortcut.
Step 1: Know the hexadecimal numbers through 15. 0 through 9 are all the same as decimal but 10-15 are not, since you cannot have a two digit number in a single hex digit.
10 - A
11 - B
12 - C
13 - D
14 - E
15 - F
So if someone says 10 in decimal, they would mean A in hex. If someone said 10 in binary, they would mean 2, since the 1 sits in the 2^1 place: (1 x 2) + (0 x 1) = 2. See my previous tutorial for decimal/binary help: Decimal to Binary and Back
Now you might be saying to yourself, "I thought we would go to 16 since it's a base 16 system". Remember that we are starting at 0, which means 0 is the 1st number, 1 is the 2nd, 2 is the 3rd and so on, with 15 being the 16th number. So 0-15 is 16 numbers.
Step 2: Acquire your binary or hex number that you would like to convert. Lets pick a random binary number, say 11010001. If you refer to the tutorial above, you would know that the binary number is 209 in decimal but we want to go to hex. Break the binary number into groups of 4 starting from the right side to the left. The groups would be:
1101 0001
Step 3: Match the number of that group to the corresponding hex value. The group on the left hand side, 1101, is equal to 13 in decimal. If you refer to the hex chart above, you will see that 13 in decimal equals "D". So that group of 4 is D. The second group is 0001 or just plain 1. 1 is the same across each three numbering systems so that group would simply be 1.
Combine those two values and you have D1 which is the correct answer.
Now lets try a harder number. 1101011010001
Let's break it into groups of four again, starting from the right hand side and moving left. So we have (1) (1010) (1101) (0001). Notice that the final group only has 1 number in it. That's ok. Often times you will not have a binary number that is perfectly divisible by 4. Not to worry, if you'd like you can simply add leading 0's so the group would be (0001). Either way, it is 1.
So, the first group on the left, (1), is simply 1. The second group, (1010), is 10 (decimal) or A in hex. The third group, (1101), is 13 (decimal) or D in hex. The final group, (0001), is simply 1. Combine all those values together and you get 1AD1.
It is exceedingly simple and only requires the memorization of that chart above, 10-15 or A-F. The long way to convert a number to hex, say 272 (decimal) is more of a challenge. Like binary, hex is translated using powers. Instead of a base 2 for binary, it's a base 16. For the binary tutorial, we talked about the place values being 2^0, 2^1, 2^2, 2^3 and so on. For hex, simply substitute 2 for 16 so it would be 16^0, 16^1, 16^2, 16^3 and so on.
So the first group, 16^0, would be 1 since anything to the 0 power is 1. The second group, 16^1, would be 16. The third group, 16^2, would be 256 since 16 x 16 is 256. The fourth group, 16^3, would be 4096 since 16 x 16 x 16 is 4096. As you can see the numbers get very large very quickly, which helps out a great deal in usable address space (for IPv6).
So using the number above, 272 in decimal, would be 110 in hex. We know this because 110 in hex is:
So in essence, you would be adding 1 group of 256 with a group of 16 and no groups of 1 which equals 272.
Say you have 4100 (decimal) and want to convert it to hex the long way. First, you can write out your place values, 16^0, 16^1, 16^2, 16^3. 16^4 would be 65,536 and since you do not have a group of 65,536 in 4100, there is no need to go to that place value. 16^3 is 4096 and you do have 1 group of 4096 in 4100 so you will use that place value. So put a 1 in the 4096 place value, since you have a group of 4096 in 4100. Then you have 4 left over.
Go to your next place value of 16^2 or 256. Do you have a group of 256 in 4? No, so put a 0 in that place value. Next go to 16^1 or 16. Do you have a group of 16 in 4? No, so put a 0 in that place value. Next is 16^0 or 1. Do you have a group of 1 in 4? Yes, you have 4 so put 4. Now you are done. Combine the values you got and the hex number is 1004.
Wildern Pupils if you log onto your school email account you can leave a comment via that ID.
TOMOYO Linux Cross Reference
Linux/fs/ntfs/logfile.c
/*
 * logfile.c - NTFS kernel journal handling. Part of the Linux-NTFS project.
 *
 * Copyright (c) 2002-2007 Anton Altaparmakov
 *
 * This program/include file is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as published
 * by the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program/include file is distributed in the hope that it will be
 * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program (in the main directory of the Linux-NTFS
 * distribution in the file COPYING); if not, write to the Free Software
 * Foundation,Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */

#ifdef NTFS_RW

#include <linux/types.h>
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/buffer_head.h>
#include <linux/bitops.h>
#include <linux/log2.h>

#include "attrib.h"
#include "aops.h"
#include "debug.h"
#include "logfile.h"
#include "malloc.h"
#include "volume.h"
#include "ntfs.h"

/**
 * ntfs_check_restart_page_header - check the page header for consistency
 * @vi:		$LogFile inode to which the restart page header belongs
 * @rp:		restart page header to check
 * @pos:	position in @vi at which the restart page header resides
 *
 * Check the restart page header @rp for consistency and return 'true' if it is
 * consistent and 'false' otherwise.
 *
 * This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
 * require the full restart page.
 */
static bool ntfs_check_restart_page_header(struct inode *vi,
		RESTART_PAGE_HEADER *rp, s64 pos)
{
	u32 logfile_system_page_size, logfile_log_page_size;
	u16 ra_ofs, usa_count, usa_ofs, usa_end = 0;
	bool have_usa = true;

	ntfs_debug("Entering.");
	/*
	 * If the system or log page sizes are smaller than the ntfs block size
	 * or either is not a power of 2 we cannot handle this log file.
	 */
	logfile_system_page_size = le32_to_cpu(rp->system_page_size);
	logfile_log_page_size = le32_to_cpu(rp->log_page_size);
	if (logfile_system_page_size < NTFS_BLOCK_SIZE ||
			logfile_log_page_size < NTFS_BLOCK_SIZE ||
			logfile_system_page_size &
			(logfile_system_page_size - 1) ||
			!is_power_of_2(logfile_log_page_size)) {
		ntfs_error(vi->i_sb, "$LogFile uses unsupported page size.");
		return false;
72 }
73 /*
74 * We must be either at !pos (1st restart page) or at pos = system page
75 * size (2nd restart page).
76 */
77 if (pos && pos != logfile_system_page_size) {
78 ntfs_error(vi->i_sb, "Found restart area in incorrect "
79 "position in $LogFile.");
80 return false;
81 }
82 /* We only know how to handle version 1.1. */
83 if (sle16_to_cpu(rp->major_ver) != 1 ||
84 sle16_to_cpu(rp->minor_ver) != 1) {
85 ntfs_error(vi->i_sb, "$LogFile version %i.%i is not "
86 "supported. (This driver supports version "
87 "1.1 only.)", (int)sle16_to_cpu(rp->major_ver),
88 (int)sle16_to_cpu(rp->minor_ver));
89 return false;
90 }
91 /*
92 * If chkdsk has been run the restart page may not be protected by an
93 * update sequence array.
94 */
95 if (ntfs_is_chkd_record(rp->magic) && !le16_to_cpu(rp->usa_count)) {
96 have_usa = false;
97 goto skip_usa_checks;
98 }
99 /* Verify the size of the update sequence array. */
100 usa_count = 1 + (logfile_system_page_size >> NTFS_BLOCK_SIZE_BITS);
101 if (usa_count != le16_to_cpu(rp->usa_count)) {
102 ntfs_error(vi->i_sb, "$LogFile restart page specifies "
103 "inconsistent update sequence array count.");
104 return false;
105 }
106 /* Verify the position of the update sequence array. */
107 usa_ofs = le16_to_cpu(rp->usa_ofs);
108 usa_end = usa_ofs + usa_count * sizeof(u16);
109 if (usa_ofs < sizeof(RESTART_PAGE_HEADER) ||
110 usa_end > NTFS_BLOCK_SIZE - sizeof(u16)) {
111 ntfs_error(vi->i_sb, "$LogFile restart page specifies "
112 "inconsistent update sequence array offset.");
113 return false;
114 }
115 skip_usa_checks:
116 /*
117 * Verify the position of the restart area. It must be:
118 * - aligned to 8-byte boundary,
119 * - after the update sequence array, and
120 * - within the system page size.
121 */
122 ra_ofs = le16_to_cpu(rp->restart_area_offset);
123 if (ra_ofs & 7 || (have_usa ? ra_ofs < usa_end :
124 ra_ofs < sizeof(RESTART_PAGE_HEADER)) ||
125 ra_ofs > logfile_system_page_size) {
126 ntfs_error(vi->i_sb, "$LogFile restart page specifies "
127 "inconsistent restart area offset.");
128 return false;
129 }
130 /*
131 * Only restart pages modified by chkdsk are allowed to have chkdsk_lsn
132 * set.
133 */
134 if (!ntfs_is_chkd_record(rp->magic) && sle64_to_cpu(rp->chkdsk_lsn)) {
135 ntfs_error(vi->i_sb, "$LogFile restart page is not modified "
136 "by chkdsk but a chkdsk LSN is specified.");
137 return false;
138 }
139 ntfs_debug("Done.");
140 return true;
141 }
142
143 /**
144 * ntfs_check_restart_area - check the restart area for consistency
145 * @vi: $LogFile inode to which the restart page belongs
146 * @rp: restart page whose restart area to check
147 *
148 * Check the restart area of the restart page @rp for consistency and return
149 * 'true' if it is consistent and 'false' otherwise.
150 *
151 * This function assumes that the restart page header has already been
152 * consistency checked.
153 *
154 * This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
155 * require the full restart page.
156 */
157 static bool ntfs_check_restart_area(struct inode *vi, RESTART_PAGE_HEADER *rp)
158 {
159 u64 file_size;
160 RESTART_AREA *ra;
161 u16 ra_ofs, ra_len, ca_ofs;
162 u8 fs_bits;
163
164 ntfs_debug("Entering.");
165 ra_ofs = le16_to_cpu(rp->restart_area_offset);
166 ra = (RESTART_AREA*)((u8*)rp + ra_ofs);
167 /*
168 * Everything before ra->file_size must be before the first word
169 * protected by an update sequence number. This ensures that it is
170 * safe to access ra->client_array_offset.
171 */
172 if (ra_ofs + offsetof(RESTART_AREA, file_size) >
173 NTFS_BLOCK_SIZE - sizeof(u16)) {
174 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
175 "inconsistent file offset.");
176 return false;
177 }
178 /*
179 * Now that we can access ra->client_array_offset, make sure everything
180 * up to the log client array is before the first word protected by an
181 * update sequence number. This ensures we can access all of the
182 * restart area elements safely. Also, the client array offset must be
183 * aligned to an 8-byte boundary.
184 */
185 ca_ofs = le16_to_cpu(ra->client_array_offset);
186 if (((ca_ofs + 7) & ~7) != ca_ofs ||
187 ra_ofs + ca_ofs > NTFS_BLOCK_SIZE - sizeof(u16)) {
188 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
189 "inconsistent client array offset.");
190 return false;
191 }
192 /*
193 * The restart area must end within the system page size both when
194 * calculated manually and as specified by ra->restart_area_length.
195 * Also, the calculated length must not exceed the specified length.
196 */
197 ra_len = ca_ofs + le16_to_cpu(ra->log_clients) *
198 sizeof(LOG_CLIENT_RECORD);
199 if (ra_ofs + ra_len > le32_to_cpu(rp->system_page_size) ||
200 ra_ofs + le16_to_cpu(ra->restart_area_length) >
201 le32_to_cpu(rp->system_page_size) ||
202 ra_len > le16_to_cpu(ra->restart_area_length)) {
203 ntfs_error(vi->i_sb, "$LogFile restart area is out of bounds "
204 "of the system page size specified by the "
205 "restart page header and/or the specified "
206 "restart area length is inconsistent.");
207 return false;
208 }
209 /*
210 * The ra->client_free_list and ra->client_in_use_list must be either
211 * LOGFILE_NO_CLIENT or less than ra->log_clients or they are
212 * overflowing the client array.
213 */
214 if ((ra->client_free_list != LOGFILE_NO_CLIENT &&
215 le16_to_cpu(ra->client_free_list) >=
216 le16_to_cpu(ra->log_clients)) ||
217 (ra->client_in_use_list != LOGFILE_NO_CLIENT &&
218 le16_to_cpu(ra->client_in_use_list) >=
219 le16_to_cpu(ra->log_clients))) {
220 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
221 "overflowing client free and/or in use lists.");
222 return false;
223 }
224 /*
225 * Check ra->seq_number_bits against ra->file_size for consistency.
226 * We cannot just use ffs() because the file size is not a power of 2.
227 */
228 file_size = (u64)sle64_to_cpu(ra->file_size);
229 fs_bits = 0;
230 while (file_size) {
231 file_size >>= 1;
232 fs_bits++;
233 }
234 if (le32_to_cpu(ra->seq_number_bits) != 67 - fs_bits) {
235 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
236 "inconsistent sequence number bits.");
237 return false;
238 }
239 /* The log record header length must be a multiple of 8. */
240 if (((le16_to_cpu(ra->log_record_header_length) + 7) & ~7) !=
241 le16_to_cpu(ra->log_record_header_length)) {
242 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
243 "inconsistent log record header length.");
244 return false;
245 }
246 /* Ditto for the log page data offset. */
247 if (((le16_to_cpu(ra->log_page_data_offset) + 7) & ~7) !=
248 le16_to_cpu(ra->log_page_data_offset)) {
249 ntfs_error(vi->i_sb, "$LogFile restart area specifies "
250 "inconsistent log page data offset.");
251 return false;
252 }
253 ntfs_debug("Done.");
254 return true;
255 }
256
257 /**
258 * ntfs_check_log_client_array - check the log client array for consistency
259 * @vi: $LogFile inode to which the restart page belongs
260 * @rp: restart page whose log client array to check
261 *
262 * Check the log client array of the restart page @rp for consistency and
263 * return 'true' if it is consistent and 'false' otherwise.
264 *
265 * This function assumes that the restart page header and the restart area have
266 * already been consistency checked.
267 *
268 * Unlike ntfs_check_restart_page_header() and ntfs_check_restart_area(), this
269 * function needs @rp->system_page_size bytes in @rp, i.e. it requires the full
270 * restart page and the page must be multi sector transfer deprotected.
271 */
272 static bool ntfs_check_log_client_array(struct inode *vi,
273 RESTART_PAGE_HEADER *rp)
274 {
275 RESTART_AREA *ra;
276 LOG_CLIENT_RECORD *ca, *cr;
277 u16 nr_clients, idx;
278 bool in_free_list, idx_is_first;
279
280 ntfs_debug("Entering.");
281 ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
282 ca = (LOG_CLIENT_RECORD*)((u8*)ra +
283 le16_to_cpu(ra->client_array_offset));
284 /*
285 * Check the ra->client_free_list first and then check the
286 * ra->client_in_use_list. Check each of the log client records in
287 * each of the lists and check that the array does not overflow the
288 * ra->log_clients value. Also keep track of the number of records
289 * visited as there cannot be more than ra->log_clients records and
290 * that way we detect eventual loops within a list.
291 */
292 nr_clients = le16_to_cpu(ra->log_clients);
293 idx = le16_to_cpu(ra->client_free_list);
294 in_free_list = true;
295 check_list:
296 for (idx_is_first = true; idx != LOGFILE_NO_CLIENT_CPU; nr_clients--,
297 idx = le16_to_cpu(cr->next_client)) {
298 if (!nr_clients || idx >= le16_to_cpu(ra->log_clients))
299 goto err_out;
300 /* Set @cr to the current log client record. */
301 cr = ca + idx;
302 /* The first log client record must not have a prev_client. */
303 if (idx_is_first) {
304 if (cr->prev_client != LOGFILE_NO_CLIENT)
305 goto err_out;
306 idx_is_first = false;
307 }
308 }
309 /* Switch to and check the in use list if we just did the free list. */
310 if (in_free_list) {
311 in_free_list = false;
312 idx = le16_to_cpu(ra->client_in_use_list);
313 goto check_list;
314 }
315 ntfs_debug("Done.");
316 return true;
317 err_out:
318 ntfs_error(vi->i_sb, "$LogFile log client array is corrupt.");
319 return false;
320 }
321
322 /**
323 * ntfs_check_and_load_restart_page - check the restart page for consistency
324 * @vi: $LogFile inode to which the restart page belongs
325 * @rp: restart page to check
326 * @pos: position in @vi at which the restart page resides
327 * @wrp: [OUT] copy of the multi sector transfer deprotected restart page
328 * @lsn: [OUT] set to the current logfile lsn on success
329 *
330 * Check the restart page @rp for consistency and return 0 if it is consistent
331 * and -errno otherwise. The restart page may have been modified by chkdsk in
332 * which case its magic is CHKD instead of RSTR.
333 *
334 * This function only needs NTFS_BLOCK_SIZE bytes in @rp, i.e. it does not
335 * require the full restart page.
336 *
337 * If @wrp is not NULL, on success, *@wrp will point to a buffer containing a
338 * copy of the complete multi sector transfer deprotected page. On failure,
339 * *@wrp is undefined.
340 *
341 * Similarly, if @lsn is not NULL, on success *@lsn will be set to the current
342 * logfile lsn according to this restart page. On failure, *@lsn is undefined.
343 *
344 * The following error codes are defined:
345 * -EINVAL - The restart page is inconsistent.
346 * -ENOMEM - Not enough memory to load the restart page.
347 * -EIO - Failed to read from $LogFile.
348 */
349 static int ntfs_check_and_load_restart_page(struct inode *vi,
350 RESTART_PAGE_HEADER *rp, s64 pos, RESTART_PAGE_HEADER **wrp,
351 LSN *lsn)
352 {
353 RESTART_AREA *ra;
354 RESTART_PAGE_HEADER *trp;
355 int size, err;
356
357 ntfs_debug("Entering.");
358 /* Check the restart page header for consistency. */
359 if (!ntfs_check_restart_page_header(vi, rp, pos)) {
360 /* Error output already done inside the function. */
361 return -EINVAL;
362 }
363 /* Check the restart area for consistency. */
364 if (!ntfs_check_restart_area(vi, rp)) {
365 /* Error output already done inside the function. */
366 return -EINVAL;
367 }
368 ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
369 /*
370 * Allocate a buffer to store the whole restart page so we can multi
371 * sector transfer deprotect it.
372 */
373 trp = ntfs_malloc_nofs(le32_to_cpu(rp->system_page_size));
374 if (!trp) {
375 ntfs_error(vi->i_sb, "Failed to allocate memory for $LogFile "
376 "restart page buffer.");
377 return -ENOMEM;
378 }
379 /*
380 * Read the whole of the restart page into the buffer. If it fits
381 * completely inside @rp, just copy it from there. Otherwise map all
382 * the required pages and copy the data from them.
383 */
384 size = PAGE_CACHE_SIZE - (pos & ~PAGE_CACHE_MASK);
385 if (size >= le32_to_cpu(rp->system_page_size)) {
386 memcpy(trp, rp, le32_to_cpu(rp->system_page_size));
387 } else {
388 pgoff_t idx;
389 struct page *page;
390 int have_read, to_read;
391
392 /* First copy what we already have in @rp. */
393 memcpy(trp, rp, size);
394 /* Copy the remaining data one page at a time. */
395 have_read = size;
396 to_read = le32_to_cpu(rp->system_page_size) - size;
397 idx = (pos + size) >> PAGE_CACHE_SHIFT;
398 BUG_ON((pos + size) & ~PAGE_CACHE_MASK);
399 do {
400 page = ntfs_map_page(vi->i_mapping, idx);
401 if (IS_ERR(page)) {
402 ntfs_error(vi->i_sb, "Error mapping $LogFile "
403 "page (index %lu).", idx);
404 err = PTR_ERR(page);
405 if (err != -EIO && err != -ENOMEM)
406 err = -EIO;
407 goto err_out;
408 }
409 size = min_t(int, to_read, PAGE_CACHE_SIZE);
410 memcpy((u8*)trp + have_read, page_address(page), size);
411 ntfs_unmap_page(page);
412 have_read += size;
413 to_read -= size;
414 idx++;
415 } while (to_read > 0);
416 }
417 /*
418 * Perform the multi sector transfer deprotection on the buffer if the
419 * restart page is protected.
420 */
421 if ((!ntfs_is_chkd_record(trp->magic) || le16_to_cpu(trp->usa_count))
422 && post_read_mst_fixup((NTFS_RECORD*)trp,
423 le32_to_cpu(rp->system_page_size))) {
424 /*
425 * A multi sector transfer error was detected. We only need to
426 * abort if the restart page contents exceed the multi sector
427 * transfer fixup of the first sector.
428 */
429 if (le16_to_cpu(rp->restart_area_offset) +
430 le16_to_cpu(ra->restart_area_length) >
431 NTFS_BLOCK_SIZE - sizeof(u16)) {
432 ntfs_error(vi->i_sb, "Multi sector transfer error "
433 "detected in $LogFile restart page.");
434 err = -EINVAL;
435 goto err_out;
436 }
437 }
438 /*
439 * If the restart page is modified by chkdsk or there are no active
440 * logfile clients, the logfile is consistent. Otherwise, need to
441 * check the log client records for consistency, too.
442 */
443 err = 0;
444 if (ntfs_is_rstr_record(rp->magic) &&
445 ra->client_in_use_list != LOGFILE_NO_CLIENT) {
446 if (!ntfs_check_log_client_array(vi, trp)) {
447 err = -EINVAL;
448 goto err_out;
449 }
450 }
451 if (lsn) {
452 if (ntfs_is_rstr_record(rp->magic))
453 *lsn = sle64_to_cpu(ra->current_lsn);
454 else /* if (ntfs_is_chkd_record(rp->magic)) */
455 *lsn = sle64_to_cpu(rp->chkdsk_lsn);
456 }
457 ntfs_debug("Done.");
458 if (wrp)
459 *wrp = trp;
460 else {
461 err_out:
462 ntfs_free(trp);
463 }
464 return err;
465 }
466
467 /**
468 * ntfs_check_logfile - check the journal for consistency
469 * @log_vi: struct inode of loaded journal $LogFile to check
470 * @rp: [OUT] on success this is a copy of the current restart page
471 *
472 * Check the $LogFile journal for consistency and return 'true' if it is
473 * consistent and 'false' if not. On success, the current restart page is
474 * returned in *@rp. Caller must call ntfs_free(*@rp) when finished with it.
475 *
476 * At present we only check the two restart pages and ignore the log record
477 * pages.
478 *
479 * Note that the MstProtected flag is not set on the $LogFile inode and hence
480 * when reading pages they are not deprotected. This is because we do not know
481 * if the $LogFile was created on a system with a different page size to ours
482 * yet and mst deprotection would fail if our page size is smaller.
483 */
484 bool ntfs_check_logfile(struct inode *log_vi, RESTART_PAGE_HEADER **rp)
485 {
486 s64 size, pos;
487 LSN rstr1_lsn, rstr2_lsn;
488 ntfs_volume *vol = NTFS_SB(log_vi->i_sb);
489 struct address_space *mapping = log_vi->i_mapping;
490 struct page *page = NULL;
491 u8 *kaddr = NULL;
492 RESTART_PAGE_HEADER *rstr1_ph = NULL;
493 RESTART_PAGE_HEADER *rstr2_ph = NULL;
494 int log_page_size, log_page_mask, err;
495 bool logfile_is_empty = true;
496 u8 log_page_bits;
497
498 ntfs_debug("Entering.");
499 /* An empty $LogFile must have been clean before it got emptied. */
500 if (NVolLogFileEmpty(vol))
501 goto is_empty;
502 size = i_size_read(log_vi);
503 /* Make sure the file doesn't exceed the maximum allowed size. */
504 if (size > MaxLogFileSize)
505 size = MaxLogFileSize;
506 /*
507 * Truncate size to a multiple of the page cache size or the default
508 * log page size if the page cache size is between the default log page
510 * size and twice that.
511 */
512 if (PAGE_CACHE_SIZE >= DefaultLogPageSize && PAGE_CACHE_SIZE <=
513 DefaultLogPageSize * 2)
514 log_page_size = DefaultLogPageSize;
515 else
516 log_page_size = PAGE_CACHE_SIZE;
517 log_page_mask = log_page_size - 1;
518 /*
519 * Use ntfs_ffs() instead of ffs() to enable the compiler to
520 * optimize log_page_size and log_page_bits into constants.
521 */
522 log_page_bits = ntfs_ffs(log_page_size) - 1;
523 size &= ~(s64)(log_page_size - 1);
524 /*
525 * Ensure the log file is big enough to store at least the two restart
526 * pages and the minimum number of log record pages.
527 */
528 if (size < log_page_size * 2 || (size - log_page_size * 2) >>
529 log_page_bits < MinLogRecordPages) {
530 ntfs_error(vol->sb, "$LogFile is too small.");
531 return false;
532 }
533 /*
534 * Read through the file looking for a restart page. Since the restart
535 * page header is at the beginning of a page we only need to search at
536 * what could be the beginning of a page (for each page size) rather
537 * than scanning the whole file byte by byte. If all potential places
538 * contain empty and uninitialized records, the log file can be assumed
539 * to be empty.
540 */
541 for (pos = 0; pos < size; pos <<= 1) {
542 pgoff_t idx = pos >> PAGE_CACHE_SHIFT;
543 if (!page || page->index != idx) {
544 if (page)
545 ntfs_unmap_page(page);
546 page = ntfs_map_page(mapping, idx);
547 if (IS_ERR(page)) {
548 ntfs_error(vol->sb, "Error mapping $LogFile "
549 "page (index %lu).", idx);
550 goto err_out;
551 }
552 }
553 kaddr = (u8*)page_address(page) + (pos & ~PAGE_CACHE_MASK);
554 /*
555 * A non-empty block means the logfile is not empty while an
556 * empty block after a non-empty block has been encountered
557 * means we are done.
558 */
559 if (!ntfs_is_empty_recordp((le32*)kaddr))
560 logfile_is_empty = false;
561 else if (!logfile_is_empty)
562 break;
563 /*
564 * A log record page means there cannot be a restart page after
565 * this so no need to continue searching.
566 */
567 if (ntfs_is_rcrd_recordp((le32*)kaddr))
568 break;
569 /* If not a (modified by chkdsk) restart page, continue. */
570 if (!ntfs_is_rstr_recordp((le32*)kaddr) &&
571 !ntfs_is_chkd_recordp((le32*)kaddr)) {
572 if (!pos)
573 pos = NTFS_BLOCK_SIZE >> 1;
574 continue;
575 }
576 /*
577 * Check the (modified by chkdsk) restart page for consistency
578 * and get a copy of the complete multi sector transfer
579 * deprotected restart page.
580 */
581 err = ntfs_check_and_load_restart_page(log_vi,
582 (RESTART_PAGE_HEADER*)kaddr, pos,
583 !rstr1_ph ? &rstr1_ph : &rstr2_ph,
584 !rstr1_ph ? &rstr1_lsn : &rstr2_lsn);
585 if (!err) {
586 /*
587 * If we have now found the first (modified by chkdsk)
588 * restart page, continue looking for the second one.
589 */
590 if (!pos) {
591 pos = NTFS_BLOCK_SIZE >> 1;
592 continue;
593 }
594 /*
595 * We have now found the second (modified by chkdsk)
596 * restart page, so we can stop looking.
597 */
598 break;
599 }
600 /*
601 * Error output already done inside the function. Note, we do
602 * not abort if the restart page was invalid as we might still
603 * find a valid one further in the file.
604 */
605 if (err != -EINVAL) {
606 ntfs_unmap_page(page);
607 goto err_out;
608 }
609 /* Continue looking. */
610 if (!pos)
611 pos = NTFS_BLOCK_SIZE >> 1;
612 }
613 if (page)
614 ntfs_unmap_page(page);
615 if (logfile_is_empty) {
616 NVolSetLogFileEmpty(vol);
617 is_empty:
618 ntfs_debug("Done. ($LogFile is empty.)");
619 return true;
620 }
621 if (!rstr1_ph) {
622 BUG_ON(rstr2_ph);
623 ntfs_error(vol->sb, "Did not find any restart pages in "
624 "$LogFile and it was not empty.");
625 return false;
626 }
627 /* If both restart pages were found, use the more recent one. */
628 if (rstr2_ph) {
629 /*
630 * If the second restart area is more recent, switch to it.
631 * Otherwise just throw it away.
632 */
633 if (rstr2_lsn > rstr1_lsn) {
634 ntfs_debug("Using second restart page as it is more "
635 "recent.");
636 ntfs_free(rstr1_ph);
637 rstr1_ph = rstr2_ph;
638 /* rstr1_lsn = rstr2_lsn; */
639 } else {
640 ntfs_debug("Using first restart page as it is more "
641 "recent.");
642 ntfs_free(rstr2_ph);
643 }
644 rstr2_ph = NULL;
645 }
646 /* All consistency checks passed. */
647 if (rp)
648 *rp = rstr1_ph;
649 else
650 ntfs_free(rstr1_ph);
651 ntfs_debug("Done.");
652 return true;
653 err_out:
654 if (rstr1_ph)
655 ntfs_free(rstr1_ph);
656 return false;
657 }
658
659 /**
660 * ntfs_is_logfile_clean - check in the journal if the volume is clean
661 * @log_vi: struct inode of loaded journal $LogFile to check
662 * @rp: copy of the current restart page
663 *
664 * Analyze the $LogFile journal and return 'true' if it indicates the volume was
665 * shutdown cleanly and 'false' if not.
666 *
667 * At present we only look at the two restart pages and ignore the log record
668 * pages. This is a little bit crude in that there will be a very small number
669 * of cases where we think that a volume is dirty when in fact it is clean.
670 * This should only affect volumes that have not been shutdown cleanly but did
671 * not have any pending, non-check-pointed i/o, i.e. they were completely idle
672 * at least for the five seconds preceding the unclean shutdown.
673 *
674 * This function assumes that the $LogFile journal has already been consistency
675 * checked by a call to ntfs_check_logfile() and in particular if the $LogFile
676 * is empty this function requires that NVolLogFileEmpty() is true otherwise an
677 * empty volume will be reported as dirty.
678 */
679 bool ntfs_is_logfile_clean(struct inode *log_vi, const RESTART_PAGE_HEADER *rp)
680 {
681 ntfs_volume *vol = NTFS_SB(log_vi->i_sb);
682 RESTART_AREA *ra;
683
684 ntfs_debug("Entering.");
685 /* An empty $LogFile must have been clean before it got emptied. */
686 if (NVolLogFileEmpty(vol)) {
687 ntfs_debug("Done. ($LogFile is empty.)");
688 return true;
689 }
690 BUG_ON(!rp);
691 if (!ntfs_is_rstr_record(rp->magic) &&
692 !ntfs_is_chkd_record(rp->magic)) {
693 ntfs_error(vol->sb, "Restart page buffer is invalid. This is "
694 "probably a bug in that the $LogFile should "
695 "have been consistency checked before calling "
696 "this function.");
697 return false;
698 }
699 ra = (RESTART_AREA*)((u8*)rp + le16_to_cpu(rp->restart_area_offset));
700 /*
701 * If the $LogFile has active clients, i.e. it is open, and we do not
702 * have the RESTART_VOLUME_IS_CLEAN bit set in the restart area flags,
703 * we assume there was an unclean shutdown.
704 */
705 if (ra->client_in_use_list != LOGFILE_NO_CLIENT &&
706 !(ra->flags & RESTART_VOLUME_IS_CLEAN)) {
707 ntfs_debug("Done. $LogFile indicates a dirty shutdown.");
708 return false;
709 }
710 /* $LogFile indicates a clean shutdown. */
711 ntfs_debug("Done. $LogFile indicates a clean shutdown.");
712 return true;
713 }
714
715 /**
716 * ntfs_empty_logfile - empty the contents of the $LogFile journal
717 * @log_vi: struct inode of loaded journal $LogFile to empty
718 *
719 * Empty the contents of the $LogFile journal @log_vi and return 'true' on
720 * success and 'false' on error.
721 *
722 * This function assumes that the $LogFile journal has already been consistency
723 * checked by a call to ntfs_check_logfile() and that ntfs_is_logfile_clean()
724 * has been used to ensure that the $LogFile is clean.
725 */
726 bool ntfs_empty_logfile(struct inode *log_vi)
727 {
728 VCN vcn, end_vcn;
729 ntfs_inode *log_ni = NTFS_I(log_vi);
730 ntfs_volume *vol = log_ni->vol;
731 struct super_block *sb = vol->sb;
732 runlist_element *rl;
733 unsigned long flags;
734 unsigned block_size, block_size_bits;
735 int err;
736 bool should_wait = true;
737
738 ntfs_debug("Entering.");
739 if (NVolLogFileEmpty(vol)) {
740 ntfs_debug("Done.");
741 return true;
742 }
743 /*
744 * We cannot use ntfs_attr_set() because we may be still in the middle
745 * of a mount operation. Thus we do the emptying by hand by first
746 * zapping the page cache pages for the $LogFile/$DATA attribute and
747 * then emptying each of the buffers in each of the clusters specified
748 * by the runlist by hand.
749 */
750 block_size = sb->s_blocksize;
751 block_size_bits = sb->s_blocksize_bits;
752 vcn = 0;
753 read_lock_irqsave(&log_ni->size_lock, flags);
754 end_vcn = (log_ni->initialized_size + vol->cluster_size_mask) >>
755 vol->cluster_size_bits;
756 read_unlock_irqrestore(&log_ni->size_lock, flags);
757 truncate_inode_pages(log_vi->i_mapping, 0);
758 down_write(&log_ni->runlist.lock);
759 rl = log_ni->runlist.rl;
760 if (unlikely(!rl || vcn < rl->vcn || !rl->length)) {
761 map_vcn:
762 err = ntfs_map_runlist_nolock(log_ni, vcn, NULL);
763 if (err) {
764 ntfs_error(sb, "Failed to map runlist fragment (error "
765 "%d).", -err);
766 goto err;
767 }
768 rl = log_ni->runlist.rl;
769 BUG_ON(!rl || vcn < rl->vcn || !rl->length);
770 }
771 /* Seek to the runlist element containing @vcn. */
772 while (rl->length && vcn >= rl[1].vcn)
773 rl++;
774 do {
775 LCN lcn;
776 sector_t block, end_block;
777 s64 len;
778
779 /*
780 * If this run is not mapped map it now and start again as the
781 * runlist will have been updated.
782 */
783 lcn = rl->lcn;
784 if (unlikely(lcn == LCN_RL_NOT_MAPPED)) {
785 vcn = rl->vcn;
786 goto map_vcn;
787 }
788 /* If this run is not valid abort with an error. */
789 if (unlikely(!rl->length || lcn < LCN_HOLE))
790 goto rl_err;
791 /* Skip holes. */
792 if (lcn == LCN_HOLE)
793 continue;
794 block = lcn << vol->cluster_size_bits >> block_size_bits;
795 len = rl->length;
796 if (rl[1].vcn > end_vcn)
797 len = end_vcn - rl->vcn;
798 end_block = (lcn + len) << vol->cluster_size_bits >>
799 block_size_bits;
800 /* Iterate over the blocks in the run and empty them. */
801 do {
802 struct buffer_head *bh;
803
804 /* Obtain the buffer, possibly not uptodate. */
805 bh = sb_getblk(sb, block);
806 BUG_ON(!bh);
807 /* Setup buffer i/o submission. */
808 lock_buffer(bh);
809 bh->b_end_io = end_buffer_write_sync;
810 get_bh(bh);
811 /* Set the entire contents of the buffer to 0xff. */
812 memset(bh->b_data, -1, block_size);
813 if (!buffer_uptodate(bh))
814 set_buffer_uptodate(bh);
815 if (buffer_dirty(bh))
816 clear_buffer_dirty(bh);
817 /*
818 * Submit the buffer and wait for i/o to complete but
819 * only for the first buffer so we do not miss really
820 * serious i/o errors. Once the first buffer has
821 * completed ignore errors afterwards as we can assume
822 * that if one buffer worked all of them will work.
823 */
824 submit_bh(WRITE, bh);
825 if (should_wait) {
826 should_wait = false;
827 wait_on_buffer(bh);
828 if (unlikely(!buffer_uptodate(bh)))
829 goto io_err;
830 }
831 brelse(bh);
832 } while (++block < end_block);
833 } while ((++rl)->vcn < end_vcn);
834 up_write(&log_ni->runlist.lock);
835 /*
836 * Zap the pages again just in case any got instantiated whilst we were
837 * emptying the blocks by hand. FIXME: We may not have completed
838 * writing to all the buffer heads yet so this may happen too early.
839 * We really should use a kernel thread to do the emptying
840 * asynchronously and then we can also set the volume dirty and output
841 * an error message if emptying should fail.
842 */
843 truncate_inode_pages(log_vi->i_mapping, 0);
844 /* Set the flag so we do not have to do it again on remount. */
845 NVolSetLogFileEmpty(vol);
846 ntfs_debug("Done.");
847 return true;
848 io_err:
849 ntfs_error(sb, "Failed to write buffer. Unmount and run chkdsk.");
850 goto dirty_err;
851 rl_err:
852 ntfs_error(sb, "Runlist is corrupt. Unmount and run chkdsk.");
853 dirty_err:
854 NVolSetErrors(vol);
855 err = -EIO;
856 err:
857 up_write(&log_ni->runlist.lock);
858 ntfs_error(sb, "Failed to fill $LogFile with 0xff bytes (error %d).",
859 -err);
860 return false;
861 }
862
863 #endif /* NTFS_RW */
864
Could 11.5 Million 401's be causing bottlenecks?
I'm going to preface this with a warning: My knowledge about servers and networking is VERY limited, and if you provide me with technical answers, I probably won't understand much until I research ...

401 IIS Error for SearchAdmin.asmx
I have a three server SharePoint 2007 MOSS environment where my IIS logs continue to get pounded with 401.1 and 401.2. These logs are filling up so much that they consume my HDD. I can tell from ...
nodejs – req.body returns undefined when sent to be handled in another file
I’m working on a project involving nodejs and I’m having a problem with a request body that is undefined.
On the client side I fetch some data
const userData = {
username,
email,
password
};
fetch(API_URL, {
method: 'POST',
body: JSON.stringify(userData),
headers: {
'content-type': 'application/json'
}
});
Then I have a file on the serverside called index.js
const express = require('express');
const cors = require('cors');
const { signup, login } = require ('./authController');

const app = express();
app.post('/signup', (req, res) =>{
console.log(req.body);
signup(req.body)
});
When I log req.body in app.post the values are correct, but when I log it in the next file, authController.js, it is suddenly undefined.
const { createUser, findUserByEmail } = require('./user');
const { comparePassword, createToken } = require('./auth');
// Controller function to handle a signup request
async function signup(req, res) {
console.log(req.body);
try {
} catch (error) {
}
}
I also get the following error telling me the property is undefined or null
TypeError: Cannot destructure property email of ‘undefined’ or ‘null’.
Anyone know what might cause this and how I can fix this, all help is greatly appreciated.
Solution:
Add app.use(express.json()) above your routes so Express parses the JSON body; after that, req.body will have the email property on the server. Keep body: JSON.stringify(userData) on the client together with the 'content-type': 'application/json' header, otherwise the body arrives as the string "[object Object]". Also note that signup is declared with two arguments, (req, res), but it is being called with one, signup(req.body); inside signup, req is then the body object and req.body is undefined, which is exactly the error you see. Register it directly as the route handler instead.
const userData = {
username,
email,
password
};
fetch(API_URL, {
method: 'POST',
body: JSON.stringify(userData),
headers: {
'content-type': 'application/json'
}
});
const express = require('express');
const cors = require('cors');
const { signup, login } = require('./authController');

const app = express();
app.use(express.json());

app.post('/signup', signup);
const { createUser, findUserByEmail } = require('./user');
const { comparePassword, createToken } = require('./auth');
// Controller function to handle a signup request
async function signup(req, res) {
console.log(req.body);
try {
} catch (error) {
}
}
Answers
2014-11-24T06:08:11-05:00
First off, what you are referring to as the zeroth power is not actually that; it is the symbol for degrees.
If the angles are complementary it means they add up to 90 degrees.
Therefore (2x+2) + (3x-5) = 90
2x + 3x +2 -5 = 90
5x - 3 = 90
5x = 90+3.
5x = 93. Divide both sides by 5.
x = 93/5 = 18.6
Therefore: (2x+2) = (2*18.6+2) = 37.2 + 2 = 39.2 degrees.
(3x - 5) = (3*18.6 - 5) = 55.8 - 5 = 50.8 degrees.
So the measure of the smaller angle is 39.2 degrees.
Cheers.
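As a quick check, the working above can be verified in a few lines of Python:

```python
x = 93 / 5            # from 5x = 93
a = 2 * x + 2         # first angle, about 39.2 degrees
b = 3 * x - 5         # second angle, about 50.8 degrees

# Complementary angles must add up to 90 degrees.
assert abs((a + b) - 90) < 1e-9
```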
Coding a weapon: AK-47
Submitted by: alix1995alix, 26.03.2011
Today I'll show you how to make a simple AK-47, with a detailed description of its functions and options :) I'll do my best to explain the tricky C++ terms in plain language.
So let's get started… To get this instrument of destruction into your mod you need, at a minimum, to know how to compile the project in C++ and how to add a .cpp file to it :)
The tutorial itself will be explained through the code. First, open Visual Studio and add a C++ file to your project; add it to Server.dll like this:
(screenshot: adding the new .cpp file to the server project in Visual Studio)
Copy the code below into it. Reading through it, you will see how a simple weapon is made.
// 15.02.06
// Who coded this? Lolmen did
// How? All by himself! :D
// All of this code belongs to Vitaly Protasov and was submitted for study
// by people of the Nashalife community :)
// You are free to use it however you like :)
#include "cbase.h" // the base; nothing works without it
#include "basehlcombatweapon.h" // defines which weapon properties we will use
#include "NPCevent.h" // for NPCs
#include "basecombatcharacter.h" // for the player
#include "AI_BaseNPC.h" // for the AI
#include "player.h" // the player itself
#include "game.h" // the game
#include "in_buttons.h" // for mouse buttons
#include "AI_Memory.h" // AI memory
#include "soundent.h" // sounds, more or less
// Everything declared in our class
// The AK-47 weapon class, derived from the machine-gun class :)
class CWeaponAK47 : public CHLSelectFireMachineGun
{
	DECLARE_DATADESC();
public:
	DECLARE_CLASS( CWeaponAK47, CHLSelectFireMachineGun );
	CWeaponAK47();
	DECLARE_SERVERCLASS();
	void Precache( void ); // precaching
	void AddViewKick( void ); // how the view shakes while firing
	void SecondaryAttack( void ); // secondary attack
	void ItemPostFrame( void ); // runs for the weapon every frame
	int GetMinBurst() { return 2; } // minimum burst length
	int GetMaxBurst() { return 6; } // maximum burst length
	virtual void Equip( CBaseCombatCharacter *pOwner ); // when an NPC is given the weapon
	bool Reload( void ); // a bool flag for reloading
	float GetFireRate( void ) { return 0.1f; } // rate of fire
	int CapabilitiesGet( void ) { return bits_CAP_WEAPON_RANGE_ATTACK1; }
	virtual const Vector& GetBulletSpread( void ) // get the spread cone
	{
		static const Vector cone = VECTOR_CONE_5DEGREES; // spread cone from the muzzle
		return cone;
	}
	const WeaponProficiencyInfo_t *GetProficiencyValues(); // proficiency values
	void Operator_HandleAnimEvent( animevent_t *pEvent, CBaseCombatCharacter *pOperator )
	{
		switch( pEvent->event )
		{
		case EVENT_WEAPON_SMG1: // indicates that we reuse the SMG1 firing event for NPCs
			{
				Vector vecShootOrigin, vecShootDir; // name the vectors
				QAngle angDiscard;
				if ((pEvent->options == NULL) || (pEvent->options[0] == '\0') || (!pOperator->GetAttachment(pEvent->options, vecShootOrigin, angDiscard)))
				{
					vecShootOrigin = pOperator->Weapon_ShootPosition(); // the position the bullets come from
				}
				CAI_BaseNPC *npc = pOperator->MyNPCPointer(); // find the NPC
				ASSERT( npc != NULL ); // there must be an NPC
				vecShootDir = npc->GetActualShootTrajectory( vecShootOrigin ); // where the NPC should shoot
				WeaponSoundRealtime( SINGLE_NPC ); // firing sound for NPCs
				CSoundEnt::InsertSound( SOUND_COMBAT, pOperator->GetAbsOrigin(), SOUNDENT_VOLUME_MACHINEGUN, 0.2, pOperator ); // sound parameters
				pOperator->FireBullets( 1, vecShootOrigin, vecShootDir, VECTOR_CONE_PRECALCULATED,
					MAX_TRACE_LENGTH, m_iPrimaryAmmoType, 2, entindex(), 0 ); // what the NPC fires
				pOperator->DoMuzzleFlash(); // show a muzzle flash on each shot
				m_iClip1 = m_iClip1 - 1; // how many rounds one shot consumes
			}
			break;
		default:
			BaseClass::Operator_HandleAnimEvent( pEvent, pOperator );
			break;
		}
	}
	DECLARE_ACTTABLE();
};
// Now wire up the class
IMPLEMENT_SERVERCLASS_ST(CWeaponAK47, DT_WeaponAK47)
END_SEND_TABLE()
LINK_ENTITY_TO_CLASS( weapon_ak47, CWeaponAK47 );
PRECACHE_WEAPON_REGISTER(weapon_ak47);
BEGIN_DATADESC( CWeaponAK47 )
END_DATADESC()
acttable_t CWeaponAK47::m_acttable[] = // activities (animations for NPCs)
{
	{ ACT_RANGE_ATTACK1, ACT_RANGE_ATTACK_SMG1, true }, // tell the NPC to use the SMG1 firing animation
};
IMPLEMENT_ACTTABLE(CWeaponAK47);
//=========================================================
CWeaponAK47::CWeaponAK47( )
{
	// Constructor for the AK
	m_fMinRange1 = 64; // minimum bullet range
	m_fMaxRange1 = 1400; // maximum bullet range
}
//-----------------------------------------------------------------------------
// Purpose: precaching
//-----------------------------------------------------------------------------
void CWeaponAK47::Precache( void )
{
	// What do we precache? Nothing of our own here; we just call the base precache :)
	BaseClass::Precache();
}
//-----------------------------------------------------------------------------
// Purpose: give allied NPCs a longer bullet range.
//-----------------------------------------------------------------------------
void CWeaponAK47::Equip( CBaseCombatCharacter *pOwner )
{
	if( pOwner->Classify() == CLASS_PLAYER_ALLY ) // if a player ally holds the gun
	{
		m_fMaxRange1 = 3000; // maximum bullet range
	}
	else // otherwise the bullet range stays at
	{
		m_fMaxRange1 = 1400; // 1400
	}
	BaseClass::Equip( pOwner );
}
//-----------------------------------------------------------------------------
// Purpose: everything needed for reloading
//-----------------------------------------------------------------------------
bool CWeaponAK47::Reload( void )
{
	bool fRet; // the flag
	float fCacheTime = m_flNextSecondaryAttack; // remember the secondary-attack time
	fRet = DefaultReload( GetMaxClip1(), GetMaxClip2(), ACT_VM_RELOAD ); // reload clip 1 and clip 2, playing the ACT_VM_RELOAD animation
	if ( fRet ) // if the reload started, then
	{
		// Block the secondary attack so there are no glitches:
		// firing is not allowed while we are reloading
		m_flNextSecondaryAttack = GetOwner()->m_flNextAttack = fCacheTime;
		WeaponSound( RELOAD ); // play the RELOAD sound from the weapon script
	}
	return fRet; // return fRet
}
//-----------------------------------------------------------------------------
// Purpose: add view shake while firing
//-----------------------------------------------------------------------------
void CWeaponAK47::AddViewKick( void )
{
	#define EASY_DAMPEN 0.5f // light dampening
	#define MAX_VERTICAL_KICK 15.0f // max vertical kick in degrees
	#define SLIDE_LIMIT 3.0f // all of that happens over this many seconds
	// Who do we shake?
	CBasePlayer *pPlayer = ToBasePlayer( GetOwner() ); // the player :D
	if ( pPlayer == NULL )
		return; // no player, nobody to shake!
	DoMachineGunKick( pPlayer, EASY_DAMPEN, MAX_VERTICAL_KICK, m_fFireDuration, SLIDE_LIMIT ); // and this applies everything we set up
}
//-----------------------------------------------------------------
// Purpose: check whether the player pressed a mouse button :)
// Primary fire works by default, so we leave it alone.
// But what should happen when the right mouse button is pressed,
// which is bound to the secondary attack?
//-----------------------------------------------------------------
void CWeaponAK47::ItemPostFrame( void )
{
	// If there is a player, all is well :)
	CBasePlayer *pOwner = ToBasePlayer( GetOwner() );
	// If there is no player, do nothing :)
	if ( pOwner == NULL )
		return;
	// If the player pressed ATTACK2 (the right mouse button by default)
	// and the next secondary-attack time is not in the future
	if ( pOwner->m_nButtons & IN_ATTACK2 )
	{
		if (m_flNextSecondaryAttack <= gpGlobals->curtime)
		{
			SecondaryAttack(); // here we call the weapon's secondary-fire function
			pOwner->m_nButtons &= ~IN_ATTACK2; // clear the ATTACK2 button
			return; // and wait for the next press :)
		}
	}
	BaseClass::ItemPostFrame(); // also run the base class logic
}
//-----------------------------------------------------------------------------
// Purpose: define what our secondary attack will be :)
//-----------------------------------------------------------------------------
void CWeaponAK47::SecondaryAttack( void )
{
	// Empty for now; here you could add:
	// zooming, a grenade-launcher shot, a bayonet strike, and so on
}
//------------------------------------------------------------------
// Purpose: the least obvious part, "weapon proficiency".
// In short, this defines how good our gun is.
// For example, if a pistol runs out of ammo, which weapon
// should be switched to first? The best one :)
// Our gun will be a bit better than the SMG :)
//-----------------------------------------------------------------
const WeaponProficiencyInfo_t *CWeaponAK47::GetProficiencyValues()
{
	static WeaponProficiencyInfo_t proficiencyTable[] =
	{
		{ 8.0, 0.75 },
		{ 6.00, 0.75 },
		{ 10.0/2.0, 0.75 },
		{ 5.0/3.0, 0.75 },
		{ 2.00, 1.0 },
	};
	COMPILE_TIME_ASSERT( ARRAYSIZE(proficiencyTable) == WEAPON_PROFICIENCY_PERFECT + 1); // a compile-time sanity check on the table size :)
	return proficiencyTable;
}
Now go to X:\your project folder\src\cl_dll\hl2_hud\c_weapon__stubs_hl2.cpp. Open that file and add a new line after:
STUB_WEAPON_CLASS( weapon_smg1, WeaponSMG1, C_HLSelectFireMachineGun );
namely this one:
STUB_WEAPON_CLASS( weapon_ak47, WeaponAK47, C_HLSelectFireMachineGun );
Now compile the project :)
Only one small thing remains: writing the weapon script.
We will write the script as follows. Go to your Steam folder, then to SteamApps\SourceMods\your mod folder\scripts, and create a text file there named weapon_ak47 with the following contents:
// Small Machine Gun 1
WeaponData
{
// Weapon data is loaded by both the Game and Client DLLs.
"printname" "AK47"
"viewmodel" "models/weapons/v_smg1.mdl" // replace with your own model
"playermodel" "models/weapons/w_smg1.mdl" // replace with your own model
"anim_prefix" "smg2"
"bucket" "2"
"bucket_position" "9"
"clip_size" "39"
"default_clip" "30"
"primary_ammo" "SMG1"
"secondary_ammo" "None"
"weight" "3"
"item_flags" "0"
// Sounds for the weapon. There is a max of 16 sounds per category (i.e. max 16 "single_shot" sounds)
SoundData
{
"reload" "Weapon_SMG1.Reload"
"reload_npc" "Weapon_SMG1.NPC_Reload"
"empty" "Weapon_SMG1.Empty"
"single_shot" "Weapon_SMG1.Single"
"single_shot_npc" "Weapon_SMG1.NPC_Single"
"special1" "Weapon_SMG1.Special1"
"special2" "Weapon_SMG1.Special2"
"double_shot" "Weapon_SMG1.Double"
"burst" "Weapon_SMG1.Burst"
}
// Weapon Sprite data is loaded by the Client DLL.
TextureData
{
"weapon"
{
"font" "WeaponIcons"
"character" "a"
}
"weapon_s"
{
"font" "WeaponIconsSelected"
"character" "a"
}
"ammo"
{
"font" "WeaponIcons"
"character" "r"
}
"ammo2"
{
"font" "WeaponIcons"
"character" "t"
}
"crosshair"
{
"font" "Crosshairs"
"character" "Q"
}
"autoaim"
{
"file" "sprites/crosshairs"
"x" "0"
"y" "48"
"width" "24"
"height" "24"
}
}
}
Then launch your mod…
In the game, type give weapon_ak47 in the console and watch it all work.
Author: Lolmen, aka Vitaly Alexandrovich Protasov.
1.5 as a Percent
Welcome to 1.5 as a percent, our page dedicated to understanding and calculating 1.5 percent. Whether you’re looking for a simple conversion or want to explore the concept in more depth, you’ve come to the right place. Here, you’ll find information on how to convert 1.5 to percent, a calculator to assist you, and the formula to use. So let’s dive in and explore the world of 1.5 percent!
How to Convert 1.5 as a Percent
Converting 1.5 to percent is a straightforward process. To convert any decimal to a percent, you need to multiply it by 100. In the case of 1.5, you would multiply it by 100:
1.5 * 100 = 150
This means that 1.5 as a percent is equal to 150%. You can also represent it using the percent sign as 150%. So, to convert 1.5 to percent, you simply move the decimal point two places to the right. It’s as simple as that!
Using the Decimal ⇄ Percent Calculator
If you find yourself frequently needing to convert decimals to percents, our Decimal ⇄ Percent Calculator is here to help. This calculator allows you to quickly and accurately convert any decimal to a percent and vice versa. Simply enter the decimal value in the calculator, and it will instantly provide you with the corresponding percent value. It’s a handy tool to have at your disposal for all your decimal to percent conversion needs!
What is 1.5 as a Percent?
Now that we’ve discussed how to convert 1.5 to percent, let’s answer the question of what exactly is 1.5 as a percent. One point five as a percent is equal to 150%. So, if you’ve been looking for the answer to what is 1.5 as a percent, you now have your answer!
It’s worth noting that you can use the search form in the sidebar to find many other decimal to percent conversions, including 1.5 in percentage. Simply enter the desired value, such as 1.5, followed by “to percents,” and the search form will provide you with the information you’re looking for.
How to Write 1.5 as a Percent
When it comes to writing 1.5 as a percent, there are a few different ways to do it. The most common method is to multiply 1.5 by 100 to obtain the numerical value, and then append the percent sign (%). So, for 1.5, you would write it as 150%. This format is commonly used in tables and texts with space restrictions.
In running text, such as this article, you may choose to write it out as "150 percent" in American English or "150 per cent" elsewhere. In the UK, you may also come across the abbreviations pc and pct., for example, 150pc. It's worth noting that the old forms "150 per centum" and "per cent." followed by a period are rarely used these days and can be considered obsolete in daily life.
Ultimately, there is no difference in meaning between the two-word “per cent” and “percent.” The choice between “150 per cent” and “150 percent” is simply a matter of personal preference.
1.5 as a Percentage of a Number
Calculating what 1.5 is as a percentage of a certain number requires a simple formula. To determine the percentage, divide 1.5 by the given number and multiply the result by 100. For example, if you want to find out what 1.5 is as a percentage of 10:
(1.5 / 10) * 100% = 15%
In this case, 1.5 is 15% of 10. This formula can be applied to any number to find the corresponding percentage. To make things even easier, you can use our decimals to percents calculator. Simply enter 1.5 in the number field and insert your desired value in the field labeled “% of.” The calculator will automatically perform the conversion, rounding the result to ten decimal places.
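Both rules on this page are one-liners in code. Here is a small Python sketch (the function names are my own, not part of this site's calculator):

```python
def decimal_to_percent(value):
    """Multiply by 100: 1.5 becomes 150."""
    return value * 100

def as_percent_of(part, whole):
    """What `part` is as a percentage of `whole`."""
    return part / whole * 100

print(decimal_to_percent(1.5))        # 150.0
print(round(as_percent_of(1.5, 10)))  # 15
```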
Conclusion
Converting 1.5 to percent is a simple process that involves multiplying the decimal by 100. In this case, 1.5 as a percent equals 150%. We hope this article has provided you with a clear understanding of 1.5 as a percent and how to convert it. Remember, you can always use our Decimal ⇄ Percent Calculator for quick and accurate conversions. Thank you for visiting our website, and we hope you found this information helpful!
Article written by Mark
How does GetLastError() work?
ffoorr
Does the last error remain in the GetLastError() function until a new error comes?
Or is the GetLastError() function reset?
Keith Watford (Moderator)
Does the last error remain in the GetLastError() function until a new error comes?
Yes
Or is the GetLastError() function reset?
No, unless you reset it with ResetLastError()
Usually, when calling a function, you can check the return value to see whether there was an error. If there was an error, call GetLastError()
ffoorr
Thanks!
honest_knave (Moderator)
From GetLastError():
"After the function call, the contents of _LastError are reset."
whroeder1
honest_knave: From GetLastError():
"After the function call, the contents of _LastError are reset."
1. And that used to be true before Build 600.
2. Then the documentation said ResetLastError - MQL4 Documentation (29 Jul 2014)
Note
It should be noted that the GetLastError() function doesn't zero the _LastError variable. Usually the ResetLastError() function is called before calling a function, after which an error appearance is checked.
3. And now it says
Note
The GetLastError() function zero the _LastError variable.
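The pattern described in this thread (reset, call, check the return value, then read the code) looks like this in practice. This is just a sketch; FileOpen is an arbitrary call that can fail, not anything specific to the discussion above:

```mql4
// Sketch of the usual error-handling pattern in MQL4 (build 600+) / MQL5.
void OnStart()
  {
   ResetLastError();                      // clear _LastError before the call
   int handle = FileOpen("missing.csv", FILE_READ|FILE_CSV);
   if(handle == INVALID_HANDLE)           // check the function's return value first
     {
      Print("FileOpen failed, error ", GetLastError());
     }
   else
      FileClose(handle);
  }
```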
9 September 2024
Introduction
The echo command is one of the most widely used tools in Linux for printing text to the terminal. It can be especially useful when you want to display information organized across several lines. In this article, we will look at how to use echo to insert line breaks and how this command has evolved over time.
History of the echo Command
The echo command has its roots in the early days of Unix, developed in the 1970s at AT&T's Bell Labs. Since then, it has been an integral part of Unix systems and their derivatives, including Linux. Originally, echo was used to print simple text to the terminal, but over time options and features have been added that allow finer control over the output.
Using echo to Insert Line Breaks
To insert line breaks with echo, you can use the -e option, which enables the interpretation of escape sequences. The escape sequence \n represents a line break. Here is a basic example:
#!/bin/bash
echo -e "First line\nSecond line\nThird line"
In this script:
• #!/bin/bash: The "shebang" line that tells the system this script should be run with the Bash interpreter.
• echo -e: The -e option enables the interpretation of escape sequences.
• "First line\nSecond line\nThird line": The text to be printed, with \n inserting the line breaks.
When this script is run, the output will be:
First line
Second line
Third line
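One portability note (my addition, not part of the original article): the behaviour of `echo -e` varies between shells, while POSIX `printf` interprets `\n` by default, so it is often the safer choice in scripts:

```shell
#!/bin/sh
# printf interprets \n without any extra flag, in any POSIX shell
printf 'First line\nSecond line\nThird line\n'
```

Running it prints the same three lines as the echo -e example above.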
Practical Examples
Example 1: Generating a Multiline Message
Imagine you want to generate a welcome message displayed across several lines. You can do the following:
#!/bin/bash
echo -e "Welcome to the system.\nPlease follow the instructions:\n1. Log in.\n2. Check your tasks.\n3. Log out when you are done."
The output will be:
Welcome to the system.
Please follow the instructions:
1. Log in.
2. Check your tasks.
3. Log out when you are done.
Example 2: Creating a Text File with Line Breaks
You can also redirect the output of echo to a file to create text documents. For example:
#!/bin/bash
echo -e "Daily Report\nDate: $(date)\n--------------------\n- Task 1: Complete\n- Task 2: In progress\n- Task 3: Pending" > report.txt
This script creates a file called report.txt with the following content:
Daily Report
Date: [current date]
--------------------
- Task 1: Complete
- Task 2: In progress
- Task 3: Pending
Conclusion
The echo command is a powerful and versatile tool in the Linux command-line environment. The ability to insert line breaks using escape sequences makes it possible to produce organized, readable output, both in the terminal and in text files. With a clear understanding of how to use echo with the -e option, you can significantly improve the presentation of your scripts and messages in Linux.
What you need to know
Things to remember:
• To convert a percentage into a decimal we divide by 100.
• To divide by 100 we move all the digits right 2 places.
First, let’s break up the word “percent” into “per” and “cent”.
per – “for every” or “out of”
cent – 100 (Century = 100 years)
So, if we put these together, then “percent” means “for every 100” or “out of 100”. Writing “percent” takes waaaaaaaay too long, so we use the symbol %.
10% – 10 percent
25% – 25 percent
47% – 47 percent
So, something like 47% just means “47 out of every 100”. We could show this percentage by cutting a block into 100 pieces and taking 47 of them:
So really, a percentage is actually just dividing by 100!
47\% =47\div100=0.47
Remember: To divide by 100 we move all of the digits right two places.
So, to convert a percentage into a decimal we just divide by 100!
5\% =5\div100=0.05
23\% =23\div100=0.23
146\% =146\div100=1.46
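The rule above is a single division, as this short Python sketch shows:

```python
def percent_to_decimal(percentage):
    # Divide by 100, i.e. move all the digits right 2 places.
    return percentage / 100

print(percent_to_decimal(5))    # 0.05
print(percent_to_decimal(23))   # 0.23
print(percent_to_decimal(146))  # 1.46
```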
Example Questions
Question 1: Convert 65% into a decimal.
Answer
65%=65\div100=0.65
Question 2: Convert 250% into a decimal.
Answer
250%=250\div100=2.50=2.5
Tivo – If you don’t have one you are missing out.
We got one of these when they were first released in Australia after I had been following them in the US for some time.
Why do I think they are great?
1. Very high WAF (wife acceptance factor) . This is really important.
2. Expanding functionality. Because it is internet connected it just keeps getting better as they roll our more new features like Blockbuster on demand movies etc.
3. Season Pass. This is probably the bit that really matters. Basically we have enough good content on free-to-air TV, and if you can capture the bits you want you will never be short of something to watch. We have Season Passes set up for things like: record the news every night and only keep one day's worth (what is the likelihood of watching two-day-old news?); record Top Gear every week regardless of timeslot or channel it is on and keep 2 of them (sometimes takes me a while to get to it); record and keep 3 episodes of Play School (Maya), Thomas (Jay) and Little Princess (Charli). I think you get the idea.
Before I owned a TiVo I tried an AppleTV box. It was good but a low WAF factor killed it.
In Australia TiVos are sold via the big retailers like Harvey Norman, Good Guys, JB etc. Elsewhere, click on the image below.
Finding Records
The Ember Data store provides an interface for retrieving records of a single type.
Retrieving a Single Record
Use store.findRecord() to retrieve a record by its type and ID. This will return a promise that fulfills with the requested record:
var blogPost = this.get('store').findRecord('blog-post', 1); // => GET /blog-posts/1
Use store.peekRecord() to retrieve a record by its type and ID, without making a network request. This will return the record only if it is already present in the store:
var blogPost = this.get('store').peekRecord('blog-post', 1); // => no network request
Retrieving Multiple Records
Use store.findAll() to retrieve all of the records for a given type:
var blogPosts = this.get('store').findAll('blog-post'); // => GET /blog-posts
Use store.peekAll() to retrieve all of the records for a given type that are already loaded into the store, without making a network request:
var blogPosts = this.get('store').peekAll('blog-post'); // => no network request
store.findAll() returns a DS.PromiseArray that fulfills to a DS.RecordArray and store.peekAll directly returns a DS.RecordArray.
It's important to note that DS.RecordArray is not a JavaScript array, it's an object that implements Ember.Enumerable. This is important because, for example, if you want to retrieve records by index, the [] notation will not work--you'll have to use objectAt(index) instead.
Querying for Multiple Records
Ember Data provides the ability to query for records that meet certain criteria. Calling store.query() will make a GET request with the passed object serialized as query params. This method returns a DS.PromiseArray in the same way as findAll.
For example, we could search for all person models who have the name of Peter:
// GET to /persons?filter[name]=Peter
this.get('store').query('person', {
filter: {
name: 'Peter'
}
}).then(function(peters) {
// Do something with `peters`
});
Querying for A Single Record
If you are using an adapter that supports server requests capable of returning a single model object, Ember Data provides a convenience method store.queryRecord()that will return a promise that resolves with that single record. The request is made via a method queryRecord() defined by the adapter.
For example, if your server API provides an endpoint for the currently logged in user:
// GET /api/current_user
{
user: {
id: 1234,
username: 'admin'
}
}
and the adapter for the User model defines a queryRecord() method that targets that endpoint:
app/adapters/user.js
// app/adapters/user.js
import DS from "ember-data";
export default DS.Adapter.extend({
queryRecord(modelName, query) {
return Ember.$.getJSON("/api/current_user");
}
});
then calling store.queryRecord() will retrieve that object from the server:
store.queryRecord('user', {}).then(function(user) {
let username = user.get('username');
console.log(`Currently logged in as ${username}`);
});
As in the case of store.query(), a query object can also be passed to store.queryRecord() and is available for the adapter's queryRecord() to use to qualify the request. However the adapter must return a single model object, not an array containing one element, otherwise Ember Data will throw an exception.
Note that Ember's default JSON API adapter does not provide the functionality needed to support queryRecord() directly as it relies on REST request definitions that return result data in the form of an array.
If your server API or your adapter only provides array responses but you wish to retrieve just a single record, you can alternatively use the query() method as follows:
// GET to /users?filter[email][email protected]
tom = store.query('user', {
filter: {
email: '[email protected]'
}
}).then(function(users) {
return users.get("firstObject");
});
Official Content
By default, Dynamic Transactions are defined with their Update Policy property = Read Only. So, as we know, the data is queried at runtime, and the Transaction form allows neither updating, inserting, nor deleting.
However, in some scenarios, being able to update the data is useful.
To define a Dynamic Transaction that allows data updates, the following transaction properties must be set: Data Provider = True, Used To = Retrieve Data, and Update Policy = Updatable;
and you have to complete the Data Provider that is automatically created as a consequence of setting the Data Provider property = True.
Let's see an interesting scenario that proposes the use of a Dynamic Transaction that allows data updates.
We have a GeneXus KB for tracking body weight, with the following transactions:
Person
{
PersonId*
PersonName
GenderId
GenderName
}
Gender
{
GenderId*
GenderName
GenderMembers = count(PersonName)
}
WeightLog
{
PersonId*
WeightLogDate*
WeightLogKilos
}
Now suppose that, with the system already up and running, people want to track not only their weight but also other body measurements (like chest or waist circumference). The database model needs to be redesigned in order to store this new data. Of course, it's possible to create a new Transaction object for each new measurement to be tracked, but a better (and more extensible) design is to have just one Transaction for any kind of measurements:
MeasureLog
{
PersonId*
MeasureId*
MeasureLogDate*
MeasureLogValue
}
in conjunction with a Measure Transaction:
Measure
{
MeasureId*
MeasureName
}
whose Data Provider property = True, its Used To property = Retrieve Data, its Update Policy property = Read Only, and its associated Data Provider is:
Measure_DataProvider
The WeightLog transaction is not needed anymore since all measurements will be stored in the physical table associated with the new MeasureLog transaction. However, the application code still references it as Base Transaction in many places, such as For Each statements. So, instead of removing the WeightLog transaction and having to modify wherever it is referenced, it's a good idea to change it into a Dynamic Transaction.
For that purpose, you have to:
1. "Turn on" the WeightLog Data Provider property = True
2. Set the WeightLog Used to property = Retrieve Data
3. Complete the Data Provider that was automatically created and named "WeightLog_DataProvider", like the following image shows:
WeightLog_DataProvider
With these definitions, the WeightLog Transaction can still be used in queries exactly as before (no code changes are needed to any For Each statement that references it, and its attributes can be kept in grids, printblocks, etc.). However, you must not forget that if you define a transaction as Dynamic, the associated physical tables will no longer exist. So, before proceeding with this proposal, you have to move the data (in this case, weights from the WeightLog to the MeasureLog table).
Well, and what about updates? The user is accustomed to executing the WeightLog Transaction form, so the idea is that he can use both the MeasureLog and the WeightLog transactions.
By setting the WeightLog Transaction Update Policy property = Updatable, its Form will allow the user to edit the data; but in which physical table will the updates be stored?
You have to code the Insert, Update and Delete events in the WeightLog Transaction Events section, in order to specify your intention. In this example, the logical solution is to store the data in the MeasureLog physical table, using the Business Component concept as follows:
Event Insert(&Messages)
    &MeasureLog = new()
    &MeasureLog.PersonId = PersonId
    &MeasureLog.MeasureId = 1
    &MeasureLog.MeasureLogDate = WeightLogDate
    &MeasureLog.MeasureLogValue = WeightLogKilos
    &MeasureLog.Insert()
    &Messages = &MeasureLog.GetMessages()
Endevent

Event Update(&Messages)
    &MeasureLog.Load(PersonId, 1, WeightLogDate)
    &MeasureLog.MeasureLogValue = WeightLogKilos
    &MeasureLog.Update()
    &Messages = &MeasureLog.GetMessages()
Endevent

Event Delete(&Messages)
    &MeasureLog.Load(PersonId, 1, WeightLogDate)
    &MeasureLog.Delete()
    &Messages = &MeasureLog.GetMessages()
Endevent
Note that after applying respectively the Insert(), Update() and Delete() methods to the &MeasureLog business component variable, you obtain the messages and/or errors triggered (in the &Messages collection variable). By declaring the &Messages variable as a parameter in each event (as shown), those messages are displayed in the WeightLog Dynamic Transaction in a transparent way, like its own messages.
In this way, the WeightLog Dynamic Transaction can be used exactly the same way as before and no changes are necessary to dependent programs. This also applies if the transaction is used as Business Component, because it is a Dynamic Transaction that allows updates and the corresponding events to store the data are codified.
Considerations
Some considerations must be taken into account to use this feature:
• This feature is not available for multi-level dynamic transactions.
• If you are using MySQL the 5.7.7 or higher version is required.
• Informix and SQLite do not support this kind of Transaction.
• To prototype Java applications on the cloud, use apps6.genexus.com
Last update: November 2023 | © GeneXus. All rights reserved. GeneXus Powered by Globant
Write a C Program to Generate the First N Terms of the Sequence
In this article, we will write a C program to generate the first n terms of the Fibonacci sequence.
The first two Fibonacci numbers are 0 and 1. Each subsequent number is the sum of the two before it, so 0 + 1 = 1, 1 + 1 = 2, and so on. The Fibonacci series therefore begins 0 1 1 2 3 5 ...
ALGORITHM:
Step 1: Start
Step 2: Read n
Step 3: Initialize f0 ← 0, f1 ← 1, f ← 0
Step 4: Initialize i ← 0
Step 5: While (i < n), repeat:
            print f0
            f ← f0 + f1
            f0 ← f1
            f1 ← f
            i ← i + 1
Step 6: Stop
PROGRAM: Write a C Program to Generate the First N Terms of the Sequence
#include <stdio.h>
#include <conio.h>
void main()
{
int f0, f1, f, n, i;
clrscr();
printf("ENTER THE VALUE FOR n \n");
scanf("%d", &n);
f0 = 0;
f1 = 1;
printf("FIBONACCI SEQUENCE FOR THE FIRST %d TERMS:\n", n);
i = 0;
while (i < n)
{
printf("%d\t", f0);
f = f0 + f1;
f0 = f1;
f1 = f;
i = i + 1;
}
}
SAMPLE INPUT:
ENTER THE VALUE FOR n
10
OUTPUT:
FIBONACCI SEQUENCE FOR THE FIRST 10 TERMS:
0 1 1 2 3 5 8 13 21 34
Related C Programs with Output
1. Write a C Program to Find the Sum and Average of Three Numbers
2. Write a C Program to Find the Sum of Individual Digits of Positive Integer
3. Write a C Program to Generate the First N Terms of the Sequence
4. Write a C Program to Generate All Prime Numbers Between 1 and N
5. Write a C Program to Check Whether Given Number Is Armstrong Number or Not
6. Write a C program to evaluate algebraic expression (ax+b)/(ax-b)
7. Write a C program to check whether a given number is a perfect number or Not
8. Write a C program to check whether a number is strong number or not
9. Write a C program to find the roots of a quadratic equation
10. Write a C program to find the factorial of a given integer using a non-recursive function
11. Write a C program to find the factorial of a given integer using a recursive function
12. Write a C program to find the GCD of two given integers by using the recursive function
13. Write a C program to find the GCD of two given integers using a non-recursive function
14. Write a C program to find both the largest and smallest number in a list of integers
15. Write a C Program to Sort the Array in an Ascending Order
16. Write a C Program to find whether the given matrix is symmetric or not
17. Write a C program to perform the addition of two matrices
18. Write a C Program That Uses Functions to Perform Multiplication Of Two Matrices
19. Write a C program to use a function to insert a sub-string in to a given main string from a given position
20. To delete n Characters from a given position in a given string
21. Write a C program using user-defined functions to determine whether the given string is palindrome or not
22. Write a C program to count the number of lines, words, and characters in a given text
23. Write a C program to find the length of the string using Pointer
24. Write a C program to Display array elements using calloc( ) function
25. Write a C Program to Calculate Total and Percentage Marks of a Student Using Structure
26. Write a C Program to Display the Contents of a File
27. Write a C program to copy the contents of one file to another
What Is muxu.exe? Is It A Virus Or Malware? Uninstall?
What is muxu.exe?
muxu.exe is an executable file that belongs to the InstallShield process, which ships with the InstallShield software developed by Flexera Software.
I have faced similar issues with unknown exe files running in the background on my Windows computer too. Read this tutorial to learn more about muxu.exe and whether to disable it.
If the muxu.exe process in Windows 10 is important, then you should be careful while deleting it. Sometimes muxu.exe process might be using CPU or GPU too much. If it is malware or virus, it might be running in the background.
TIP: If you are facing System related issues on Windows like registry errors or System files being deleted by virus or System crashes we recommend downloading Restoro software which scans your Windows PC for any issues and fixes them with a few steps.
The .exe extension of the muxu.exe file specifies that it is an executable file for the Windows Operating System like Windows XP, Windows 7, Windows 8, and Windows 10.
Malware and viruses are also transmitted through exe files. So we must be sure before running any unknown executable file on our computers or laptops.
Now we will check if the muxu.exe file is a virus or malware? Whether it should be deleted to keep your computer safe? Read more below.
Is muxu.exe safe to run? Is it a virus or malware?
Let’s check the location of this exe file to determine whether this is a legit software or a virus. The location of this file and dangerous rating is mentioned below.
File Location / Rating : C:\ProgramData\{CB28D9D3-6B5D-4AFA-BA37-B4AFAABF70B8}
To check whether the exe file is legit you can start the Task Manager. Then click on the columns field and add Verified Signer as one of the columns.
Now look at the Verified Signer value for muxu.exe process if it says “Unable to verify” then the file may be a virus.
File Name muxu.exe
Software Developer Flexera Software
File Type
File Location C:\ProgramData\{CB28D9D3-6B5D-4AFA-BA37-B4AFAABF70B8}
Software InstallShield
Overall Ratings for muxu.exe
If the developer of the software is legitimate, then it is not a virus or malware. If the developer is not listed or seems suspicious, you can remove it using the uninstall program.
Based on our analysis of whether this muxu file is a virus or malware we have displayed our result below.
Is muxu.exe A Virus or Malware: muxu.exe is a Virus.
How To Remove or Uninstall muxu.exe
To remove muxu.exe from your computer do the following steps one by one. This will uninstall muxu.exe if it was part of the software installed on your computer.
1. If the file is part of a software program, then it will also have an uninstall program. You can then run the uninstaller located at a directory like C:\Program Files\Flexera Software\InstallShield\InstallShield\muxu.exe_uninstall.exe.
2. Or the muxu.exe was installed using the Windows Installer then to uninstall it Go to System Settings and open Add Or Remove Programs Option.
3. Then Search for muxu.exe or the software name InstallShield in the search bar or try out the developer name Flexera Software.
4. Then click on it and select the Uninstall Program option to remove muxu.exe file from your computer. Now the software InstallShield program along with the file muxu.exe will be removed from your computer.
Frequently Asked Questions
How do I stop the muxu.exe process?
In order to stop the muxu.exe process from running you either have to uninstall the program associated with the file or if it’s a virus or malware, remove it using a Malware and Virus removal tool.
Is muxu.exe a Virus or Malware?
As per the information we have, muxu.exe is a virus. But a good file might be infected with malware or a virus to disguise itself.
Is muxu.exe causing High Disk Usage?
You can find this by opening the Task Manager application (Right-click on Windows Taskbar and choose Task Manager) and click on the Disk option at the top to sort and find out the disk usage of muxu.exe.
Is muxu.exe causing High CPU Usage?
You can find this by opening the Task Manager application and find the muxu process and check the CPU usage percentage.
Is muxu.exe causing High Network Usage?
If muxu.exe has high data usage, you can find it by opening the Task Manager Windows app, finding the muxu process, and checking the Network Usage percentage.
How to check GPU Usage of muxu.exe?
To check muxu.exe GPU usage, open the Task Manager window, look for the muxu.exe process in the Name column, and check the GPU usage column.
I hope you were able to learn more about the muxu.exe file and how to remove it. Also, share this article on social media if you found it helpful.
Let us know in the comments below if you face any other muxu.exe related issues.
About The Author: Gowtham V is a tech blogger and founder of HowToDoNinja.com who is an expert in Technology & Software and writes awesome How-To Tutorials to help people online. He has 5 years of experience in creating websites and writing content. He uses a Windows PC, a Macbook Pro, and an Android phone. Check out more about our website and our writers on our About US page. Also follow me on Twitter page and Linkedin
WordPress как на ладони
Function not described.
WC_Product_Grouped::sync() public WC 1.0
Sync a grouped product with its children. These sync functions sync upwards (from child to parent) when the variation is saved.
{} This is a method of the class: WC_Product_Grouped{}
No hooks.
Returns
WC_Product. Synced product object.
Usage
$result = WC_Product_Grouped::sync( $product, $save );
$product (WC_Product|number) (required)
Product object or ID for which you wish to sync.
$save (true|false)
If true, the product object will be saved to the DB before returning it.
Code of WC_Product_Grouped::sync() WC 5.3.0
<?php
public static function sync( $product, $save = true ) {
if ( ! is_a( $product, 'WC_Product' ) ) {
$product = wc_get_product( $product );
}
if ( is_a( $product, 'WC_Product_Grouped' ) ) {
$data_store = WC_Data_Store::load( 'product-' . $product->get_type() );
$data_store->sync_price( $product );
if ( $save ) {
$product->save();
}
}
return $product;
}
I will create automation scripts in python
Automation lets you hand off business-critical tasks to the robots, so you can focus on the most important items on your to-do list—the ones that require active thought and engagement. No-code automation should be your first stop, but Python's popularity for task automation comes from a variety of factors.
Get I will create automation scripts in python
Python is a great language for automation scripts. You can use it to automate repetitive tasks such as file management, data scraping, web testing, and more.
There are many libraries available in Python that can help you with automation tasks such as Selenium, BeautifulSoup, PyAutoGUI, OpenCV, and Pillow.
If you’re new to Python, you can start by learning the basics of Python programming such as variables, data types, loops, functions, and modules.
Once you have a good understanding of the basics, you can start learning about automation libraries and how to use them.
Do you have any specific questions about Python automation scripts?
There are many examples of automation scripts that you can create using Python. Here are some examples:
1. Pull live traffic data from third-party APIs
2. Compile data from a webpage
3. Convert PDF to audio file
4. Convert a JPG to a PNG
5. Read a CSV file and modify it
6. Sending personalized emails to multiple people
7. Bulk uploading files to a cloud-based platform
These are just a few examples of what you can do with Python automation scripts. Do you have any other questions about Python automation scripts?
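As a small illustration of item 5 above (reading a CSV file and modifying it), here is a minimal sketch using only the Python standard library; the column names `price` and `qty` are made up for the example:

```python
import csv
import io

def add_total_column(csv_text: str) -> str:
    """Read CSV text with 'price' and 'qty' columns and append a 'total' column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    fieldnames = list(reader.fieldnames) + ["total"]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for row in reader:
        # Compute the new column from the existing ones and write the row back out.
        row["total"] = str(float(row["price"]) * int(row["qty"]))
        writer.writerow(row)
    return out.getvalue()

if __name__ == "__main__":
    sample = "item,price,qty\napple,0.5,4\nbanana,0.25,6\n"
    print(add_total_column(sample))
```

The same pattern works on real files: read with `csv.DictReader`, transform each row, and write the result with `csv.DictWriter`.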
Here are some best practices for writing automation scripts in Python:
1. Use descriptive variable names
2. Write modular code
3. Use comments to explain your code
4. Use version control to keep track of changes
5. Write unit tests to ensure your code works as expected
6. Use exception handling to handle errors gracefully
7. Keep your code simple and easy to understand
These are just a few best practices that can help you write better automation scripts in Python. Do you have any other questions about Python automation scripts?
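A short sketch tying several of these practices together (descriptive names, explanatory comments, and graceful exception handling); the directory layout and function name here are hypothetical, not from any particular project:

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

def copy_text_files(source_dir: str, target_dir: str) -> int:
    """Copy all .txt files from source_dir to target_dir, skipping unreadable ones."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = 0
    for path in Path(source_dir).glob("*.txt"):
        try:
            (target / path.name).write_text(path.read_text())
            copied += 1
        except OSError as exc:
            # Handle errors gracefully: log and continue instead of crashing the run.
            logging.warning("Skipping %s: %s", path, exc)
    return copied
```

Returning a count (instead of printing) keeps the function easy to unit test, which is another of the practices listed above.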
If you’re new to Python, there are many resources available online to help you get started. Here are some steps you can follow:
1. Download and install Python on your computer
2. Learn the basics of Python programming such as variables, data types, loops, functions, and modules
3. Practice writing simple programs to get comfortable with the language
4. Learn about automation libraries such as Selenium, BeautifulSoup, PyAutoGUI, OpenCV, and Pillow
5. Start writing automation scripts for simple tasks such as file management or data scraping
6. Refine your scripts and learn more advanced topics as you become more comfortable with the language
Here are some common mistakes to avoid when writing automation scripts:
• Not using descriptive variable names
• Writing code that is difficult to read and understand
• Not using comments to explain your code
• Not testing your code thoroughly
• Not handling errors gracefully
• Writing code that is too complex
• Not using version control to keep track of changes
By avoiding these common mistakes, you can write better automation scripts that are easier to maintain and understand. Do you have any other questions about Python automation scripts?
STANDARD : $550
automation, web scraping
4 Days Delivery
• 1 Revision
• 20 pages mined/scraped
• 2 sources mine/scraped
• Install script
• Test script
• Task automation
tmason
OpenGL Speeding up loading of uniforms and drawing ...
So I have this strange issue with my code working very slowly at first and then "speeding up" after about 15-30 seconds of running depending on the size of the model.
The code in question is my drawing functionality for objects.
The first code that is initially dreadfully slow is loading of uniforms into OpenGL and the second set of code is the actual draws.
The thing that really bothers me is that things speed up after the aforementioned 15-30 seconds. I ensured that it isn't system load, etc.
Has anyone seen this before?
Thanks.
void OpenGLObject::DrawMe(void) {
ViewModelMatrix = (*ViewMatrix) * ModelMatrix;
MVPMatrix = (*ProjectionMatrix) * (*ViewMatrix) * ModelMatrix;
NormalMatrix = glm::transpose(glm::inverse(glm::mat3(MVPMatrix)));
glBindVertexArray(VertextArrayObjectID);
glUniformMatrix4fv((*AssociatedOpenGLProgram->GetMVPMatrixID()), 1, GL_FALSE, glm::value_ptr(MVPMatrix));
glUniformMatrix4fv((*AssociatedOpenGLProgram->GetViewMatrixID()), 1, GL_FALSE, glm::value_ptr((*ViewMatrix)));
glUniformMatrix4fv((*AssociatedOpenGLProgram->GetViewModelMatrixID()), 1, GL_FALSE, glm::value_ptr(ViewModelMatrix));
glUniformMatrix3fv((*AssociatedOpenGLProgram->GetNormalMatrixID()), 1, GL_FALSE, glm::value_ptr(NormalMatrix));
/*
Slow code here.
*/
AssociatedMaterial->LoadColorsIntoOpenGL(&AssociatedOpenGLProgram[0]);
/*
End slow code.
*/
if (AssociatedMaterial->IsWireframeEnabled() != false && AssociatedOpenGLProgram->IsWireframeEnabled() == false) {
GLfloat EnableWireframe = 1.0f;
glUniform1fv((*AssociatedOpenGLProgram->GetEnableWireframeID()), 1, &EnableWireframe);
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
}
else {
if (AssociatedMaterial->HasAssociatedTexture()) {
GLfloat ObjectHasMaterial = 1.0f;
glUniform1fv((*AssociatedOpenGLProgram->GetObjectHasTextureID()), 1, &ObjectHasMaterial);
glUniform1i((*AssociatedOpenGLProgram->GetTextureSamplerID()), 1);
AssociatedMaterial->BindTexture(1);
}
else {
glBindTexture(GL_TEXTURE_2D, NULL);
}
}
/*
Slow code here.
*/
glDrawElementsInstanced(GL_TRIANGLES, NumOfIndices, GL_UNSIGNED_INT, NULL, 1);
/*
End slow code.
*/
if (AssociatedMaterial->IsWireframeEnabled() != false && AssociatedOpenGLProgram->IsWireframeEnabled() == false) {
GLfloat EnableWireframe = 0.0f;
glUniform1fv((*AssociatedOpenGLProgram->GetEnableWireframeID()), 1, &EnableWireframe);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
}
And the referenced Uniforms function "AssociatedMaterial->LoadColorsIntoOpenGL(&AssociatedOpenGLProgram[0]);" :
void OpenGLCompatibilityMaterial::LoadColorsIntoOpenGL(OpenGLProgram* CurrentProgram) {
glUniform4fv((*CurrentProgram->GetAmbientColorID()), 1, GetAmbientColor());
glUniform4fv((*CurrentProgram->GetDiffuseColorID()), 1, GetDiffuseColor());
glUniform4fv((*CurrentProgram->GetEmissiveColorID()), 1, GetEmissiveColor());
glUniform4fv((*CurrentProgram->GetSpecularColorID()), 1, GetSpecularColor());
GLfloat MeshShininess = GetShininess();
glUniform1fv((*CurrentProgram->GetMeshShininessID()), 1, &MeshShininess);
}
How many milliseconds per frame does that 'slow' function take in the slow and fast situations?
Is the performance difference on the CPU time per frame or the gpu time per frame?
GL drivers are known to very quickly compile unoptimized shaders initially, and then slowly work on compiling more optimal ones in the background. Something like that could be affecting you.
How many milliseconds per frame does that 'slow' function take in the slow and fast situations?
Is the performance difference on the CPU time per frame or the gpu time per frame?
GL drivers are known to very quickly compile unoptimized shaders initially, and then slowly work on compiling more optimal ones in the background. Something like that could be affecting you.
The performance I measured is on the CPU side; in a slow situation it will take about 3-10 milliseconds per frame (maybe more) but then when things speed up the operation is almost instantaneous.
Interesting on the compiler situation. Any way to get the GL driver to compile better shaders up front? I would pay the cost (maybe 10 additional seconds of load time?) versus jittery behavior when the environment is fully loaded.
Thank you for your time.
Aloong - 5 months ago
Java Question
Why doesn't String.replaceAll() work on this String?
//This source is a line read from a file
String src = "23570006,music,**,wu(),1,exam,\"Monday9,10(H2-301)\",1-10,score,";
//This should be from a matcher.group() when Pattern.compile("\".*?\"")
String group = "\"Monday9,10(H2-301)\"";
src = src.replaceAll("\"", "");
group = group.replaceAll("\"", "");
String replacement = group.replaceAll(",", "#@");
System.out.println(src.contains(group));
src = src.replaceAll(group, replacement);
System.out.println(group);
System.out.println(replacement);
System.out.println(src);
I'm trying to replace the "," between the escaped quotes (\") so I can use String.split() later.
But the above is just not working; the result is:
true
Monday9,10(H2-301)
Monday9#@10(H2-301)
23570006,music,**,wu(),1,exam,Monday9,10(H2-301),1-10,score,
but when I change the src string to
String src = "123\"9,10\"123";
String group = "\"9,10\"";
It works well
true
9,10
9#@10
1239#@10123
What's the matter with the string???
Answer
( and ) are regex metacharacters; they need to be escaped if you want to match them literally.
String group = "\"Monday9,10\\(H2-301\\)\""; // note the escaped \\( and \\)
The reason you need two backslashes is that \ in a string literal is itself an escape character, so "\\" is a string of length 1 containing a single backslash.
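A note in the same vein: instead of escaping each metacharacter by hand, `java.util.regex.Pattern.quote` can turn the whole search string into a literal pattern, and plain `String.replace` performs a literal (non-regex) replacement. A small sketch:

```java
import java.util.regex.Pattern;

public class QuoteDemo {
    public static void main(String[] args) {
        String src = "23570006,music,**,wu(),1,exam,Monday9,10(H2-301),1-10,score,";
        String group = "Monday9,10(H2-301)";
        String replacement = group.replace(",", "#@");

        // Pattern.quote wraps the string in \Q...\E so every character,
        // parentheses included, is matched literally.
        String viaRegex = src.replaceAll(Pattern.quote(group), replacement);

        // String.replace needs no escaping at all: it is a literal replacement.
        String viaLiteral = src.replace(group, replacement);

        System.out.println(viaRegex);
        System.out.println(viaLiteral);
    }
}
```

Both calls produce the same result here; `String.replace` is the simpler choice whenever no actual pattern matching is needed.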
$\begingroup$
Let $R$ be a unital commutative ring and $m_1,m_2$ distinct maximal ideals. Prove that $$\frac{R}{m_1m_2}\simeq\frac{R}{m_1} \times \frac{R}{m_2}.$$
I think something like this homomorphism might work: $\phi(r)=(r+m_1,r+m_2)$.
Now $\ker\phi=\cdots=m_1\cap m_2$, but it's not clear if it is a surjection.
Also it's true that
If $m_1,m_2$ are maximal ideals of $R$, then $m_1m_2=m_1\cap m_2$.
$\endgroup$
• $\begingroup$ Chinese remainder theorem in generalized form. $\endgroup$ – egreg Dec 28 '13 at 20:56
• $\begingroup$ Note the statement here en.wikipedia.org/wiki/… that if the ideals $m_i$ are pairwise coprime (meaning $m_i + m_j = R$ for $i \ne j$, which holds if they are maximal) then their product coincides with their intersection. $\endgroup$ – Andreas Caranti Dec 28 '13 at 22:17
$\begingroup$
The map $\phi : R \to \frac{R}{m_1}\times \frac{R}{m_2}$, $\phi(r) = (r + m_1, r + m_2)$ is the correct homomorphism to consider.
As $m_1$ and $m_2$ are comaximal, $m_1 + m_2 = R$. As $R$ is unital $1 \in R$. So there are $a\in m_1$, $b \in m_2$ such that $a + b = 1$.
Exercise: Show that $\phi(a) = (0 + m_1, 1 + m_2)$ and $\phi(b) = (1 + m_1, 0 + m_2)$.
We have $\phi(a) = (a + m_1, a + m_2) = (0 + m_1, 1 - b + m_2) = (0 + m_1, 1 + m_2)$. Likewise for $\phi(b)$.
Now, any $(r_1 + m_1, r_2 + m_2) \in \frac{R}{m_1}\times\frac{R}{m_2}$ can be written as a linear combination of $\phi(a)$ and $\phi(b)$.
Exercise Show that there is $x \in R$ such that $\phi(x) = (r_1 + m_1, r_2 + m_2)$.
We have $(r_1 + m_1, r_2 + m_2) = (r_1 + m_1, 0 + m_2) + (0 + m_1, r_2 + m_2) = r_1\phi(a) + r_2\phi(b)$ so if we set $x = r_1a+r_2b$, we see that $\phi(x) = (r_1 + m_1, r_2 + m_2)$.
So the map $\phi$ is surjective, so by the first isomorphism theorem
$$\frac{R}{\ker\phi} \cong \frac{R}{m_1}\times\frac{R}{m_2}.$$
As you've already determined, $\ker\phi = m_1\cap m_2$. As $m_2$ is an ideal, $Rm_2 \subseteq m_2$, so $m_1m_2 \subseteq m_2$. Likewise, $m_1m_2 \subseteq m_1$ so $m_1m_2 \subseteq m_1\cap m_2$.
Exercise: Show $m_1\cap m_2 \subseteq m_1m_2$. Hint, use the fact that we have $a \in m_1$ and $b \in m_2$ such that $a + b = 1$, then pick $r \in m_1\cap m_2$.
As $a + b = 1$, $ra + rb = r$. As $r \in m_1\cap m_2 \subseteq m_2$ and $a \in m_1$, $ra \in m_1m_2$. Likewise, $rb \in m_1m_2$, so $r = ra + rb \in m_1m_2$.
Then $m_1\cap m_2 = m_1m_2$ so we finally obtain $$\frac{R}{m_1m_2} \cong \frac{R}{m_1}\times\frac{R}{m_2}.$$
$\endgroup$
• $\begingroup$ If you are stuck on an exercise, you can put your mouse over the grey box beneath it to reveal the solution. I recommend you try the exercises yourself first. $\endgroup$ – Michael Albanese Dec 28 '13 at 23:30
$\begingroup$
The fact that $m_1\cap m_2=m_1m_2$ when $m_1\ne m_2$ is really easy to prove. Now the morphism $$ R\to \frac{R}{m_1}\times\frac{R}{m_2} $$ defined by $f(r)=(r+m_1,r+m_2)$ is surjective because $m_1+m_2=R$, so you can write $1=x+y$, with $x\in m_1$ and $y\in m_2$.
If $a,b\in R$, we need to find $r\in R$ such that $r-a\in m_1$ and $r-b\in m_2$. But $$ a-b=(a-b)1=(a-b)x+(a-b)y $$ so $$ r=a-(a-b)x=b+(a-b)y $$ is the element we're looking for.
The kernel of the above morphism is obviously $m_1\cap m_2=m_1m_2$.
Nothing really different from the well known Chinese Remainder Theorem.
$\endgroup$
Articles
Schema languages for technical documentation
Schema languages define rules for XML documents and describe the structure and content of these documents syntactically. The defined set of rules ensures that the XML document is valid, meaning it consists of correct, consistent, and machine-readable information units.
There are grammar-based and rule-based schema languages, with DTD, XML Schema, and Relax NG being examples of grammar-based schema languages. They allow the definition of elements, attributes, and data types, determining the order, frequency, and hierarchy in which elements can be used. On the other hand, Schematron is a rule-based schema language that allows the formulation of additional conventions that complement the grammatical rules of an XML application.
DTD:
A Document Type Definition (DTD) is a set of rules that describe the logical structure of a document, including elements, attributes, entities, and notations. DTD specifies the order, arrangement, and type of content. The level of detail and semantics is determined by the number and quality of elements. DTD is the most widely used syntax for document types.
XML Schema:
XML Schema (XSD – XML Schema Definition) is also used to define structures in XML documents. Unlike DTD, XML Schema describes the structure itself as an XML document. XML Schema is primarily designed for exchange between applications (e.g., web services) and data-intensive workflows. On the other hand, DTDs are more suitable for text-based applications.
What differentiates XML Schema from a DTD?
– XML Schema supports the use and creation of data types for elements and attributes, which can be complemented by integrity conditions and additional cardinalities.
– XML Schema supports namespaces (collections of elements and attributes identified by an IRI) to avoid conflicts when using multiple vocabularies. DTDs can only be combined if no naming conflicts exist.
– XML Schema is written in XML syntax itself and can be easily validated by an XML parser, while DTDs require their own parser due to their own syntax.
– XML Schema allows the definition of inheritance hierarchies.
– XML Schema makes it straightforward to define unordered structures.
– XML Schema can specify more semantics than DTDs.
With these extensions, XML Schema is more powerful and expressive compared to a DTD, especially for data description. While an equivalent XML Schema can be created for each DTD, the reverse is not necessarily possible.
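To make the contrast concrete, here is a small sketch of the same structure declared first as a DTD and then as an XML Schema; the `book`, `title`, and `author` names are invented for illustration, not taken from any real document type:

```xml
<!-- DTD: a "book" element with a title and one or more authors -->
<!ELEMENT book (title, author+)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT author (#PCDATA)>
<!ATTLIST book year CDATA #IMPLIED>
```

The equivalent XML Schema expresses the same grammar in XML syntax and, in addition, types the `year` attribute:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="author" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
      <!-- A typed attribute: something a DTD cannot express -->
      <xs:attribute name="year" type="xs:gYear"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
```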
Relax NG:
Regular Language Description for XML New Generation (RELAX NG) is also a schema language for defining the structure of XML documents and is an international standard, ISO/IEC 19757-2, within the Document Schema Definition Languages (DSDL) framework. Like XML Schema, RELAX NG also uses XML-based vocabulary instead of its own syntax, providing a more powerful alternative to DTD and a less complex alternative to XML Schema for validating elements and attributes in XML documents. RELAX NG was created to eliminate the drawbacks of DTD and XML Schema, aiming to use XML syntax and be lightweight, which it has achieved.
Schematron:
Schematron is a schema language that does not use a formal grammar and is not used for definition (like DTD and XML Schema) but rather for validation of XML documents. It defines rules that are not possible with grammar-based schema languages. Since May 2006, Schematron 1.6 has been an official ISO/IEC standard (ISO/IEC 19757-3). Schematron complements traditional schema languages and is now supported by common XML editors. For example, the Oxygen Editor provides a Schematron Quick Fix functionality that automatically detects and corrects rule violations.
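As a sketch of what such a rule-based convention can look like, a minimal ISO Schematron pattern might read as follows; the `warning` element and its `severity` attribute are hypothetical names chosen for the example:

```xml
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <sch:rule context="warning">
      <!-- A convention a grammar alone cannot enforce as a message -->
      <sch:assert test="@severity">
        A warning element must state its severity.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```

A rule like this runs alongside the grammar-based schema: the DTD or XML Schema checks the structure, and Schematron reports violations of the additional convention with a human-readable message.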
To learn more about the application and practical use of schema languages, refer to our Whitepaper.
DTD & Schematron vs. XML Schema
We provide you with an insight into various schema languages and categorize the different terms. We address the question of whether there is a suitable schema language for technical documentation and illustrate two practical examples that showcase different scenarios for the application of schema languages and the use of Schematron for validation.
Cudos: a sustainable alternative for miners
As blockchain networks evolve, they leave proof-of-work (PoW) protocols behind, which is why mining is reduced. Cudos turns out to be an alternative for miners, who can participate in a sustainable way, thanks to its distributed computing model in the cloud, and receive rewards for providing computing capacity to the network. In the following article we tell you all the details.
Distributed Cloud Computing
Distributed cloud computing is one of the central pillars of the new web paradigm, since systems can be executed remotely without the need for large local hardware capacity.
The developments related to the metaverse have raised the need for large computing capacities, in order to support this type of platform. A decentralized solution must be sought so that this is not only reserved for the hands of large corporations. This is where Cudos also provides an efficient solution, thanks to its blockchain infrastructure.
In order for this entire model to work, Cudos has developed a work plan that includes both end users, as well as those who want to provide resources to the network and receive rewards for it.
Underutilized technical capabilities in devices
There are a large number of devices in use in the world, from computers to mobile phones in the hands of a large number of people. The problem is that the technical capacity of these devices is underutilized, wasting a large amount of resources. In this way, through an innovative model, Cudos proposes to be able to reuse those requirements and manage them in a more efficient way.
Through a new model, unique in its kind, users can use the Cudos platform to share the excess capacity of their devices and become part of the network infrastructure. In the next section we will explain in detail how this issue can be exploited.
Sustainable alternative to mining
Proof-of-work (PoW) protocols that rely on mining, such as those used by Bitcoin or Ethereum, are falling out of favor for several reasons. One problem is cost: acquiring mining equipment is currently very expensive. Another is that this type of protocol consumes a large amount of energy, making it unfriendly to the environment.
These protocols are also much slower at confirming transactions than proof-of-stake (PoS) protocols, which makes them hard to use in day-to-day operations. PoS protocols, on the other hand, allow more democratic participation by the users that make up the network, providing greater equity for the ecosystem.
This is where Cudos becomes a sustainable alternative for miners who are currently migrating from other types of protocols. This new generation blockchain network does not use mining, on the contrary, it has established a business model to provide computing capacity in a distributed manner and that companies can contract these requirements based on their needs. In this way, operating costs are drastically reduced, while creating a more efficient environment.
Users can share the capacity of their devices, forming the distributed infrastructure so that later Cudos can provide services in the cloud. The interesting thing about this issue is that anyone can participate, and you will get higher rewards according to the number of requirements you grant. In any case, you can be part of the network, even with a mobile device.
This is how everyone benefits: companies can contract technical capacity as a service in "quotas" according to the demand of the moment, and users can receive rewards for the underused capacity of their devices.
For more information, visit: https://www.cudos.org/
Conclusion
Distributed computing in the cloud is laying the foundations for the platforms of the future to be developed. As blockchain networks evolve, they are leaving behind inefficient and polluting protocols such as proof-of-work (PoW). Cudos becomes a sustainable alternative for miners, since it can grant computing capacity to the network with their devices and receive rewards for it.
All but the simplest applications borrow code. You could write everything yourself from just core language features but who has time for that? Instead you take on dependencies, pieces of code written by others that usually give us 80% or more of what we need with 20% of the effort. Sometimes these dependencies are made to interact with a specific technology like a database, or perhaps it’s just a library providing some feature that would be onerous to write yourself. The differences are outside the scope of this article. What I would like to concentrate on here is how to use imported code and dependency wrapping in go, while maintaining clean abstractions so that the code base can change over time with your needs.
The Approach
If you need to interact with GitHub, for example, you have a few choices. You could "shell out" and call the git command-line client, but that's probably slower than you want, and requires the git client to be installed in the runtime environment. You could use the HTTP API but that will require a lot of boilerplate if you're calling more than one or two endpoints. The choice for most developers would be to import a library that communicates with GitHub and call it a day. The public library's quality is likely somewhere between "perfect for my use case" and "works well enough." It may be well tested, and if not it at least has more users than anything you would write from scratch today. But we still have a problem. While I wouldn't fault you for betting on git (still) being the de facto version control in ten years, you may switch hosting providers, and most technologies don't come with the same assurances.
As modern developers, we switch dependencies all the time. The only constant in software is change. Our applications rely on external dependencies like databases, third party APIs, caches, and queues. They serve our needs today but tomorrow we may need a faster option, one that doesn’t cost as much, or a version not tied to a cloud provider. If we want to make these changes without too much pain, or worse rewriting our core business logic, the code that handles a dependency must be isolated. If you are familiar with the hexagonal architecture, often called “ports and adapters,” this pattern may look familiar.
The Queue, as an Example
This feels like a good one because it’s something your application may want to replace in time. Our sample application is a distributed note taking service. For when you need your notes available on eight continents and resilient against global disasters. Our notes application starts with SQS, a queue service provided by AWS, used to notify other services when a note is saved. Error handling removed for brevity.
type Note struct {
ID string
Text string
Created time.Time
}
func sendNote(q *sqs.SQS, queueURL string, n Note) *sqs.SendMessageOutput {
body, _ := json.Marshal(n)
in := sqs.SendMessageInput{
QueueUrl: aws.String(queueURL),
MessageBody: aws.String(string(body)),
}
out, _ := q.SendMessage(&in)
return out
}
This code is fairly simple, and we can imagine using several such functions within our business logic. We create a note, send it to the queue. Modify it somewhere else, send it to the queue. Now we have other services, maybe many services, that listen for changes and react. But three continents into our rollout we realize we need a feature that SQS doesn’t have. We need RabbitMQ, or maybe Kafka. Or maybe we need to support more than one. We need to move to a new technology, a new library, and potentially a new model. The code examples for the new queues don’t look anything like what we’ve been doing with SQS and by now we have SQS logic sprinkled everywhere. This could take a while… Unfortunately I’ve been in this situation too many times. I’ve felt the pain of the code migration, and the manual validation that comes afterwards to make sure nothing has broken (see the Speedscale home page for help with the validation part). There is a better option though. Have you guessed that it includes wrapping your dependencies?
Introduce a Thin Wrapper
What we want is to get the benefit of someone else’s code without tying ourselves to it. What if we start by providing a thin wrapper around the code we import?
type wrapper struct {
queue *sqs.SQS
queueUrl *string
}
func (w *wrapper) send(msgBody string) string {
in := sqs.SendMessageInput{
QueueUrl: w.queueUrl,
MessageBody: aws.String(msgBody),
}
out, _ := w.queue.SendMessage(&in)
return *out.MessageId
}
The `wrapper` type provides the behavior we need from SQS and no more. The wrapper does not accept or return any SQS specific types, which is intentional. We want to make our lives easier with the SQS library but we don’t want it to pollute our code with it. But this code doesn’t handle all of the same logic. We want to work with the `Note` in our business logic so we can keep high-level code with high-level code. We can write our internal queue logic around this type without worrying too much about the SQS implementation.
type NoteQueue struct {
queue *wrapper
}
func (nq *NoteQueue) Send(n Note) string {
body, _ := json.Marshal(n)
return nq.queue.send(string(body))
}
The goal is to create a boundary for the SQS code. Anywhere we use an SQS library type in our business logic we are leaking details that we will have to replace later. But perhaps more importantly, if we are using SQS types in our business logic then we are also thinking in terms of SQS, as opposed to thinking of something that meets our specific needs. This could shape our core logic so that a migration is even more difficult. The sooner we can move from the external representation of a concept to one that is opinionated towards the problem we are trying to solve, the better.
Now we have two layers here, the `wrapper` type and the `NoteQueue` type, but that isn’t strictly necessary. We could use SQS directly in the `NoteQueue` and still have a clean boundary so long as the SQS details don’t leak into code that uses the `NoteQueue`, though we gain something else in exchange for the bit of extra code. Instead of using the `wrapper` directly we can represent its behavior with an interface.
type NoteQueue struct {
queue interface { // optionally represent wrapper with an interface
send(msgBody string) string
}
}
This is a drop-in replacement for the `wrapper` but now we can replace the SQS implementation of `wrapper` as needed. Referencing back to the hexagonal architecture, the `queue` interface here can be considered a "port," something to be plugged into. The `wrapper` is an "adapter," something to be plugged in. And just like the outlets at your house support televisions and vacuums, your code can support a mock or an in-memory queue, which makes most of this code unit-testable. Integration tests are always an option but they are usually slow to run, painful to write and orchestrate with real systems, and flaky. Again, see the Speedscale home page… this is what we do.
The Result
It should be said that while wrapping external code like this can provide benefits like isolation, testability, and long-term code health… there are no silver bullets. Adding layers to your application will add more code, which means opportunities for more bugs. Also, using interfaces in place of concrete types almost always makes code more difficult to reason about and debug. That said, the proper abstractions can keep your business logic clean of external influences, saving headaches and issues down the line. Speedscale helps developers release with confidence by automatically generating integration tests and environments. This is done by collecting API calls to understand the environment an application encounters in production and replaying the calls in non-prod. If you would like more information, drop us a note at [email protected].
For a cube, the long internal diagonal from bottom to top diagonally cross-corner to cross-corner = side times the square root of 3, which is related to the formula for the diagonal of a square, side times the square root of 2. These follow from the Pythagorean Theorem: the latter due to the fact that 1^2 + 1^2 = sqrt(2)^2, and for the cube, 1^2 + sqrt(2)^2 = sqrt(3)^2, that is, the height squared plus the floor diagonal squared = the long top-to-bottom cross-corner diagonal squared. The square root of 3 was discovered geometrically upon extrapolation from BOOK I PROPOSITION 1 of Euclid's "Elements".
Part 1 of 3:
The Tutorial
1. Get to know the image you'll be creating.
2. Make a given finite blue horizontal line of unit length = 1, and treating each endpoint as the center of a radius, make two overlapping circles.
3. Using straight lines, connect the endpoints of the original line (radius) with the intersection points of the two circles, both top and bottom. This forms two equilateral triangles, one atop the other, the bottom one an inverted mirror image of the top triangle. Since all the radii are equal and all sides are equal, these are proven equilateral triangles.
4. Drop the connecting perpendicular between the top intersection point of the two circles and the bottom intersection point of the two circles. The length of this line equals the square root of 3.
5. Do the math. Where the perpendicular cuts the original given unit line to the line's left (or right) endpoint is a distance of .5 -- let us call this distance a. a^2 = .25. The hypotenuse has a length of 1; let us call the hypotenuse c and c^2 = 1. c^2 - a^2 = b^2 = 1 - .25 = 3/4 and the square root of this is sqrt(3)/2 and equals 1/2 the dropped perpendicular between the intersection points, top and bottom, of the two circles. Therefore twice this distance, or the measure of the full perpendicular between the circle's intersection points, equals sqrt(3)/2 * 2 which = the square root of 3 ... the very distance which was sought to be determined geometrically.
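The arithmetic in this step is easy to double-check with a few lines of Python (the variable names follow the step's a, b, c):

```python
import math

a = 0.5                      # distance from the perpendicular to an endpoint
c = 1.0                      # the hypotenuse, a radius of length 1
b = math.sqrt(c**2 - a**2)   # half of the dropped perpendicular

print(b)      # 0.8660254037844386, i.e. sqrt(3)/2
print(2 * b)  # 1.7320508075688772, i.e. sqrt(3)
```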
Part 2 of 3:
Explanatory Charts, Diagrams, Photos
1. The black line equals the square root of 3 relative to the radius of 1 between 0 and +1 on the x axis. Sqrt(3) = 1.73205080756888 and we can see the black line is about 2*.85 or 1.7 units in length, roughly.
Part 3 of 3:
Helpful Guidance
1. Make use of helper articles when proceeding through this tutorial.
Tips
• There is often more to Euclid than is obvious in the proof, if one does a little extra thinking.
• For example, in BOOK I, PROPOSITION 2 of the "Elements", what is not obvious is that the baby step taken forward by the line transferred to an arbitrary point has actually been rotated 180 degrees -- heel to toe. If one instead of the proof draws a straight line from point c to point a and then creates a circle equal in radius to the length of the old line, one will have transferred the line (with 360 degrees of freedom) to the arbitrary point. However, one sacrifices, the triple-check which the equilateral triangle method used in Euclid's proof provides.
• Since side s^3 = a cube, and sqrt(3) * sqrt(3) = 3, it follows that we may say that s^(sqrt(3)^2) = a cube. In other words, by multiplying two cross diagonals together in the exponent we obtain a cube.
About This Article
wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, 11 people, some anonymous, worked to edit and improve it over time. This article has been viewed 45,356 times.
Updated: October 8, 2020
Categories: Geometry
7 Cool Search Box Codes for the Blogger Sidebar
7 cool search box codes for the Blogger sidebar and how to install them.
Cool Search Boxes for the Blogger Sidebar
A search box is an important element that should be installed on a blog.
Besides making a blog user friendly by helping visitors find the information they need, a search box can also increase page views.
This blog content search form is also part of the navigation that Google recommends. A search box should be simple and visible, that is, easy to spot.
Blogger already provides a search widget. Just click Add a Gadget in the blog sidebar, then choose the search box.
However, Blogger's built-in search box widget is too plain, so its design is not very attractive.
Many Blogger search box codes shared by CSS and HTML experts are easy to find on Google.
How to Install a Search Box
1. In the blog dashboard, click "Layout" > Add a Gadget > choose "HTML/Javascript"
Add a Gadget HTML/Javascript
2. Copy and paste one of the codes below.
3. Save!
7 Cool Search Box Codes for the Blogger Sidebar
Here are the 7 best search box codes according to the Blogger Bandung admin.
Cool Blog Search Box
CODE 1
<style>
.cf:before, .cf:after{
content:"";
display:table;
}
.cf:after{
clear:both;
}
.cf{
zoom:1;
}
/* Form wrapper styling */
.search-wrapper {
width:100%;
margin:10px 0;
box-shadow: 0 1px 1px rgba(0, 0, 0, .4) inset, 0 1px 0 rgba(255, 255, 255, .2);
}
/* Form text input */
.search-wrapper input {
width: 222px;
height: 40px;
padding: 10px;
float: left;
font: bold 14px 'lucida sans', 'trebuchet MS', 'Tahoma';
border: 0;
background: #EEE;
border-radius: 3px 0 0 3px;
}
.search-wrapper input:focus {
outline: 0;
background: #fff;
box-shadow: 0 0 2px rgba(0,0,0,.8) inset;
}
.search-wrapper input::-webkit-input-placeholder {
color: #999;
font-weight: normal;
font-style: italic;
}
.search-wrapper input:-moz-placeholder {
color: #999;
font-weight: normal;
font-style: italic;
}
.search-wrapper input:-ms-input-placeholder {
color: #999;
font-weight: normal;
font-style: italic;
}
/* Form submit button */
.search-wrapper button {
overflow: visible;
position: relative;
float: right;
border: 0;
padding: 0;
cursor: pointer;
height: 40px;
width: 78px;
font: bold 14px/35px 'lucida sans', 'trebuchet MS', 'Tahoma';
color: white;
text-transform: uppercase;
background: #D83C3C;
border-radius: 0 3px 3px 0;
text-shadow: 0 -1px 0 rgba(0, 0, 0, .3);
}
.search-wrapper button:hover{
background: #e54040;
}
.search-wrapper button:active,
.search-wrapper button:focus{
background: #c42f2f;
outline: 0;
}
.search-wrapper button:before { /* left arrow */
content: '';
position: absolute;
border-width: 8px 8px 8px 0;
border-style: solid solid solid none;
border-color: transparent #d83c3c transparent;
top: 12px;
left: -6px;
}
.search-wrapper button:hover:before{
border-right-color: #e54040;
}
.search-wrapper button:focus:before,
.search-wrapper button:active:before{
border-right-color: #c42f2f;
}
.search-wrapper button::-moz-focus-inner { /* remove extra button spacing for Mozilla Firefox */
border: 0;
padding: 0;
}
</style>
<form action="/search" class="search-wrapper cf">
<input type="text" method="get" name="q" placeholder="Search here..." required="" />
<button type="submit">Search</button>
</form>
Cool Blog Search Box
CODE 2
<style>
.serching{margin:0;width:100%;}.serching form{border:1px solid #ddd;-moz-border-radius:3px;-webkit-border-radius:3px;font-size:14px}.serching form input{display:block!important;margin:0;border:0;padding:5px 0;outline:0;height:20px;line-height:20px;font-size:13px;border-radius:0!important}.serch{float:left;width:85%!important;text-indent:10px}.serchg{float:right;width:15%!important;height:30px!important;padding:0!important;background:gray;color:#fff;border:0!important;font-size:12px!important}
</style>
<div class='serching'><form action='/search?q='><input class='serch' name='q' placeholder='Cari...' title='Cari Tulisan di Sini' type='text'/><button class='serchg' type='submit'> GO </button><span style='clear: both;display:block'/></span></form></div>
Cool Blog Search Box
CODE 3
<style>
#searchbox {
background: #d8d8d8;
border: 4px solid #e8e8e8;
padding: 20px 10px;
width: 250px;
}
input:focus::-webkit-input-placeholder {
color: transparent;
}
input:focus:-moz-placeholder {
color: transparent;
}
input:focus::-moz-placeholder {
color: transparent;
}
#searchbox input {
outline: none;
}
#searchbox input[type="text"] {
background: url(http://2.bp.blogspot.com/-xpzxYc77ack/VDpdOE5tzMI/AAAAAAAAAeQ/TyXhIfEIUy4/s1600/search-dark.png) no-repeat 10px 6px #fff;
border-width: 1px;
border-style: solid;
border-color: #fff;
font: bold 12px Arial,Helvetica,Sans-serif;
color: #bebebe;
width: 55%;
padding: 8px 15px 8px 30px;
}
#button-submit {
background: #6A6F75;
border-width: 0px;
padding: 9px 0px;
width: 23%;
cursor: pointer;
font: bold 12px Arial, Helvetica;
color: #fff;
text-shadow: 0 1px 0 #555;
}
#button-submit:hover {
background: #4f5356;
}
#button-submit:active {
background: #5b5d60;
outline: none;
}
#button-submit::-moz-focus-inner {
border: 0;
}
</style>
<form id="searchbox" method="get" action="/search">
<input name="q" type="text" size="15" placeholder="Type here..." />
<input id="button-submit" type="submit" value="Search" />
</form>
Cool Blog Search Box
CODE 4
<form id="searchbox" method="get" action="/search" autocomplete="off">
<input name="q" type="text" size="15" placeholder="Enter keywords here" />
<input id="button-submit" type="submit" value="" />
</form>
<style>
#searchbox {
width: 240px;
}
#searchbox input {
outline: none;
}
input:focus::-webkit-input-placeholder {
color: transparent;
}
input:focus:-moz-placeholder {
color: transparent;
}
input:focus::-moz-placeholder {
color: transparent;
}
#searchbox input[type="text"] {
background: url(https://2.bp.blogspot.com/-xpzxYc77ack/VDpdOE5tzMI/AAAAAAAAAeQ/TyXhIfEIUy4/s1600/search-dark.png) no-repeat 10px 13px #f2f2f2;
border: 2px solid #f2f2f2;
font: bold 12px Arial,Helvetica,Sans-serif;
color: #6A6F75;
width: 160px;
padding: 14px 17px 12px 30px;
-webkit-border-radius: 5px 0px 0px 5px;
-moz-border-radius: 5px 0px 0px 5px;
border-radius: 5px 0px 0px 5px;
text-shadow: 0 2px 3px #fff;
-webkit-transition: all 0.7s ease 0s;
-moz-transition: all 0.7s ease 0s;
-o-transition: all 0.7s ease 0s;
transition: all 0.7s ease 0s;
}
#searchbox input[type="text"]:focus {
background: #f7f7f7;
border: 2px solid #f7f7f7;
width: 200px;
padding-left: 10px;
}
#button-submit{
background: url(https://4.bp.blogspot.com/-slkXXLUcxqg/VEQI-sJKfZI/AAAAAAAAAlA/9UtEyStfDHw/s1600/slider-arrow-right.png) no-repeat;
margin-left: -40px;
border-width: 0px;
width: 43px;
height: 45px;
}
</style>
Cool Blog Search Box
CODE 5
<form id="searchbox" method="get" action="/search" autocomplete="off">
<input name="q" type="text" size="15" placeholder="Enter keywords here" />
<input id="button-submit" type="submit" value="" />
</form>
<style>
#searchbox {
width: 280px;
background: url(https://1.bp.blogspot.com/-dwLNyhnHlTg/VEQZwf9drLI/AAAAAAAAAlg/rbd0HN4EZrY/s1600/search-box.png) no-repeat;
}
#searchbox input {
outline: none;
}
input:focus::-webkit-input-placeholder {
color: transparent;
}
input:focus:-moz-placeholder {
color: transparent;
}
input:focus::-moz-placeholder {
color: transparent;
}
#searchbox input[type="text"] {
background: transparent;
border: 0px;
font-family: "Avant Garde", Avantgarde, "Century Gothic", CenturyGothic, "AppleGothic", sans-serif;
font-size: 14px;
color: #f2f2f2 !important;
padding: 10px 35px 10px 20px;
width: 220px;
}
#searchbox input[type="text"]:focus {
color: #fff;
}
#button-submit{
background: url(https://4.bp.blogspot.com/-4MYBYE0i0Xk/VEQYlljvriI/AAAAAAAAAlQ/_TRkRG5EX1c/s1600/search-icon.png) no-repeat;
margin-left: -40px;
border-width: 0px;
width: 40px;
height: 50px;
}
#button-submit:hover {
background: url(https://4.bp.blogspot.com/-6S4K8eHPM-c/VEQdf7l84GI/AAAAAAAAAls/j7_kGSpkIfg/s1600/search-icon-hover.png);
}
</style>
Cool Blog Search Box
CODE 6
<style>
#search-box {
position: relative;
width: 100%;
margin: 0;
}
#search-form
{
height: 40px;
border: 1px solid #999;
-webkit-border-radius: 5px;
-moz-border-radius: 5px;
border-radius: 5px;
background-color: #fff;
overflow: hidden;
}
#search-text
{
font-size: 14px;
color: #ddd;
border-width: 0;
background: transparent;
}
#search-box input[type="text"]
{
width: 90%;
padding: 11px 0 12px 1em;
color: #333;
outline: none;
}
#search-button {
position: absolute;
top: 0;
right: 0;
height: 42px;
width: 80px;
font-size: 14px;
color: #fff;
text-align: center;
line-height: 42px;
border-width: 0;
background-color: #4d90fe;
-webkit-border-radius: 0px 5px 5px 0px;
-moz-border-radius: 0px 5px 5px 0px;
border-radius: 0px 5px 5px 0px;
cursor: pointer;
}
</style>
<div id='search-box'>
<form action='/search' id='search-form' method='get' target='_top'>
<input id='search-text' name='q' placeholder='Search' type='text'/>
<button id='search-button' type='submit'><span>Search</span></button>
</form>
</div>
Search Box
CODE 7
<style>
#search-box {
position: relative;
width: 100%;
margin: 0;
}
#search-form
{
height: 40px;
border: 1px solid #999;
-webkit-border-radius: 5px;
-moz-border-radius: 5px;
border-radius: 5px;
background-color: #fff;
overflow: hidden;
}
#search-text
{
font-size: 14px;
color: #ddd;
border-width: 0;
background: transparent;
}
#search-box input[type="text"]
{
width: 90%;
padding: 11px 0 12px 1em;
color: #333;
outline: none;
}
#search-button {
position: absolute;
top: 0;
right: 0;
height: 42px;
width: 80px;
font-size: 14px;
color: #fff;
text-align: center;
line-height: 42px;
border-width: 0;
background-color: #db4437;
-webkit-border-radius: 0px 5px 5px 0px;
-moz-border-radius: 0px 5px 5px 0px;
border-radius: 0px 5px 5px 0px;
cursor: pointer;
}
</style>
<div id='search-box'>
<form action='/search' id='search-form' method='get' target='_top'>
<input id='search-text' name='q' placeholder='Search Customize Blogging' type='text'/>
<button id='search-button' type='submit'><span>Search</span></button>
</form>
</div>
Simple Search Box
CODE
<style>
#search {
border: 1px solid #BDBDBD;
background: white url(https://lh4.googleusercontent.com/-pVUC_2t4N3Q/VHUyuRgha5I/AAAAAAAAC6g/Wm6jR3X_21U/h120/search3.png) 98% 50% no-repeat;
text-align: left;
padding: 8px 24px 6px 6px;
width: 85%;
height: 18px; cursor: pointer;
}
#search #s {
background: none;
color: #BDBDBD;
font-family: verdana;
font-size: 11px;
border: 0;
width: 100%;
padding: 0;
margin: 0;
outline: none;
}
</style>
<div id="search" title="Tulis lalu tekan Enter">
<form action="/search" id="searchform" method="get"> <input id="s"
name="q" onblur='if (this.value == "") {this.value = "Search";}'
onfocus='if (this.value == "Search") {this.value = "";}' value="Search"
type="text"> </form>
</div>
» Thanks for reading: 7 Cool Search Box Codes for the Blogger Sidebar
Copyright © 2020 Blogger Bandung All Right Reserved
I'm new to Solidity development, so maybe this isn't as difficult as it seems to me right now. I created a smart contract that can receive and send ether to different accounts. If I connect Ganache to MetaMask, I can see the account balance change when these functions are called. I use the "send" method to send ether to an account and a payable "depositToThisContract" function to receive ether in the smart contract.
My question is: is it possible to write a smart contract so it can receive ether directly from the MetaMask platform? That is, without calling these functions through the MetaMask API or using web3, directly from MetaMask, just like you would send ether to any other account. Is there a class or other type of contract that I can import so my contract recognizes transactions coming from MetaMask and updates the balance? Or maybe override an existing default function?
Thanks
PS: This is my first question ever at a development forum. I would really appreciate if you could tell me how to make this better.
1 Answer
What you're looking for is the special receive function (docs). This function is executed when someone sends ETH to the contract without any data (i.e. they're not calling a function).
receive() external payable {
// Do something here
}
This would trigger when someone sends ETH directly to the contract from MetaMask without calling any functions.
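For the asker's use case, a minimal sketch of a contract that accepts direct MetaMask transfers might look like this (the contract, event, and function names are illustrative, not required by Solidity):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Wallet {
    event Received(address indexed from, uint256 amount);

    // Runs on a plain ETH transfer with no calldata,
    // e.g. a direct "Send" from MetaMask to this contract's address.
    receive() external payable {
        emit Received(msg.sender, msg.value);
    }

    // The contract's ETH balance updates automatically; no bookkeeping needed.
    function getBalance() external view returns (uint256) {
        return address(this).balance;
    }
}
```

Note that the contract's balance is tracked by the EVM itself, so `receive` only needs to exist (and optionally log an event) for plain transfers to succeed.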
104 Million in Numbers
104 million in numbers, or in numeric form, is written as 104,000,000. It is also expressed as 104M.
So, now you know what 104 million looks like in numbers.
How to Write 104 Million in Numbers?
Use this free online millions calculator to convert any number in word form to numeric form.
Explanation on How to Write 104 Million in Numbers
First of all, 1 million in numbers = 1,000,000.
Therefore, to find X million in numbers, we just need to multiply X by 1,000,000.
So, X million = X * 1,000,000.
Hence, 104 million = 104 * 1,000,000 = 104,000,000.
So, 104 million in numbers is 104,000,000.
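The same multiplication is easy to script; here is a tiny Python helper (the function name is just illustrative):

```python
def million_to_number(x):
    """Convert a value given in millions to its full numeric form."""
    return int(x * 1_000_000)

print(million_to_number(104))         # 104000000
print(f"{million_to_number(104):,}")  # 104,000,000
print(million_to_number(104.5))       # 104500000
```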
Millions in Numbers Conversion Table
The following table contains some close variations of 104 million:
Millions in WordsMillions in Numbers
103.6103,600,000
103.7103,700,000
103.8103,800,000
103.9103,900,000
104.1104,100,000
104.2104,200,000
104.3104,300,000
104.4104,400,000
104.5104,500,000
104.6104,600,000
Some More Insights on 104 Million
* There are 6 trailing zeros in 104 million (104,000,000); counting the zero in 104, the number contains 7 zeros in total.
* 104 million has 9 digits in total.
* The scientific expression of 104 million is 104 x 10^6.
* 104 million can also be written in exponential notation as 104E + 6.
* 104 million can also be expressed using e-notation as 104e6.
Find Out More Million to Number Conversions
|
__label__pos
| 0.679394 |
JMS Data Objects
A JMS data object is a physical data object that accesses a JMS server. After you configure a JMS connection, create a JMS data object to write to JMS targets.
JMS providers are message-oriented middleware systems that send JMS messages. The JMS data object connects to a JMS provider to write data.
The JMS data object can write JMS messages to a JMS provider. When you configure a JMS data object, configure properties to reflect the message structure of the JMS messages. The input ports and output ports are JMS message headers.
When you configure the write data operation properties, specify the format in which the JMS data object writes data. You can specify XML, JSON, or Flat as format. When you specify XML format, you must provide an XSD file. When you specify JSON or Flat format, you must provide a sample file.
You can pass any payload format directly from source to target in Streaming mappings. You can project columns in binary format to pass a payload from source to target in its original form, or to pass a payload format that is not supported.
Streaming mappings can read, process, and write hierarchical data. You can use array, struct, and map complex data types to process the hierarchical data. You assign complex data types to ports in a mapping to flow hierarchical data. Ports that flow hierarchical data are called complex ports.
For more information about processing hierarchical data, see the Informatica Big Data Management User Guide.
Re: regex help
by AnomalousMonk (Monsignor)
on Oct 06, 2013 at 04:39 UTC ( #1057110 )
in reply to regex help
... must contain a mix of both letters and numbers.
... good words are the words that are a mix of letters and numbers.
The specification and example in the OP is a bit unclear to me, but, taken with some of the other replies, leads me to think that a "word" is a string that either:
1. must contain only alphanumeric characters, with at least one alphabetic character and at least one numeric character; or
2. may contain any characters, but with at least one alphabetic character and at least one numeric character; or
3. may contain any characters, but with at least one contiguous alphabetic and numeric character pair in any order.
The other replies seem to lean toward alternatives 2 and 3 above. My own first guess was for alternative 1, as in the last code examples below:
>perl -wMstrict -le
"my @lines = qw(abc 345 a1 1a a1a 1a1 abc1 1abc a1==a1 a==1);
printf '@lines: '; printf qq{'$_' } for @lines; print qq{\n};
;;
printf 'and 1: '; printf qq{'$_' } for grep { /[[:alpha:]]/ && /\d/ } @lines; print '';
;;
printf 'regex 1: '; printf qq{'$_' } for grep m{ [[:alpha:]] \d | \d [[:alpha:]] }xms, @lines; print qq{\n};
;; ;;
printf 'and 2: '; printf qq{'$_' } for grep { !/[^[:alnum:]]/ && /[[:alpha:]]/ && /\d/ } @lines; print '';
;;
my $al_num = qr{ [[:alpha:]] \d | \d [[:alpha:]] }xms;
printf 'regex 2: '; printf qq{'$_' } for grep m{ \A [[:alnum:]]* $al_num [[:alnum:]]* \z }xms, @lines; print qq{\n};
;; ;;
printf '@lines as was: '; printf qq{'$_' } for @lines;
"
@lines: 'abc' '345' 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc' 'a1==a1' 'a==1'
and 1: 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc' 'a1==a1' 'a==1'
regex 1: 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc' 'a1==a1'
and 2: 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc'
regex 2: 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc'
@lines as was: 'abc' '345' 'a1' '1a' 'a1a' '1a1' 'abc1' '1abc' 'a1==a1' 'a==1'
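The three candidate readings of "word" listed above are easy to restate outside Perl. A rough Python translation of the three predicates (illustrative only; it mirrors the grep filters in the one-liner above):

```python
import re

WORDS = ["abc", "345", "a1", "1a", "a1a", "1a1", "abc1", "1abc", "a1==a1", "a==1"]

def alt1(w):
    # only alphanumeric characters, with at least one letter and one digit
    return w.isalnum() and any(c.isalpha() for c in w) and any(c.isdigit() for c in w)

def alt2(w):
    # any characters allowed, but at least one letter and one digit somewhere
    return any(c.isalpha() for c in w) and any(c.isdigit() for c in w)

def alt3(w):
    # at least one adjacent letter/digit pair, in either order
    return re.search(r"[A-Za-z][0-9]|[0-9][A-Za-z]", w) is not None

print([w for w in WORDS if alt1(w)])  # ['a1', '1a', 'a1a', '1a1', 'abc1', '1abc']
```

Here alt2 reproduces the "and 1" output above, alt3 reproduces "regex 1", and alt1 reproduces "and 2"/"regex 2".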
Discrepancy in the Glow Filter in Flash8?
I never use the filters that Flash8 has inside the authoring environment. But after Kristin pointed out to me she had problems reproducing a Glow Filter in AS that she had made in the authoring environment I checked it out.
The default Glow Filter settings inside the authoring environment are as follows: color is #FF0000 (red) at alpha 100%, blurX & blurY are 5, strength is 100%, quality is low and knock out and inner glow are unchecked.
To recreate this using AS you need the same settings as above, except the strength needs to be set to 1 and the alpha needs to be set to 1. The alpha parameter in AS goes from 0 to 1, which makes enough sense. However, according to the docs the strength goes from 0 to 255. So how does this relate to the strength setting inside the authoring environment, which can be set from 0% to 1000%? 100% apparently maps to 1, so there doesn't seem to be a simple linear transformation, which seems odd.
I haven't checked any of the other filters if there are odd differences between filters applied inside the authoring environment and those applied in AS. Anybody?
PHP: Xampp & Wordpress
PHP » Database Interactions — over 12 years ago
Learn how to use WordPress with a server package called XAMPP. The tutorial includes a screencast as well as a written version.
ALGEBRA - secondary school
FACTORING I
OBJECTIVES
• Decompose a polynomial into factors.
• Identify prime factors.
• Factor a polynomial by taking out a common factor (monomial or polynomial) and by grouping terms.
FACTORING
If we carried out each multiplication one by one to find the result of
521 × 321 + 521 × 146 + 521 × 533
it would be very laborious. But if we examine the problem carefully, there is a number common to every term; we take out 521 as follows:
521(321 + 146 + 533)
521(1000)
521000
WATCH OUT!
Let P(x) = (x + 1)(x + 3)². Then its factors are:
(x + 1), (x + 3), (x + 3)², (x + 1)(x + 3), (x + 1)(x + 3)²
I. Concept of factoring
Factoring is a process by which a polynomial is expressed as a product of prime factors.
Multiplication takes (x + 2)(x + 3) to x² + 5x + 6; factoring goes the other way, from x² + 5x + 6 back to its prime factors (x + 2)(x + 3).
Below you can see several different polynomials that have been factored:
• xa + xb = x(a + b)
• 25x² - 49 = (5x + 7)(5x - 7)
• m² + 6m + 9 = (m + 3)(m + 3) = (m + 3)²
• ab - ay + bx - xy = a(b - y) + x(b - y) = (b - y)(a + x)
II. Prime polynomial
A polynomial P(x) is called prime, or irreducible, when it cannot be decomposed into a product of polynomials.
• Example 1
Factor P(x) = x² + 9x, and give the sum of its prime factors.
Solution:
P(x) = x² + 9x = x(x + 9)
The sum of the prime factors is x + (x + 9) = 2x + 9.
• Example 2
Factor P(x) = x² - 16, and give the sum of its prime factors.
Solution:
P(x) = x² - 16 = (x + 4)(x - 4)
The prime factors are (x + 4) and (x - 4), so their sum is (x + 4) + (x - 4) = 2x.
• Example 3
Give the sum of the prime factors of the polynomial P(x) = (x + 1)²(x - 1)⁵(x + 2)³.
Solution:
(x + 1) is a prime factor repeated 2 times, (x - 1) is repeated 5 times, and (x + 2) is repeated 3 times.
The sum of the prime factors is (x + 1) + (x - 1) + (x + 2) = 3x + 2.
III. Factoring criteria
A. Common monomial factor
Find the GCD of the coefficients and take the common variables with their smallest exponent; the other factor results from dividing each term of the polynomial by the common factor.
• Example 1
Factor P(x;a) = 12a²x + 4a²y
Solution:
GCD(12; 4) = 4; common variable: a²
So 12a²x + 4a²y = 4a²(3x + y)
• Example 2
Factor P(x;y;b) = 25b³x² + 10b² - 15by²
Solution:
Common monomial factor: 5b
So 25b³x² + 10b² - 15by² = 5b[5b²x² + 2b - 3y²]
B. Common polynomial factor
This is a polynomial that is repeated as a factor in each of the terms of a polynomial.
• Example 1
Factor P(x;a) = 4x(a - 2) + (x + 2)(a - 2)
Solution:
Common polynomial factor: (a - 2)
So 4x(a - 2) + (x + 2)(a - 2) = (a - 2)[4x + x + 2] = (a - 2)(5x + 2)
• Example 2
Factor P(x;a) = (4a + 6)(x - 4) - x + 4
Solution:
Write (-x + 4) as -(x - 4), giving (4a + 6)(x - 4) - (x - 4).
Common polynomial factor: (x - 4)
So (4a + 6)(x - 4) - (x - 4) = (x - 4)[4a + 6 - 1] = (x - 4)(4a + 5)
PRACTICE WORKSHOP Nº 02
1. Factor: ab + b
2. Factor: m² + 2m
3. Factor: 3x² + 6x
4. Factor: n + nm
5. Factor: am + an + ap
6. Factor: t(t + 1) + (t + 1)
7. Factor: ax + bx
8. Factor: mnp + anp
9. Factor: 2x³ + 4x⁴
10. Factor: 3a²b + 3ab²
HOMEWORK Nº 02
Factor: x² + x
The common factor is x, so the factorization is x(x + 1).
1. Factor: (x - 2)y - (x - 2)z
Common factor: ___________ Factorization: ____________________
2. Factor: mx² - nx²
Common factor: ___________ Factorization: ____________________
3. Factor: x(x + 5) + y(x + 5) - z(x + 5)
Common factor: ___________ Factorization: ____________________
4. Factor: x⁴(2a - 5b) + x(2a - 5b) - (2a - 5b)
Common factor: ___________ Factorization: ____________________
5. Factor: a(p + q) + b(p + q) + c(p + q)
Common factor: ___________ Factorization: ____________________
Factor: m⁴ + 6m³ + 4m²
The common factor is m², so the factorization is m²[m² + 6m + 4].
6. Factor: c⁴ + 10c² + 20c⁵
Common factor: ___________ Factorization: ____________________
7. Factor: x²y - 12xy + 10x³y⁴
Common factor: ___________ Factorization: ____________________
8. Factor: m²n⁴ - 14m³n³ + m⁵n⁶
Common factor: ___________ Factorization: ____________________
Factor: x³y⁴(a² - c) + x²y²(a² - c) - xy⁴(a² - c)
The common monomial factor is xy² and the common polynomial factor is (a² - c), so the factorization is xy²(a² - c)[x²y² + x - y²].
9. Factor: x²y³(a + b) + xy²(a + b)
Common monomial factor: ________________ Common polynomial factor: ________________ Factorization: _______________
10. Factor: m⁴n³(x + y - 2) - m¹⁴y⁸(x + y - 2)
Common monomial factor: ________________ Common polynomial factor: ________________ Factorization: _______________
11. Factor: 7a⁴b³(m² - n³) + 14a²b³(m² - n³) - a⁵b²(m² - n³)
Common monomial factor: ________________ Common polynomial factor: ________________ Factorization: _______________
C. Common factor by grouping terms
When not all the terms of a polynomial share the same literal part, group the terms that do share one and take out the respective common factors.
CLASS PROBLEMS
Factor each expression.
1. mx + m² + xy + my
a) (x + m)(m + y) b) (x + y)(x + m) c) (x + y + m)(x - m)
2. ax + x² + ab + bx
a) (a + x)(x + b) b) (a + x)(ax + b) c) (a + b)(x + b)
3. ax + bx + cx + ay + by + cy
a) (a + b + c)(x + a) b) (a + b + c)(x + b) c) (a + b + c)(x + y)
4. m² - mn + mp - np
a) (m - n)(m - p) b) (m - n)(m + p) c) (m + n)(m + p)
5. x²y² + x³y³ + x⁵ + y⁵
a) (x³ + y²)(x² + y³) b) (x² + y²)(x + y) c) (x³ + y²)(x² - y³)
6. x⁷ - x⁴y⁴ - x³y³ + y⁷
a) (x³ + y⁴)(x⁴ + y³) b) (x³ - y⁴)(x⁴ + y³) c) (x³ - y⁴)(x⁴ - y³)
7. m² + mn + mp + np
a) (m + n)(m + p) b) (m + n)(n + p) c) (m + n)(mp + n)
8. m³ + m² + m + 1
a) (m - 1)(m² + 1) b) (m + 1)³ c) (m + 1)(m² + 1)
9. Factor: -m - n + x(m + n)
a) (m + n)(x - 1) b) (m + n)(x + 1) c) (m - n)(x - 1) d) (m - n)(x) e) (m - x)(n - 1)
10. Factor: x(3a - 2b) - 3a + 2b
a) (a - b)(3x - 1) b) (3a - x)(2b - 1) c) (3a + 2b)(x - 1) d) (3a - 2b)(x + 1) e) (3a - 2b)(x - 1)
11. Factor: (c + 1)(ab + 1) + (a + b)(c + 1)
a) (a + b)(b + 1)(c + 1) b) (c + 1)(b + 1)(a + 1) c) (a + b)(a + b + c) d) (a + 1)(a + b) e) (c + a)(c + b)(a + 1)
12. Factor: 3b(2x + 3) + 2x + 3
a) (2x - 3)(b³ + 1) b) (2x + 3)²(3b + 1) c) (2x + 3)(3b + 1) d) (2x + 3)(3b) e) (2x - 3)(3b + 1)
13. Factor: x² + y² - 5y(x² + y²)
a) (x + y)²(1 - y⁵) b) (x - y)²(1 - 5y) c) (x + y)²(1 - 5y) d) (x² + y²)(-4y) e) (x² + y²)(1 - 5y)
14. Factor: 1 - x - 2y(1 - x)
a) (1 - x)²(1 - 2y) b) (1 - x)(1 - 2y) c) (-x)(1 - 2y) d) (1 - x)(1 + 2y) e) (1 + x)(1 - 2y)
15. Factor: -a - b + 2(a + b)y
a) (a + b + y)(a - 1) b) (a - 1)(b + 2y) c) (a + b)(1 + 2y) d) (a + b)(1 + 2y) e) (a + b)(-1 + 2y)
16. Express x³ - x² + x - 1 as a product of 2 factors.
a) (x - 1)³ b) (x - 1)(x² + 1) c) (x + 1)(x² + 1) d) (x - 1)(x + 1) e) (x + 1)³
17. Factor: 5y⁵ - 15y²z + y³ - 3z
a) (y³ - 3z)(5y² + 1) b) (y³ + 3z)(5y² + 1) c) (y³ - 3z)(y² + 1) d) (y³ - 3z)(5y + 1) e) (5y - z)³
18. Factoring a⁵ - a⁴ + a - 1 gives a polynomial of the form (a - 1)(a + 1). Give:
a) 2 b) 3 c) 4 d) 5 e) 6
FACTORING II
OBJECTIVES
• Factor a polynomial using special products.
• Apply the "simple cross" (aspa simple) method to quadratic polynomials.
Over the years, people have tried to improve their way of life, creating things for their comfort, their benefit, or their savings (whether of time or money). For example, Leibniz built a small "arithmetic calculating machine" (now known as a calculator), which made his work simpler and quicker. Later, at the beginning of the 20th century, the first computer, called ENIAC, was developed; it made scientific work easier by performing many calculations in seconds. Even so, it had a complex and laborious way of operating (punched cards were used for each operation), which is why this invention kept changing little by little. Imagine: that first computer was the size of two classrooms put together!
Then, over the last three decades of the past century and up to the present day, the evolution of computers has been tremendous. Now there are computers the size of a briefcase, the size of a notebook, in your phone, or even in your watch! Imagine that!
In short, people have achieved a great deal in the pursuit of comfort, benefit, and savings. Regarding that last little word, SAVINGS, I want to tell you that, believe it or not, FACTORING lets us save time when doing arithmetic. "How so?" you may ask; watch:
• Suppose we have the polynomial P(x) = x² - 5x - 14 and we want to find P(7). We substitute x = 7:
P(7) = 7² - 5(7) - 14    (operations 1 and 2)
     = 49 - 35 - 14      (operation 3)
     = 49 - 49           (operation 4)
     = 0
Count the number of operations needed to obtain the result: four.
• On the other hand, using the factorization x² - 5x - 14 = (x - 7)(x + 2), we have P(x) = (x - 7)(x + 2). Substituting the given value:
P(7) = (7 - 7)(7 + 2)    (operations 1 and 2)
     = 0 · 9             (operation 3)
     = 0
In the first case four operations were needed to obtain the result, while in the second case only three were used.
"MORAL": with a factored expression you save time when computing its numerical value.
I. IDENTITIES.- This is the direct application of some special products, such as:
A. Perfect square trinomial
A² + 2AB + B² = (A + B)²
A² - 2AB + B² = (A - B)²
• Example 1:
Factor: x² + 6x + 9
Solution:
x² + 6x + 9 = x² + 2(x)(3) + 3² = (x + 3)²
• Example 2:
Factor: 25x⁴ - 20x² + 4
Solution:
25x⁴ - 20x² + 4 = (5x²)² - 2(5x²)(2) + 2² = (5x² - 2)²
B. Difference of squares
A² - B² = (A + B)(A - B)
• Example 1:
Factor: 4x² - 49
Solution:
Rewriting as a difference of squares:
4x² - 49 = (2x)² - (7)² = (2x + 7)(2x - 7)
• Example 2:
Factor: (x + 5)² - 81
Solution:
(x + 5)² - 81 = (x + 5)² - 9² = (x + 5 + 9)(x + 5 - 9) = (x + 14)(x - 4)
C. Simple cross (aspa simple)
We apply this criterion to 2nd-degree polynomials of the form
Ax² + Bx + C ; A ≠ 0
Split the quadratic and independent terms into factors in such a way that, when multiplied crosswise (hence the name of the method), the sum of the results gives the linear term.
• Example 1:
Factor: x² + 7x + 12
Solution:
x² + 7x + 12
x    3
x    4
Multiplying crosswise gives 4x and 3x. Adding both results:
4x + 3x = 7x, which is exactly the linear term.
So the result is (x + 3)(x + 4).
Note that the answer consists of writing each horizontal line of the splitting.
• Example 2:
Factor: 3x² - 5x - 2
Solution:
3x² - 5x - 2
3x    +1
 x    -2
Multiplying crosswise and checking:
-6x + x = -5x (the linear term)
So the result is (3x + 1)(x - 2).
SOLVED EXERCISES
1. Factor: x² + 12x + 36
Solution:
x² + 12x + 36 = x² + 2(x)(6) + 6², a perfect square trinomial,
= (x + 6)²
2. Factor: 16x² - 25, then compute the sum of its prime factors.
Solution:
Rewriting as a difference of squares:
16x² - 25 = (4x)² - 5² = (4x - 5)(4x + 5)
Now add the prime factors: (4x - 5) + (4x + 5) = 8x
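Any of the factorizations above can be checked mechanically by multiplying the factors back out. A small Python sketch (the function name and coefficient-list representation, lowest degree first, are made up for illustration) that re-expands the two simple-cross examples:

```python
def mul(p, q):
    # multiply two polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + 3)(x + 4) should expand back to x^2 + 7x + 12
assert mul([3, 1], [4, 1]) == [12, 7, 1]
# (3x + 1)(x - 2) should expand back to 3x^2 - 5x - 2
assert mul([1, 3], [-2, 1]) == [-2, -5, 3]
```

If the expansion does not reproduce the original coefficients, the proposed factorization is wrong.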
PRACTICE WORKSHOP Nº 03
1. Factor: P(m) = m² + 2m + 1
2. Factor: F(m) = m² + 3m + 2
3. Factor: R(m) = m² - 1
4. Factor: F(m) = m² - 4
5. Factor: F(x) = x² + 5x + 6
6. Factor: R(x) = x² + 3x - 4
7. Factor: A(x) = x² - 25
8. Factor: P(x) = 6x² + 13x + 6
9. Factor: P(x;y) = 2x² - 15xy + 7y²
10. Factor: P(x;y) = x⁴ - 16a⁴
CLASS PROBLEMS
Express each trinomial as a product of two factors.
1. x² + 8x + 15
a) (x + 5)(x + 3) b) (x + 15)(x + 1) c) (x + 5)(x - 3)
2. x² - 6x - 7
a) (x - 7)(x - 1) b) (x + 7)(x - 1) c) (x - 7)(x + 1)
3. x² - 21x + 20
a) (x - 20)(x + 1) b) (x - 20)(x - 1) c) (x + 20)(x + 1)
4. y² + 15y + 50
a) (y + 10)(y + 5) b) (y + 50)(y + 1) c) (y + 5)(y - 10)
5. z² + 9z + 14
a) (z + 7)(z - 2) b) (z + 7)(z + 2) c) (z + 14)(z + 1)
6. z² - z - 2
a) (z - 2)(z - 1) b) (z + 2)(z - 1) c) (z - 2)(z + 1)
7. w² - 3w - 28
a) (w - 7)(w + 4) b) (w + 7)(w - 4) c) (w - 7)(w - 4)
8. z² - 10z - 24
a) (z - 6)(z + 4) b) (z - 6)(z - 4) c) (z - 12)(z + 2)
9. y² - 5y - 6
a) (y - 6)(y + 1) b) (y - 6)(y - 1) c) (y + 6)(y - 1)
10. x² + 9x + 18
a) (x + 18)(x + 1) b) (x + 9)(x + 2) c) (x + 6)(x + 3)
11. Factor: a² + c² - b² + 2ac
a) (a + c + b)(a + c - b) b) (a + c + b)² c) (a + c - b)²
12. Factor: P(x) = (x + 1)² - 2(x + 1) - 3, and indicate one prime factor.
a) x + 1 b) x - 1 c) x + 2 d) x - 3 e) x + 3
• Rearrange and factor the trinomials (13 to 18).
13. 3 - 5x + 2x²
a) (2x + 3)(x + 1) b) (2x - 1)(x - 3) c) (2x - 3)(x - 1)
14. 8 - 14x + 3x²
a) (3x - 2)(x - 4) b) (3x - 4)(x - 2) c) (3x + 2)(x + 4)
15. y + 15y² - 6
a) (5y + 3)(3y - 2) b) (5y + 2)(3y - 3) c) (5y - 3)(3y + 2)
16. 1 + 20a² + 12a
a) (10a + 1)(2a + 1) b) (a + 1)(2a + 1) c) (10a - 1)(a - 2)
17. 6x² - 21 + 5x
a) (3x - 7)(2x + 3) b) (3x + 7)(2x + 3) c) (3x + 7)(2x - 3)
18. 7x + 4x² - 15
a) (4x - 5)(x + 3) b) (4x + 3)(x - 5) c) (4x + 5)(x - 3)
19. Factor: x²(a - 1) - y²(a - 1)
a) (a - 1)(x + y)(x - y) b) (a - 1)(x - y)² c) (a + 1)(x² - y²) d) (a - 1)(x + y)² e) (a - 1)(2x - 2y)
HOMEWORK
1. Factor: x² - 7x - 8, and give the sum of its prime factors.
2. Factor: x² + 6x + 5
3. Factor: x² + 10x + 21
4. Factor: x² + 7x + 12, and give the sum of its prime factors.
5. Factor: x² - 2ax + a² - 1
6. Factor: x² - b² - x - b
7. Factor: x³ - 4x
8. Factor: x³ + 3x² - x - 3
9. Factor: 3x² + 10x + 3
10. Factor: (x + y)² - (x + y) - 2
11. Factor: x² - 4xy - 5y²
12. Factor: 6x² - 11x + 4
13. Factor: 3x² - x - 2
14. Factor: 25n² + 20n + 4
15. Factor: 4x² - 12x + 9
16. Factor: ax² + 11ax + 28a
17. Factor: a² - 2ab + b² - ac + bc
18. Factor: px² - p
19. Factor: 2b² + 5b - 3
20. Factor: a² + 10a + 25
21. Factor: P(x) = x²(x⁴ - 1) + 2x(x⁴ - 1) + (x⁴ - 1)
22. Factor: 6x²ⁿ⁺¹ + 5xⁿ⁺¹ - 6x
23. Factor: (x - y)³ - (x - y)² - 2(x - y), and indicate one prime factor.
24. Factor: a²x + 2abx + b²x + a + b
25. Factor: x² - y² + xz + yz
26. Factor: x² + 4xy + 4y² - z²
27. Factor: P(x) = x² + 2xy + y² + x + y
28. Factor: (4x² - 25)(x² + 2xy + y²)(x² - 2xy + y²)(x + y)(x - y)
Indicate whether each statement is true (V) or false (F):
I. After factoring we obtain 5 prime factors.
II. There are 4 prime factors.
III. The sum of all its prime factors is 12x.
29. After factoring the polynomial 3x² - 3x⁴ + y² - x²y², we obtain:
30. Indicate the prime factor of highest degree contained in:
P(x; y) = x² + x⁴y² - y⁴ - x²y⁶
Get-Credential
Get a security credential object based on a user name and password.
Syntax
Get-Credential [-credential] PSCredential [CommonParameters]
Get-Credential [[-UserName] String] -Message String
Key
-credential
A user name e.g."User01" or "Domain01\User01"
When you submit the command, you are prompted for a password.
Starting in Windows PowerShell 3.0, if you enter a user name without a domain, Get-Credential no longer
inserts a backslash before the name.
If you omit this parameter, you are prompted for a user name and a password.
-Message String
A message to appear in the authentication prompt.
This parameter is designed for use in a function or script.
Use the message to explain to the user why you are requesting credentials and how they will be used.
(PowerShell 3.0+)
-UserName String
A user name. The authentication prompt will then request a password for the user name.
By default, the user name is blank and the authentication prompt requests both a user name and password.
When the authentication prompt appears in a dialog box, the user can edit the specified user name.
However, the user cannot change the user name when the prompt appears at the command line.
When using this parameter in a shared function or script, consider all possible presentations.
(PowerShell 3.0+)
When you enter the command, you will be prompted for a password.
If you omit PSCredential, you will be prompted for a user name and a password.
PowerShell can store passwords in 3 different forms:
String - Plain text strings are stored in memory as unsecure plain text and most cmdlets will not accept passwords in this form.
SecureString - This type is encrypted in memory. It uses reversible encryption so the password can be decrypted when needed, but only by the same user principal that encrypted it. [System.Security.SecureString]
A SecureString can be read in from the terminal with Read-Host -AsSecureString
PSCredential - This class is composed of username (string) plus a password (SecureString). This is the type that most cmdlets require for specifying credentials. [System.Management.Automation.PSCredential]
Whenever possible do not ask users for a password, use integrated Windows authentication instead.
Passwords should not be saved to disk or the registry as plain text; instead store an encrypted standard string, i.e. the output of ConvertFrom-SecureString.
Examples
Get a credential and save into a variable:
PS C:\> $ss64Cred = Get-Credential -Message 'Enter a credential for this SS64 demo script'
Use this credential (stored in the variable $ss64Cred) to run a Get-CimInstance command:
PS C:\> Get-CimInstance Win32_DiskDrive -ComputerName Server64 -Credential $ss64Cred
An alternative is to embed the Get-Credential cmdlet in an expression:
PS C:\> Get-CimInstance Win32_DiskDrive -ComputerName Server64 -Credential (get-credential Domain01\User64)
Create PSCredentials for the user (user64) with the (SecureString) password held in the variable $sec_password:
$UserName = "Domain\User64"
$Credentials = New-Object System.Management.Automation.PSCredential `
-ArgumentList $UserName, $sec_password
Display the password from a PSCredential object using the GetNetworkCredential() method:
PS C:\> $PlainPassword = $Credentials.GetNetworkCredential().Password
Related PowerShell Cmdlets
ConvertFrom-SecureString - Convert a secure string into an encrypted standard string.
ConvertTo-SecureString - Convert an encrypted standard string into a secure string.
Get-AuthenticodeSignature - Get the signature object associated with a file.
Copyright © 1999-2024 SS64.com
Some rights reserved
Community Articles
Usually a service can be removed using API calls, but if the service is in an inconsistent state the APIs do not work, so the only way to delete it is by running SQL queries against the Ambari database. Here is the list of steps to delete the KNOX service.
1. delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name like '%KNOX%')
2. delete from confgroupclusterconfigmapping where config_type like '%knox%'
3. delete from clusterconfig where type_name like '%knox%'
4. delete from clusterconfigmapping where type_name like '%knox%'
5. delete from serviceconfig where service_name = 'KNOX'
6. delete from servicedesiredstate where service_name = 'KNOX'
7. delete from hostcomponentdesiredstate where service_name = 'KNOX'
8. delete from hostcomponentstate where service_name = 'KNOX'
9. delete from servicecomponentdesiredstate where service_name = 'KNOX'
10. delete from clusterservices where service_name = 'KNOX'
11. delete from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX')
12. delete from alert_notice where history_id in (select alert_id from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'KNOX'))
13. delete from alert_definition where service_name like '%KNOX%'
Note 1: I have tried and tested this on Ambari 2.4.x.
Note 2: The queries above are case sensitive, so use the correct upper/lower case for the service name.
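Since these steps are easy to mistype, one option is to generate the per-table statements from a list and review them before running anything. A hypothetical Python sketch (the function name is made up; the table list mirrors steps 6-10 above, and the code only builds strings, it does not touch the database):

```python
def service_cleanup_statements(service="KNOX"):
    # tables whose rows are keyed directly by service_name (steps 6-10 above)
    tables = [
        "servicedesiredstate",
        "hostcomponentdesiredstate",
        "hostcomponentstate",
        "servicecomponentdesiredstate",
        "clusterservices",
    ]
    return ["delete from {0} where service_name = '{1}'".format(t, service)
            for t in tables]

for stmt in service_cleanup_statements():
    print(stmt)
```

Printing the statements first lets you eyeball the exact SQL (and the service-name case) before pasting it into the database shell.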
[–]Taneb 6 points7 points (11 children)
"One consequence of this requirement is that a Traversal needs to leave the same number of elements as a candidate for subsequent Traversal that it started with." from the documentation for Control.Lens.Traversal
> [True, False] ^.. traverse.filtered id
[True]
> [True, False] & traverse.filtered id %~ not
[False, False]
> [False, False] ^.. traverse.filtered id
[]
It only obeys this law when you do not modify whether the traversed elements succeed the predicate.
[–]hiptobecubic 3 points4 points (5 children)
I'm sorry, I don't see which part of this doesn't make sense.
[–]Taneb 8 points9 points (4 children)
Compare:
[True, False] & traverse.filtered id %~ not . not
[True, False] & traverse.filtered id %~ not & traverse.filtered id %~ not
These are completely different when you'd expect them to be the same.
On the other hand, filtered is VERY useful a lot of the time. For a start, you can't make invalid folds with it. Second, if you know that you aren't affecting whether the predicate will succeed when you traverse over it, as is the case in the tutorial, filtered is absolutely fine.
[–]hiptobecubic 1 point2 points (3 children)
Aha. Ok. So the first traversal affects the result of the second traversal and then everything falls apart. This sounds bad, but how bad is it in practice? Gabriel's example looks like exactly why this kind of thing would exist.
[–]Taneb 4 points5 points (0 children)
If you export a traversal that uses "filtered" without warning people, it could very, very easily blow up in your library's user's faces. If you're just using it yourself, and you know what you're doing, everything will be perfectly fine.
[–]Davorak 0 points1 point (1 child)
I know they seem really useful, if we stop pretending there're lens can we give them another home, maybe with some associated laws, so we can continue using them?
[–]edwardkmett 4 points5 points (0 children)
I'm perfectly happy to continue housing it in lens. It doesn't claim to be a valid Traversal, the types merely work out to let it compose with them and "do what you mean".
[–]gasche 0 points1 point (4 children)
Wouldn't it make sense to add a dynamic check (a contract, morally, for a property we cannot check statically), then?
[–]Taneb 3 points4 points (3 children)
I can't see how this can be built into the code efficiently. The obvious way to do it would be to count the number of elements in a traversal before and after traversing through it, and that would be O(n) in the best case, and I do not want to be the one adding that to something which is perfectly valid when used correctly (and it is very easy to use it correctly, just don't modify what you check), and is a perfectly valid Fold no matter what!
[–]gasche 1 point2 points (1 child)
I'm not familiar with these issues (or the advanced uses of Lenses in any general case), so pardon me if this is dumb, but:
• you could maybe have a check only on filtered rather than all basic combinators (which makes it less costly)
• you could provide a filteredUnsafe method for people that think they know what they're doing; but my very own guess would be that the performance difference wouldn't be that big in the first place
• of course you could expose different functions to return either a Fold or a Traversal, and have the dynamic check on only the Traversal one
[–]edwardkmett 2 points3 points (0 children)
You don't get enough control with the types in lens to add such a check.
[–]gasche 0 points1 point (0 children)
Aside: you could either count the number of elements, or remember the selection function that was used, and add a hook on modification to check that this selection function still holds. That may be slower for complex selection function, but potentially better behaved with respect to laziness, etc.
Fetch API: Retrieving Data from a URL in JavaScript
3 min read · Jun 14, 2024
The Fetch API is a JavaScript feature that lets you retrieve data from a server using a URL. It is the standard way to make HTTP requests in JavaScript, replacing older mechanisms such as XMLHttpRequest.
Advantages of Using the Fetch API
Here are some advantages of using the Fetch API:
• Easy to use: the Fetch API has a simple, understandable interface, making it easier to learn and use.
• Promise-based: the Fetch API uses Promises, letting you handle server responses asynchronously.
• Modular: the Fetch API is designed to be modular, letting you add extra features such as interceptors and caching.
• Broad browser support: the Fetch API is supported by most modern browsers, so you can be confident your application will run on a wide range of devices.
How to Use the Fetch API
Here is a simple example of using the Fetch API to retrieve data from a URL:
fetch('https://example.com/data.json')
  .then(response => {
    if (!response.ok) {
      throw new Error('Network response was not ok');
    }
    return response.json();
  })
  .then(data => {
    // process the data here
    console.log(data);
  })
  .catch(error => {
    console.error('There has been a problem with your fetch operation:', error);
  });
Explanation:
• fetch(): this function takes a URL as an argument and returns a Promise that resolves with the response from the server.
• .then(): this method is called when the Promise is fulfilled, receiving the server's response as an argument.
• .json(): this method converts the response into a JSON object.
• .catch(): this method is called when an error occurs while retrieving data from the server.
Modifying the Fetch Request
You can modify a Fetch request by passing options to the fetch() function:
• method: the HTTP method to use (e.g. GET, POST, PUT, DELETE).
• headers: an object containing extra headers for the request.
• body: the data to send to the server.
Example:
fetch('https://example.com/data', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ name: 'John Doe' })
})
  .then(response => {
    // ...
  })
  .catch(error => {
    // ...
  });
Conclusion
The Fetch API is a powerful and simple way to retrieve data from a server in JavaScript. With a basic understanding of how it works, you can easily incorporate the Fetch API into your applications to make HTTP requests that are efficient and easy to manage.
Managing a WeChat official account with python wxpy, and collecting your own open data through WeChat
By 优惠码发放, 2019-07-30 11:16:29, 4051 views
When I first looked into itchat (and later wxpy), I used the Tuling chatbot API: by calling the API and saving both sides of each question-and-answer exchange, you can build up your own Q&A corpus as a database, which can later be used for your own training. More recently I have wanted to collect some audio resources for training a convolutional neural network.
First of all, wxpy is an upgraded version of itchat; through bot.core you can call itchat commands unchanged.
Some simple things it can do:
1. Retrieve the information of all your WeChat friends, including avatars, signatures, regions, and so on.
```python
# -*- coding: utf-8 -*-
"""
Created on Fri Jul 19 17:10:01 2019

@author: wenzhe.tian
"""
import wxpy as wp
from collections import defaultdict
import pandas as pd
from tkinter import messagebox
import os

# Initialize the bot; cache_path=True logs in via QR code and caches the session
bot = wp.Bot(cache_path=True)
friend = bot.core.get_friends(update=True)[0:]

num = 0
for f in friend:
    image = bot.core.get_head_img(userName=f["UserName"])  # itchat.get_head_img(userName=None) fetches a friend's avatar
    fileImage = open(str(num) + ".jpg", 'wb')              # save the avatar locally
    fileImage.write(image)
    fileImage.close()
    num += 1

friend = pd.DataFrame(friend)
friend.to_excel('friend.xlsx', sheet_name='Friend_Info')   # save all friends' details to Excel
```
Based on this you can build avatar collages, or charts and statistics about your WeChat friends.
2. Sending messages
bot.friends().search('老九门里排第十')[0].send('[强]')  # send the '[强]' (thumbs-up) emoji to '老九门里排第十'; equivalent to friends.search()
However, these do not cover some more advanced needs, for example:
1. Automatically archiving voice recordings, videos, images, chat logs and so on. (With a small change you could hook this up to the Tuling chatbot and record both the questions and the answers as a raw dataset for your own training.)
The handler below keeps a global archive, message, a dict keyed on the sender's nickname. Each entry is a list of the raw, dictionary-formatted data of the messages received, which includes the message type, sender, receiver and so on.
```python
# -*- coding: utf-8 -*-
"""
Created on Fri Jul 19 13:10:01 2019

@author: wenzhe.tian
"""
import wxpy as wp
from collections import defaultdict
import pandas as pd
from tkinter import messagebox
import os

bot = wp.Bot(cache_path=True)

chats = bot.chats()      # all chats with an open conversation window
groups = bot.groups()    # all group objects
friends = bot.friends()  # all friend objects
mps = bot.mps()          # all official-account objects

message = {}  # raw message data, keyed by sender nickname

@bot.register()
def print_messages(msg):
    print(msg.create_time, msg)
    global message
    if msg.sender.nick_name in message.keys():
        message[msg.sender.nick_name].append(msg.raw)
    else:
        message[msg.sender.nick_name] = []
        message[msg.sender.nick_name].append(msg.raw)
    # change this to wherever you want chats, pushes, images, videos, audio etc. to be stored
    path = 'C:\\Users\\wenzhe.tian\\Desktop\\send_mail\\wechat_infomation\\'
    if os.path.exists(path + msg.sender.nick_name) == False:
        os.makedirs(path + msg.sender.nick_name)
    if msg.type == 'Text':
        f = open(path + msg.sender.nick_name + '\\message.txt', 'a+', encoding='utf-8')
        f.read()
        f.write('\n')
        f.write(str(msg.create_time) + msg.text)
        f.close()
    else:
        print('Non-text message saved')
        if '.' in msg.file_name:
            msg.get_file(save_path=path + msg.sender.nick_name + '\\' + msg.file_name)
        else:
            msg.get_file(save_path=path + msg.sender.nick_name + '\\' + msg.file_name + '.txt')
```
As shown above, every newly received message automatically creates a folder under path in the code; videos, voice recordings and other non-text content are downloaded directly, while pushed messages in HTML format are saved together with the chat log in message.txt.
The text can be used for training, or for generating word clouds and the like.
2. Managing a WeChat official account, for example automatically replying with data looked up from a database based on the other side's input, or always forwarding pushed messages from certain sources.
```python
# locate the company group
company_group = wp.ensure_one(bot.groups().search('公司微信群'))

# locate the boss
boss = wp.ensure_one(company_group.search('BOSS'))

# forward the boss's messages to the file-transfer helper
@bot.register(company_group)
def forward_boss_message(msg):
    if msg.member == boss:
        msg.forward(bot.file_helper, prefix='BOSS')
```
3. Auto-replying to a specific person. Here you can define trigger words and reply content (such as a push message in HTML format, or a link) to handle some of the day-to-day operation of an official account.
Usage details:
The parentheses of @bot.register() must contain an object (or a list of objects). In the company-group example above, we searched all groups for the name '公司微信群'. The collections defined earlier,
groups = bot.groups()    # all group objects
friends = bot.friends()  # all friend objects
mps = bot.mps()          # all official-account objects
are all collections of objects, so you simply filter from them.
For example, to auto-reply to all text-type messages from the person 老九门里排第十:
laojiu = friends.search('老九门里排第十')[0]  # by default this searches the nickname
Then:
```python
@bot.register([laojiu, groups], wp.TEXT)  # handle TEXT-type messages from laojiu and from all groups
def auto_reply(msg):
    # in a group chat, don't reply unless the bot was @-mentioned
    if isinstance(msg.chat, wp.Group) and not msg.is_at:
        return
    else:
        # echo the message content and type
        return 'Received message: {} ({})'.format(msg.text, msg.type)
```
That is all for now.
Original article: https://www.cnblogs.com/techs-wenzhe/p/11264012.html
What are the options for handling VMs in a VMware vDS migration?
How do you handle virtual machines when making a transition from a standard switch to a virtual distributed switch?
When migrating from a standard switch to a VMware vDS, how do we handle Link Aggregation Control Protocol (LACP) if we are using Virtual Switching System (VSS) uplinks to connect to physical switches?
When you want to perform a live migration of virtual machines (VMs) from a standard switch to a VMware vDS with no downtime, there a few extra steps to take.
If you can allow an interruption on the VMs, this will simplify the process. Migrating the VMs to another host won't help because the port group will not exist when you try to bring the VMs back with vMotion.
First, let's clarify that the load-balancing algorithm used in this case is "Route based on IP hash." The image below shows where this is configured on a standard virtual switch.
[Image] Where to configure load balancing on a standard virtual switch.
If VMs can be powered down
When downtime is possible, you don't have to change Link Aggregation Groups (LAG) on your physical switch. Let's say that vmnic0, vmnic1 and vmnic2 are active adapters used in your standard switch and are in an LAG. Remove them from the standard switch and add them to the distributed switch; this is when the VMs will no longer be connected to the standard switch on this host. When the three physical network interface cards (NICs) are connected to the distributed switch, the virtual machines can be connected to the port group on the distributed switch, and they will be on the network again.
If VMs need to stay powered on
If the VMs need to keep running, then I would start by disabling the IP hash algorithm on the vSwitch -- there is no need to change the physical switch -- then remove two NICs from the switch and leave at least one physical NIC attached to the switch. This will affect available bandwidth, so you should perform this procedure during off-peak hours. Add the two NICs to the distributed switch -- which is also in the default load balancing, no LACP -- and connect the VMs from the old port group to the new port group. When they are all moved to the distributed switch, remove the last NIC and add it to the distributed switch and configure LACP and the IP Hash load-balancing algorithm on the distributed switch.
This was last published in March 2014
GoPro Support Hub Ask a question. Share an answer. Find a solution. Stay stoked.
Tourist
Posts: 9
Accepted Solution
Why does the Android app require location permissions?
I don't understand this. And also I want to know how we can be sure our movies and pics aren't getting uploaded to some server.
I bought this camera for making private media of me and my boyfriend.
Agata
Accepted Solutions
GoPro
Posts: 8,158
Re: Why does the Android app require location permissions?
Hello @agataw326. If you are using an Android device, the only way to make the GoPro app work is by turning on location services. Such is not the case for iOS devices though. For iOS devices, you can run the app without enabling location service. Hope this helps.
View solution in original post
All Replies
GoPro
Posts: 10,213
Re: Why does the Android app require location permissions?
Hi @agataw326
All versions of Android currently running the GoPro App must have Location Services enabled on your phone/device in order for the GoPro App to successfully scan for cameras. Enabling location services does not collect user data; its only use is to help your camera locate your mobile device so they can establish a connection. Location services don't just cover GPS either; they can be used for WiFi, sensors and mobile networks as well.
Thanks!
Ej
Tourist
Posts: 9
Re: Why does the Android app require location permissions?
Hi. That doesn't make any sense. Doesn't it also connect via bluetooth?
I have location off all the time. I don't want to turn it on just for this.
Are you saying there is no way to use the mobile app without location on?
GoPro
Posts: 8,158
Re: Why does the Android app require location permissions?
Hello @agataw326. If you are using an Android device, the only way to make the GoPro app work is by turning on location services. Such is not the case for iOS devices though. For iOS devices, you can run the app without enabling location service. Hope this helps.
Tourist
Posts: 9
Re: Why does the Android app require location permissions?
Thanks for the info. I don't accept this. They should change it.
I'm not going to use the phone app and I will look into other brands from now on.
Sightseer
Posts: 2
Re: Why does the Android app require location permissions?
Also, the GoPro Android app takes your location even when the app is not running. Android 10 popped up a warning about the GoPro app stating exactly that. It was the only app I run that uses the location service when the app is not even running.
GoPro
Posts: 8,158
Re: Why does the Android app require location permissions?
Hello @sortd. The location services only need to be enabled when the device is being paired to the camera. Opening the GoPro app only to browse media, for example, will not require the location service to be enabled. Would you have a screenshot showing the messaging that you got? Thanks!
Sightseer
Posts: 2
Re: Why does the Android app require location permissions?
That doesn't seem to be the case. After upgrading to Android 10, Android keeps popping up a notification saying that the GoPro app accessed my location while in the background (even though I haven't run the app). It then gives me the option to prevent the app from accessing it while in the background.
Little bit creepy, hope you guys can prevent it from getting the location while in the background.
[Attached screenshot: AndroidWarningGoProAccessedLocationWhileInBackground.png]
hi,
i put together this code to read a text file line by line, split the resulting string into the 4 values and send these values to four text-boxes. during each cycle of that while loop it's supposed to initialize the backgroundworker_dowork, which will read the values and use them (and start a long loop and stuff).
The first and obvious problem i have is that when i run this i get an error saying that this backgroundworker is already busy and can't handle another task.
The part that's supposed to read text works as far as i can tell from using breakpoints while debugging but i'm not sure if it behaves like i think it does. (only the last set of values appears on the form, i'm not sure if the rest ever get painted to the form, i know however that they are read from using a breakpoint where this happens)
- I would like some advice on how to get this thing to work. Every set of values should be displayed, the backgroundworker initated, when it finishes it's job the next values read, displayed and so on.
This is the first app in which i use the bgworker, so i dunno much about it yet. I think it has some way of reporting that it has finished its work. Should i use that as a condition to move on with the while loop?
sorry if what i wrote didn't make sense in some places, pls lemme know so i can try to explain better. i'm using VS 2010 express, c++, not total beginner but not too much experience:P
thanks for any help you can give with this:)
```cpp
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    String^ path = "<path>\\My_file.txt";
    StreamReader^ sr = gcnew StreamReader(path);
    array<String^>^ values = gcnew array<String^>(4);
    int i2 = 0;
    System::String^ Input = gcnew System::String(" ");
    try
    {
        while ((Input = sr->ReadLine()) != ".") // read until it reaches the . on the final row
        {
            input_box->Text = Input;            // the string from the file is displayed as is
            i2 = i2 + 1;                        // this will count the rows that are read
            counter_box->Text = i2.ToString();
            //------------------------------------------
            int j2 = 0;
            int l2 = 0;
            int a2 = Input->Length;
            values[0] = "";                     // these will hold the separated numbers
            values[1] = "";
            values[2] = "";
            values[3] = "";
            // this next loop will split the string into the 4 values
            for (l2 = 0; l2 < a2; l2++)
            {
                if (Input[l2] == '\040')        // check if a space is found
                {
                    j2++;                       // move on to the next word
                    l2++;                       // skip over the blank space
                }
                values[j2] += Input[l2];        // append the next digit to the current value
            }
            text_1->Text = values[0];           // send the 4 values to the form
            text_2->Text = values[1];
            text_3->Text = values[2];
            text_4->Text = values[3];
            backgroundWorker2->RunWorkerAsync(); // call the DoWork event
        }
    }
    finally
    {
        delete sr;
    }
}
```
All 4 Replies
Umm I do not see any code for your worker.. it doesn't seem to have any work to do :S
Your program itself seems to be doing the work and then you call backgroundworker() which isn't shown here..
Backgroundworker is a thread.. where you give it work to do while the program does something else.. then the program gathers the work while the thread sleeps..
When the thread is done working, there is a command to tell the program it is finished and then it *joins* and sleeps.
Calling RunWorkerAsync when the work is already being done will result in *InvalidOperationException*
This Code would definitely get around that problem.. But it only allows you to run whatever is in the backgroundWorker_DoWork Function..
```cpp
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    backgroundWorker1->DoWork();
    // Program does other stuff here while background worker does other stuff.
}

private: System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e)
{
    // Does calculations and all the work here.
    Thread::Sleep(1000); // Thread must sleep or else u may find a huge increase in CPU usage.
}
```
For Running Worker Asynchronously, you definitely have to do something like this:
http://msdn.microsoft.com/en-us/library/waw3xexc.aspx
What happens is that they call runworkerasync() and it does the work specified.. then it reports when its completed or if its cancelled and then the thread sleeps or is given more work to do. RunWorkerAsync can be given more work where as DoWork() just does work that is already pre-defined.. thats the only differences.
Oh yeah and you might want to check out OnRunWorkerCompleted event.. that will get it to do other stuff when the work is finally completed.
Examples of differences:
RunWorkerAsync() = like a telephone because both users talk at the same time without interupting eachother.. and they both hear eachother at the same time if they talk at the same time.
DoWork() = Train that never stops.. just continues until it reaches its stop, you get off.. and it leaves and starts over if requested.
Uhh Iunno what I was thinking above.. the Code is right.. but I meant to say that there are no differences as Async only raises the DoWork Event.. I was thinking async versus sync..
Hate the stupid 30 min edit timing..
line 7, forgot a :
thanks for the replies.
yup i didn't post the dowork part because i already got it working in a previous version of the code with no reading from file stuff, just calling runworkerasync.
i'll try OnRunWorkerCompleted, that might make a difference, thanks for mentioning it.
But i still dunno why it doesn't seem to run the _DoWork even when it reads the first line of text, i checked if it ever gets to "runworkerasync" and it does, just seems to pass right over it, never enters backgroundworker_dowork at all.
corrected the missing :, thanks:P
i'm working on it now but any new hints could help:)
Quantitative Ability | Quantitative Reasoning
The best way to study for the quantitative section is to make a list of the rules and principles given in the quantitative review section and in the answer explanations for the solved examples and practice tests. Keep a sheet of paper handy as you review these tests; jot down any principles, rules, definitions or formulas that are unfamiliar to you. Then take time to memorize those rules. It is critical that you be able to recall a formula, shortcut or tip automatically, so you do not spend valuable time on the exam working out a lengthy problem or reinventing the wheel each time. Use the review section for added reinforcement.
Four Step Strategy
FIRST: Read the entire problem. Do not start doing calculations until you have read the problem from start to finish; you may find that you do not need to do all the work.
For example, if the problem states to find the value of the expression
(n-13)(n-52)(n-33)(n-4), for n = 4
You know automatically that the answer is 0. This is because, in the fourth set of parentheses, the value of n-4 for n = 4 is 0, and anything multiplied by 0 is 0. You have saved yourself all the calculations involved.
SECOND: Read the answer choices. Often you can narrow your answer down by estimating or by looking carefully at the units or values.
For example, you may be asked for the number of soldiers in a platoon, given that a captain arranges all his soldiers in the form of a square. Looking at your answer choices, you see 108, 196, 224 and 87. Only the second answer choice could be correct, since 196 is the only perfect square (14 x 14).
THIRD: Make absolutely certain. That you know exactly what the question is calling for. Many careless errors are made when a person solves for x, not noting that the question asks for y, or for 2x.
You should take at least as much time reading and thinking about the problem as you actually do on calculations.
For example: a car starts from city M toward city K, 200 miles away, at an average speed of 50 miles per hour. How many miles will the car be from city K after 3 hours?
1. 100 miles
2. 20 miles
3. 50 miles
4. 150 miles
The car travels 150 miles in 3 hours (50 x 3), so it will be 150 miles from city M. But the question asks for the distance from city K, which is 200 - 150 = 50 miles. The right choice is 50 miles (C).
FOURTH: If none of the known methods works, try taking an answer from the four options and plugging it into the question. This will often lead you to the right answer quickly.
For example, during the first month after the opening of a new shopping mall, sales were $72 million. Each subsequent month, sales declined by the same fraction. If sales during the third month after the opening totaled $18 million, what is that fraction?
1. 1/4
2. 3/4
3. 1/2
4. 2/3
The fastest way to a solution is to plug in the answer choices. Try choice (A): if sales decline by 1/4 each month, the second month is 72 x 3/4 = 54 and the third month is 54 x 3/4 = 40.5 million, not 18. Only choice (C) gives the right figure for the third month: 72 x 1/2 = 36, and 36 x 1/2 = 18 million. Therefore, the right choice is (C).
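If you want to check the back-solving mechanically, here is a quick sketch (in JavaScript, purely for illustration) that plugs each option into the decline formula month3 = 72 x (1 - f)^2:

```javascript
// Plug each candidate fraction f into month3 = 72 * (1 - f)^2, looking for 18.
const candidates = { A: 1 / 4, B: 3 / 4, C: 1 / 2, D: 2 / 3 };

for (const [choice, f] of Object.entries(candidates)) {
  const month3 = 72 * (1 - f) ** 2;
  console.log(choice, month3); // only C gives exactly 18
}
```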
Javascript Array every()
The JavaScript Array every() method checks if all the array elements pass the given test function.
The syntax of the every() method is:
arr.every(callback(currentValue), thisArg)
Here, arr is an array.
every() Parameters
The every() method takes in:
• callback - The function to test for each array element. It takes in:
• currentValue - The current element being passed from the array.
• thisArg (optional) - Value to use as this when executing callback. By default, it is undefined.
Return value from every()
• Returns true if all array elements pass the given test function (callback returns a truthy value).
• Otherwise, it returns false.
Notes:
• every() does not change the original array.
• every() does not execute callback for array elements without values.
Example: Check Value of Array Element
function checkAdult(age) {
  return age >= 18;
}

const ageArray = [34, 23, 20, 26, 12];

let check = ageArray.every(checkAdult); // false

if (!check) {
  console.log("All members must be at least 18 years of age.");
}

// using arrow function
let check1 = ageArray.every(age => age >= 18); // false
console.log(check1);
Output
All members must be at least 18 years of age.
false
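Two details worth knowing: every() stops checking as soon as one element fails (short-circuiting), and it returns true for an empty array. The helper array checked below is just for illustration, to show which elements were actually tested:

```javascript
// every() stops at the first failing element (short-circuit).
const checked = [];
const result = [2, 4, 5, 6].every(n => {
  checked.push(n);       // record which elements the callback saw
  return n % 2 === 0;    // test: is the element even?
});

console.log(result);   // false
console.log(checked);  // [ 2, 4, 5 ] (6 was never tested)

console.log([].every(n => n > 100)); // true (vacuously true for an empty array)
```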
Recommended Reading: JavaScript Array some()
[QODBC-Online] Sample Code for Inserting InvoiceLine to existing Invoice using PHP
Posted by Jack - QODBC Support on 24 March 2017 11:05 AM
Sample Application:
Please click here to download Sample Code.
Please refer to the steps below for using the application to insert an InvoiceLine into an existing Invoice using PHP.
Run application.
The application has two functions:
1. Append a new Description line to an existing Invoice.
You need to insert the RefNumber (i.e., Invoice#) of the existing Invoice & description which you want to enter and click on the "Insert New Invoice Line (Description Only)" button.
New Description Line is added to the existing Invoice.
Result in QuickBooks Online.
2. Append a new ItemInventory/ItemService line to an existing Invoice.
You need to insert the RefNumber (i.e., Invoice#) of the existing Invoice, the Item Full Name, Quantity, Rate & Description which you want to enter and click on the "Insert New Invoice Line (Inventory/Service)" button.
New Item Line is added to the existing Invoice.
Result in QuickBooks Online.
Application Source Code:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>QODBC PHP Script To Display SQL Results</title>
</head>
<body topmargin="3" leftmargin="3" marginheight="0" marginwidth="0" bgcolor="#ffffff" link="#000066" vlink="#000000" alink="#0000ff" text="#000000">
<table>
<tr>
<td>
Append existing Invoice with a new Description line
<form name="frmDescriptionLine" id="frmDescriptionLine" method="post">
<table>
<tr>
<td>Enter Invoice No. (RefNumber)</td>
<td><input type="text" name="txtInvoiceNo" id="txtInvoiceNo" style="width:250px;"/></td>
</tr>
<tr>
<td>Enter New Item Description</td>
<td><input type="text" name="txtDescription" id="txtDescription"style="width:250px;"/></td>
</tr>
<tr>
<td colspan="2"><input type="submit" name="btnDescriptionLine" id="btnDescriptionLine" value="Insert New Invoice Line (Description Only)" onclick="InsertRecord()"/></td>
</tr>
</table>
</form>
</td>
<td>
Append existing Invoice with a new ItemInventory/ItemService Line
<form name="frmItemLine" id="frmItemLine" method="post">
<table>
<tr>
<td>Enter Invoice No. (RefNumber)</td>
<td><input type="text" name="txtItemInvoiceNo" id="txtItemInvoiceNo" style="width:250px;"/></td>
</tr>
<tr>
<td>Enter Item FullName</td>
<td><input type="text" name="txtItemRefDescription" id="txtItemRefDescription" style="width:250px;"/></td>
</tr>
<tr>
<td>Quantity</td>
<td><input type="text" name="txtItemQuantity" id="txtItemQuantity" value="1"/></td>
</tr>
<tr>
<td>Rate</td>
<td> <input type="text" name="txtItemRate" id="txtItemRate" value="1"/></td>
</tr>
<tr>
<td>Enter Description</td>
<td><input type="text" name="txtItemDescription" id="txtItemDescription" style="width:250px;"/></td>
</tr>
<tr>
<td colspan="2"><input type="submit" name="btnItemLine" id="btnItemLine" value="Insert Invoice Line (Inventory/Service)" /></td>
</tr>
</table>
</form>
</td>
</tr>
</table>
</body>
</html>
<?php
if(isset($_POST['btnDescriptionLine']))
{
$invoiceNo= $_POST['txtInvoiceNo'];
$invoiceDescr = $_POST['txtDescription'];
if($invoiceNo == "" || $invoiceDescr == "" )
{
echo '<script language="javascript">';
echo 'alert("Enter Invoice No. and Invoice Line Description both.")';
echo '</script>';
}
else
{
set_time_limit(120);
$oConnect = odbc_connect("QuickBooks Online Data QRemote", "", "");
$sSQL = "select txnid from InvoiceLine where RefNumber='$invoiceNo'";
$oResult = odbc_exec($oConnect, $sSQL);
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldName = odbc_field_name($oResult, $lFldCnt);
//print(" $sFieldName\n");
}
$lRecCnt = 0;
while(odbc_fetch_row($oResult)) {
$lRecCnt++;
//print("$lRecCnt");
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldValue = trim(odbc_result($oResult, $lFldCnt));
If ($sFieldValue == "") {
print("\n");
}
else {
//print("$sFieldValue\n");
}
}
//print("\n");
}
$sSQL = "Insert into invoiceline(txnid,InvoiceLineDesc) values('$sFieldValue','$invoiceDescr')";
$oResult = odbc_exec($oConnect, $sSQL);
//$sSQL = "SELECT * FROM InvoiceLine Where txnid='$sFieldValue'";
$sSQL = "SELECT RefNumber,CustomerRefFullName,InvoiceLineItemRefFullName,InvoiceLineDesc,InvoiceLineRate,InvoiceLineQuantity,InvoiceLineAmount FROM InvoiceLine Where txnid='$sFieldValue'";
#Perform the query
$oResult = odbc_exec($oConnect, $sSQL);
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
//print("$lFieldCount");
print("<table border=\"1\">");
print("<th>Line No.</th>\n");
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldName = odbc_field_name($oResult, $lFldCnt);
print("<th>$sFieldName</th>\n");
}
$lRecCnt = 0;
#Fetch the data from the database
while(odbc_fetch_row($oResult)) {
$lRecCnt++;
print(" <tr>\n");
print(" <td>$lRecCnt</td>\n");
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldValue = trim(odbc_result($oResult, $lFldCnt));
If ($sFieldValue == "") {
print("<td> </td>\n");
}
else {
print("<td valign=\"Top\">$sFieldValue</td>\n");
}
}
print("</tr>\n");
}
print("</table>");
odbc_close($oConnect);
//echo("Invoice No: " . $invoiceNo. "<br />\n");
//echo("Invoice Desc: " . $invoiceDescr. "<br />\n");
}
}
if(isset($_POST['btnItemLine']))
{
$invoiceItemNo= $_POST['txtItemInvoiceNo'];
$invoiceItemRef = $_POST['txtItemRefDescription'];
$invoiceItemQuantity = $_POST['txtItemQuantity'];
$invoiceItemRate = $_POST['txtItemRate'];
$invoiceItemDescr = $_POST['txtItemDescription'];
if($invoiceItemNo == "" || $invoiceItemDescr == "" || $invoiceItemRef == "" || $invoiceItemQuantity =="" || $invoiceItemRate =="" )
{
echo '<script language="javascript">';
echo 'alert("Fill the Details properly")';
echo '</script>';
}
else
{
set_time_limit(120);
$oConnect = odbc_connect("QuickBooks Online Data QRemote", "", "");
$sSQL = "select txnid from InvoiceLine where RefNumber='$invoiceItemNo'";
$oResult = odbc_exec($oConnect, $sSQL);
//echo $oResult;
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldName = odbc_field_name($oResult, $lFldCnt);
}
$lRecCnt = 0;
while(odbc_fetch_row($oResult)) {
$lRecCnt++;
//print("$lRecCnt");
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldValue = trim(odbc_result($oResult, $lFldCnt));
If ($sFieldValue == "") {
print("\n");
}
else {
//print("$sFieldValue\n");
}
}
print("\n");
}
$sSQL = "Insert into invoiceline(txnid,InvoiceLineItemRefFullName, InvoiceLineQuantity, InvoiceLineRate, InvoiceLineDesc) values('$sFieldValue','$invoiceItemRef',$invoiceItemQuantity,$invoiceItemRate,'$invoiceItemDescr')";
/*if($oResult = odbc_exec($oConnect, $sSQL)){
echo '<script language="javascript">';
echo 'alert("Success")';
echo '</script>';
}
else
{
echo $oResult; exit();
}*/
//print($oResult);
$oResult = odbc_exec($oConnect, $sSQL);
$sSQL = "SELECT RefNumber,CustomerRefFullName,InvoiceLineItemRefFullName,InvoiceLineDesc,InvoiceLineRate,InvoiceLineQuantity,InvoiceLineAmount FROM InvoiceLine Where txnid='$sFieldValue'";
//$sSQL = "SELECT * FROM InvoiceLine Where txnid='$sFieldValue'";
#Perform the query
$oResult = odbc_exec($oConnect, $sSQL);
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
//print("$lFieldCount");
print("<table border=\"1\">");
print("<th>Line No.</th>\n");
while ($lFldCnt < $lFieldCount) {
$lFldCnt= $lFldCnt+1;
$sFieldName = odbc_field_name($oResult, $lFldCnt);
print("<th>$sFieldName</th>\n");
}
$lRecCnt = 0;
#Fetch the data from the database
while(odbc_fetch_row($oResult)) {
$lRecCnt++;
print("<tr>\n");
print("<td>$lRecCnt</td>\n");
$lFldCnt = 0;
$lFieldCount = odbc_num_fields($oResult);
while ($lFldCnt < $lFieldCount) {
$lFldCnt++;
$sFieldValue = trim(odbc_result($oResult, $lFldCnt));
If ($sFieldValue == "") {
print("<td> </td>\n");
}
else {
print("<td valign=\"Top\">$sFieldValue</td>\n");
}
}
print("</tr>\n");
}
print("</table>");
odbc_close($oConnect);
//echo("Invoice No: " . $invoiceNo. "<br />\n");
//echo("Invoice Desc: " . $invoiceDescr. "<br />\n");
}
}
?>
Tags: QuickBooks Online, QBO, QODBC Online, PHP
I am working on a project that requires several NumericUpDown controls. Some of them change according to the values of others. For example, when nud1 value increases, nud2 value increases as well. But, if nud2 increases, nud1 stays the same. I would add code, but I cannot figure out how to do it at all! Can someone help me?
So, I have 8 NumericUpDown controls on the page. nud1 and nud5 are connected, nud3 and nud6 and nud7 are connected, and nud4 and nud8 are connected. When the first of these NUDs increases or decreases, their counterpart also increases or decreases. However, if the second NUD is increased/decreased, the first one should not change.
I need to know how to separate the UpButton method from the DownButton method. However, I cannot figure out how to utilize it. Help?
Kat
What I have tried:
I haven't been able to try anything. Every time I select or type the words "UpButton" or "DownButton", I get red squiggly lines and an error that states "UpButton is not declared. It may be inaccessible due to its protection level." I have tried to invoke it with "nudST.UpButton()"...that's about all I can think to do and it doesn't do anything except give me that little message.
Posted
Updated 19-Aug-16 1:33am
Comments
Richard MacCutchan 19-Aug-16 5:08am
You should probably use one of the events that signals when the value changes and modify the connected controls as necessary.
PoiJoy 19-Aug-16 5:11am
That's just it...WHAT events and WHAT connected controls? I've never used a NumericUpDown control before, but they're very popular, so I want to know HOW to use them...but I can't seem to find anything online about this control that makes sense.
PoiJoy 19-Aug-16 5:34am
I apparently don't seem to be conveying my problem correctly. I don't know how to read MSDN's "information." I don't know how or where to put whatever code is listed on their site. I'm new to this and reading that site information is like trying to read Greek.
Richard MacCutchan 19-Aug-16 5:38am
Then you need to go and look for samples and tutorials. See https://www.google.com/search?q=how+to+use+numeric+up+down for some useful links.
BTW if you really cannot understand the MSDN documentation you are going to have a lot of problems as a developer.
1 solution
Further to my suggestion above to create a simple application. Create a form with two updown controls and a text box. Make sure the names match those used in the following code sample. Add the method name numericUpDown_ValueChanged to the ValueChanged event of both controls and the code below to your main form code.
C#
private void numericUpDown_ValueChanged(object sender, EventArgs e)
{
if (sender == numericUpDown1)
{
// if control 1 changes then copy its value to control 2
numericUpDown2.Value = numericUpDown1.Value;
}
// write a message to the textbox showing both values
StringBuilder sb = new StringBuilder();
sb.Append("NUD1: ");
sb.Append(numericUpDown1.Value.ToString());
sb.Append(", NUD2: ");
sb.Append(numericUpDown2.Value.ToString());
textBox1.Text = sb.ToString();
}
Play around with the code and the values as necessary.
0.15056 as a Fraction
0.15056 as a fraction equals 15056/100000 or 941/6250
Steps to convert 0.15056 into a fraction.
Write 0.15056 as
0.15056/1
Multiply both the numerator and denominator by 10 for each digit after the decimal point.
0.15056/1
=
0.15056 x 100000/1 x 100000
=
15056/100000
In order to reduce the fraction find the Greatest Common Factor (GCF) for 15056 and 100000. Keep in mind a factor is just a number that divides into another number without any remainder.
The factors of 15056 are: 1 2 4 8 16 941 1882 3764 7528 15056
The factors of 100000 are: 1 2 4 5 8 10 16 20 25 32 40 50 80 100 125 160 200 250 400 500 625 800 1000 1250 2000 2500 3125 4000 5000 6250 10000 12500 20000 25000 50000 100000
The Greatest Common Factor (GCF) for both 15056 and 100000 is: 16
Now to reduce the fraction we divide both the numerator and denominator by the GCF value.
15056/100000
=
15056 ÷ 16/100000 ÷ 16
=
941/6250
As a side note the whole number-integral part is: empty
The decimal part is: .15056 = 15056/100000
Full simple fraction breakdown: 15056/100000
= 7528/50000
= 3764/25000
= 1882/12500
= 941/6250
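The reduction steps above can be reproduced programmatically; here is a short Python sketch (an illustration, not part of the original page):

```python
from fractions import Fraction
from math import gcd

# Reduce 15056/100000 by dividing out the Greatest Common Factor, as in the steps above
num, den = 15056, 100000
g = gcd(num, den)                 # 16
reduced = (num // g, den // g)    # (941, 6250)
print(g, reduced)

# Python's Fraction type performs the same reduction automatically
print(Fraction("0.15056"))        # 941/6250
```

The `Fraction` constructor accepts a decimal string directly, so the multiply-by-100000 step is handled internally.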
Level of Precision for 0.15056 as a Fraction
The level of precision is the number of digits the decimal is rounded to before conversion; the default precision here is 5. When the last trailing digit is "5", the value can be rounded either half up or half down.
For example, 0.875 with a precision point of 2 rounded half up = 88/100, rounded half down = 87/100.
Numerator & Denominator for 0.15056 as a Fraction
0.15056 = 0 15056/100000
numerator/denominator = 15056/100000
Is 15056/100000 a Mixed, Whole Number or Proper fraction?
A mixed number is made up of a whole number (whole numbers have no fractional or decimal part) and a proper fraction part (a fraction where the numerator (the top number) is less than the denominator (the bottom number). In this case the whole number value is empty and the proper fraction value is 15056/100000.
Can all decimals be converted into a fraction?
Not all decimals can be converted into a fraction. There are 3 basic types which include:
Terminating decimals have a limited number of digits after the decimal point.
Example: 7096.4639 = 7096 4639/10000
Recurring decimals have one or more repeating numbers after the decimal point which continue on infinitely.
Example: 1075.3333… = 1075 3333/10000 ≈ 1075 1/3 (the repeating digit 3 after the decimal point corresponds to the fraction 1/3)
Irrational decimals go on forever and never form a repeating pattern. This type of decimal cannot be expressed as a fraction.
Example: 0.640011996.....
Fraction into Decimal
You can also see the reverse conversion, i.e. how the fraction 15056/100000 is converted into a decimal.
© www.asafraction.net
Module: Hashie::Extensions::MethodWriter
Defined in:
lib/hashie/extensions/method_access.rb
Overview
MethodWriter gives you #key_name= shortcuts for writing to your hash. Keys are written as strings, override #convert_key if you would like to have symbols or something else.
Note that MethodWriter also overrides #respond_to such that any #method_name= will respond appropriately as true.
Examples:
class MyHash < Hash
include Hashie::Extensions::MethodWriter
end
h = MyHash.new
h.awesome = 'sauce'
h['awesome'] # => 'sauce'
Instance Method Summary
Dynamic Method Handling
This class handles dynamic methods through the method_missing method
#method_missing(name, *args) ⇒ Object
# File 'lib/hashie/extensions/method_access.rb', line 65
def method_missing(name, *args)
if args.size == 1 && name.to_s =~ /(.*)=$/
return self[convert_key($1)] = args.first
end
super
end
Instance Method Details
#convert_key(key) ⇒ Object
# File 'lib/hashie/extensions/method_access.rb', line 73
def convert_key(key)
key.to_s
end
#respond_to?(name, include_private = false) ⇒ Boolean
# File 'lib/hashie/extensions/method_access.rb', line 60
def respond_to?(name, include_private = false)
return true if name.to_s =~ /=$/
super
end
28 comments
• Marcel Wichmann Marcel Wichmann , 8 years ago
Fast. Free.*
*Until more than five people use our product and we realise that hosting images costs money
11 points
• Zoltán Hosszú, 8 years ago
Marcel, as it's stated in the footer, Ravioli is using Imgur to host images. It's free as long as I don't make money on it :)
11 points
• Marcel Wichmann Marcel Wichmann , 8 years ago
That makes sense. Thanks for the clarification. Didn't see the footer.
3 points
• Matthew O'ConnorMatthew O'Connor, 8 years ago
So all the images that get uploaded are publicly available. Is there a page on imgur that you can see new uploads on (and therefor see ravioli uploaded images)?
3 points
• Zoltán Hosszú, 8 years ago
Yes, it means exactly that. But here's what made me consider using Imgur:
About images you upload
You can upload images anonymously and share them online with only the people you choose to share them with. If you make them publicly available, they may be featured in the gallery. This means that if you upload an image to share with your friend, only your friend will be able to access it online. However, if you share an image with Facebook, Twitter, Digg, Reddit, etc., then it may end up in the gallery.
So basically if you just use it for simple image sharing, it's not going to matter. Droplr and CloudApp also shares links that are publicly available, if you type in the correct url. Same thing on Imgur. But it's anonymous.
I am making sure that every user of Ravioli will understand this risk.
3 points
• Louis-André LabadieLouis-André Labadie, 8 years ago (edited 8 years ago )
Second paragraph:
Also, don't use Imgur to host image libraries you link to from elsewhere, content for your website, advertising, avatars, or anything else that turns us into your content delivery network.
Did you speak with them, or did you go this far without reading the (very short) terms of use of the service you've based everything on?
3 points
• Zoltán Hosszú, 8 years ago (edited 8 years ago )
Louis-André, thanks for being concerned. This is a term of use for Imgur for uploading images. Ravioli is using Imgur API to upload images to their site, so this does not apply to Ravioli, but it applies to the users of Ravioli. So yes, please don't use Ravioli/Imgur to host those things.
The API docs are pretty clear on how the API can be used.
Also, I've contacted Imgur previously. Everything seems fine now, the only thing remains is the daily rate limit on their part, which can be increased for free apps like Ravioli. :)
Edit: also, I've only spent about 10-15 hours coding this app, it's pretty simple :) If that goes to waste and Imgur doesn't let it run, then I guess I'll just use it for myself instead of CloudApp or Droplr.
4 points
• Louis-André LabadieLouis-André Labadie, 8 years ago
Ah – that makes sense. It connects to the API with the individual user's imgur account?
Do you know about mac2imgur?
0 points
• Zoltán Hosszú, 8 years ago
No, the connection is anonymous, just like when you open up Imgur in your browser.
Yes, there are a few very similar apps using Imgur, but I'm trying to create a more complete user experience around this concept :)
1 point
• David ÖhlinDavid Öhlin, 8 years ago
What's with the attitude?
7 points
• Tom WoodTom Wood, 8 years ago
Great job buddy! Always brave to push an app onto DN – it's scary enough to start a post on this place, much less share your own work. Love the name, and clever idea too.
2 points
• Surjith S MSurjith S M, 8 years ago
why not use imgur directly instead?
2 points
• Zoltán Hosszú, 8 years ago
Well compare Imgur's upload process to a simple menubar app's.
To upload to Imgur: 1) You open the browser 2) Open Imgur 3) Drag in your image 4) Press upload 5) Copy image URL 6) Paste the link
In Ravioli: 1) The app runs in the menu bar 2) Drag the image over 3) Paste the link
Half the steps required :)
1 point
• Surjith S MSurjith S M, 8 years ago
So, basically, the link looks like ravoli.com/... instead of imgur/ right?
Is it kind of mapping that URL to Imgur's URL? How does that work?
0 points
• Zoltán Hosszú, 8 years ago
Yeah, the link is like that. But this part I'm not really sure of just yet. The app will either copy the original imgur link, or the rvo.li shortened link. The latter might be a concern for Imgur, so we'll see. :)
0 points
• Giulio MichelonGiulio Michelon, 8 years ago
I'm Italian and I love the name!
1 point
• Ronalds Vilcins, 8 years ago
As always on DN, there are some rude comments on a cool app
1 point
• Zoltán Hosszú, 8 years ago
I don't think any of the comments are rude. They all are valid concerns, which will be addressed in the FAQ of the app :)
0 points
• Ed AdamsEd Adams, 8 years ago
I think this is handy and certainly will save time! Thanks!
1 point
• Andrea GrassoAndrea Grasso, 8 years ago
Awesome name!
1 point
• Casey BrittCasey Britt, 8 years ago
Am I the only one still using Scrup?
0 points
• Alejandro DorantesAlejandro Dorantes, 8 years ago
This is a really nice App Zoltán! I will start using it ASAP. Thanks a lot! Also, do you plan on open source anytime soon in the future? Kudos!
0 points
• Mohsin NaqiMohsin Naqi, 8 years ago
Can I upload images directly from my clipboard? I use puush to do that right now. It's very handy for uploading screenshots from currently opened Photoshop documents. (select area with marquee tool, copy merged, then right click puush and 'Upload Clipboard')
0 points
• Zoltán Hosszú, 8 years ago
Hey Mohsin. That's a good idea! The first version will not have this feature, but I'll definitely add this to my Ravioli wish list :) Thanks!
1 point
Answer to Question #13817 in C++ for Darryl kaye Sanga
Question #13817
Write a program that will compute for n! (n factorial) which is the product of all numbers from 1 to n.
Expert's answer
#include <iostream>

int factorial(int);

int main() {
    int number;
    std::cout << "Please enter a positive integer: ";
    std::cin >> number;
    if (number < 0)
        std::cout << "That is not a positive integer.\n";
    else
        std::cout << number << " factorial is: " << factorial(number) << std::endl;
    return 0;
}

int factorial(int number) {
    // n! = n * (n-1)!, with 0! = 1! = 1
    if (number <= 1)
        return 1;
    return number * factorial(number - 1);
}
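For comparison, the same computation can be written iteratively; here is a Python sketch (a supplementary illustration, not part of the original answer):

```python
import math

def factorial(n: int) -> int:
    """Iteratively multiply 1 * 2 * ... * n; factorial(0) is 1 by convention."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))       # 120
print(math.factorial(5))  # 120 -- the standard-library equivalent
```

The iterative form avoids deep recursion for large n, though for very large n the result quickly exceeds fixed-width integer types in languages like C++.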
I just announced the new Spring Security 5 modules (primarily focused on OAuth2) in the course:
>> CHECK OUT LEARN SPRING SECURITY
1. Introduction
Access Control List (ACL) is a list of permissions attached to an object. An ACL specifies which identities are granted which operations on a given object.
Spring Security Access Control List is a Spring component which supports Domain Object Security. Simply put, Spring ACL helps in defining permissions for specific user/role on a single domain object – instead of across the board, at the typical per-operation level.
For example, a user with the role Admin can see (READ) and edit (WRITE) all messages on a Central Notice Box, but the normal user only can see messages, relate to them and cannot edit. Meanwhile, others user with the role Editor can see and edit some specific messages.
Hence, different user/role has different permission for each specific object. In this case, Spring ACL is capable of achieving the task. We’ll explore how to set up basic permission checking with Spring ACL in this article.
2. Configuration
2.1. ACL Database
To use Spring Security ACL, we need to create four mandatory tables in our database.
The first table is ACL_CLASS, which store class name of the domain object, columns include:
• ID
• CLASS: the class name of secured domain objects, for example: org.baeldung.acl.persistence.entity.NoticeMessage
Secondly, we need the ACL_SID table which allows us to universally identify any principle or authority in the system. The table needs:
• ID
• SID: which is the username or role name. SID stands for Security Identity
• PRINCIPAL: 0 or 1, to indicate that the corresponding SID is a principal (user, such as mary, mike, jack…) or an authority (role, such as ROLE_ADMIN, ROLE_USER, ROLE_EDITOR…)
Next table is ACL_OBJECT_IDENTITY, which stores information for each unique domain object:
• ID
• OBJECT_ID_CLASS: define the domain object class, links to ACL_CLASS table
• OBJECT_ID_IDENTITY: domain objects can be stored in many tables depending on the class. Hence, this field store the target object primary key
• PARENT_OBJECT: specify parent of this Object Identity within this table
• OWNER_SID: ID of the object owner, links to ACL_SID table
• ENTRIES_INHERITING: whether the ACL entries of this object inherit from the parent object (ACL entries are defined in the ACL_ENTRY table)
Finally, the ACL_ENTRY table stores the individual permissions assigned to each SID on an Object Identity:
• ID
• ACL_OBJECT_IDENTITY: specify the object identity, links to ACL_OBJECT_IDENTITY table
• ACL_ORDER: the order of current entry in the ACL entries list of corresponding Object Identity
• SID: the target SID which the permission is granted to or denied from, links to ACL_SID table
• MASK: the integer bit mask that represents the actual permission being granted or denied
• GRANTING: value 1 means granting, value 0 means denying
• AUDIT_SUCCESS and AUDIT_FAILURE: for auditing purpose
2.2. Dependency
To be able to use Spring ACL in our project, let’s first define our dependencies:
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-acl</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context-support</artifactId>
</dependency>
<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache-core</artifactId>
<version>2.6.11</version>
</dependency>
Spring ACL requires a cache to store Object Identity and ACL entries, so we’ll make use of Ehcache here. And, to support Ehcache in Spring, we also need the spring-context-support.
When not working with Spring Boot, we need to add versions explicitly. Those can be checked on Maven Central: spring-security-acl, spring-security-config, spring-context-support, ehcache-core.
2.3. ACL-Related Configuration
We need to secure all methods which return secured domain objects, or make changes to the object, by enabling Global Method Security:
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class AclMethodSecurityConfiguration
extends GlobalMethodSecurityConfiguration {
@Autowired
MethodSecurityExpressionHandler
defaultMethodSecurityExpressionHandler;
@Override
protected MethodSecurityExpressionHandler createExpressionHandler() {
return defaultMethodSecurityExpressionHandler;
}
}
Let’s also enable Expression-Based Access Control by setting prePostEnabled to true to use Spring Expression Language (SpEL). Moreover, we need an expression handler with ACL support:
@Bean
public MethodSecurityExpressionHandler
defaultMethodSecurityExpressionHandler() {
DefaultMethodSecurityExpressionHandler expressionHandler
= new DefaultMethodSecurityExpressionHandler();
AclPermissionEvaluator permissionEvaluator
= new AclPermissionEvaluator(aclService());
expressionHandler.setPermissionEvaluator(permissionEvaluator);
return expressionHandler;
}
Hence, we assign AclPermissionEvaluator to the DefaultMethodSecurityExpressionHandler. The evaluator needs a MutableAclService to load permission settings and domain object’s definitions from the database.
For simplicity, we use the provided JdbcMutableAclService:
@Bean
public JdbcMutableAclService aclService() {
return new JdbcMutableAclService(
dataSource, lookupStrategy(), aclCache());
}
As its name, the JdbcMutableAclService uses JDBCTemplate to simplify database access. It needs a DataSource (for JDBCTemplate), LookupStrategy (provides an optimized lookup when querying the database), and an AclCache (caching ACL Entries and Object Identity).
Again, for simplicity, we use provided BasicLookupStrategy and EhCacheBasedAclCache.
@Autowired
DataSource dataSource;
@Bean
public AclAuthorizationStrategy aclAuthorizationStrategy() {
return new AclAuthorizationStrategyImpl(
new SimpleGrantedAuthority("ROLE_ADMIN"));
}
@Bean
public PermissionGrantingStrategy permissionGrantingStrategy() {
return new DefaultPermissionGrantingStrategy(
new ConsoleAuditLogger());
}
@Bean
public EhCacheBasedAclCache aclCache() {
return new EhCacheBasedAclCache(
aclEhCacheFactoryBean().getObject(),
permissionGrantingStrategy(),
aclAuthorizationStrategy()
);
}
@Bean
public EhCacheFactoryBean aclEhCacheFactoryBean() {
EhCacheFactoryBean ehCacheFactoryBean = new EhCacheFactoryBean();
ehCacheFactoryBean.setCacheManager(aclCacheManager().getObject());
ehCacheFactoryBean.setCacheName("aclCache");
return ehCacheFactoryBean;
}
@Bean
public EhCacheManagerFactoryBean aclCacheManager() {
return new EhCacheManagerFactoryBean();
}
@Bean
public LookupStrategy lookupStrategy() {
return new BasicLookupStrategy(
dataSource,
aclCache(),
aclAuthorizationStrategy(),
new ConsoleAuditLogger()
);
}
Here, the AclAuthorizationStrategy is in charge of concluding whether a current user possesses all required permissions on certain objects or not.
It needs the support of PermissionGrantingStrategy, which defines the logic for determining whether a permission is granted to a particular SID.
3. Method Security With Spring ACL
So far, we’ve done all necessary configuration. Now we can put required checking rule on our secured methods.
By default, Spring ACL refers to BasePermission class for all available permissions. Basically, we have a READ, WRITE, CREATE, DELETE and ADMINISTRATION permission.
Let’s try to define some security rules:
@PostFilter("hasPermission(filterObject, 'READ')")
List<NoticeMessage> findAll();
@PostAuthorize("hasPermission(returnObject, 'READ')")
NoticeMessage findById(Integer id);
@PreAuthorize("hasPermission(#noticeMessage, 'WRITE')")
NoticeMessage save(@Param("noticeMessage")NoticeMessage noticeMessage);
After the execution of the findAll() method, @PostFilter will be triggered. The required rule, hasPermission(filterObject, 'READ'), means returning only those NoticeMessage objects on which the current user has READ permission.
Similarly, @PostAuthorize is triggered after the execution of the findById() method, making sure the NoticeMessage object is returned only if the current user has READ permission on it. If not, the system will throw an AccessDeniedException.
On the other side, the system triggers the @PreAuthorize annotation before invoking the save() method. It will decide whether the corresponding method is allowed to execute or not. If not, an AccessDeniedException will be thrown.
4. In Action
Now we're going to test all those configurations using JUnit. We'll use an H2 database to keep the configuration as simple as possible.
We’ll need to add:
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-test</artifactId>
<scope>test</scope>
</dependency>
4.1. The Scenario
In this scenario, we’ll have two users (manager, hr) and a one user role (ROLE_EDITOR), so our acl_sid will be:
INSERT INTO acl_sid (id, principal, sid) VALUES
(1, 1, 'manager'),
(2, 1, 'hr'),
(3, 0, 'ROLE_EDITOR');
Then, we need to declare NoticeMessage class in acl_class. And three instances of NoticeMessage class will be inserted in system_message.
Moreover, corresponding records for those 3 instances must be declared in acl_object_identity:
INSERT INTO acl_class (id, class) VALUES
(1, 'org.baeldung.acl.persistence.entity.NoticeMessage');
INSERT INTO system_message(id,content) VALUES
(1,'First Level Message'),
(2,'Second Level Message'),
(3,'Third Level Message');
INSERT INTO acl_object_identity
(id, object_id_class, object_id_identity,
parent_object, owner_sid, entries_inheriting)
VALUES
(1, 1, 1, NULL, 3, 0),
(2, 1, 2, NULL, 3, 0),
(3, 1, 3, NULL, 3, 0);
Initially, we grant READ and WRITE permissions on the first object (id=1) to the user manager. Meanwhile, any user with ROLE_EDITOR will have READ permission on all three objects but WRITE permission only on the third object (id=3). In addition, user hr will have only READ permission on the second object.
Here, because we use default Spring ACL BasePermission class for permission checking, the mask value of the READ permission will be 1, and the mask value of WRITE permission will be 2. Our data in acl_entry will be:
INSERT INTO acl_entry
(id, acl_object_identity, ace_order,
sid, mask, granting, audit_success, audit_failure)
VALUES
(1, 1, 1, 1, 1, 1, 1, 1),
(2, 1, 2, 1, 2, 1, 1, 1),
(3, 1, 3, 3, 1, 1, 1, 1),
(4, 2, 1, 2, 1, 1, 1, 1),
(5, 2, 2, 3, 1, 1, 1, 1),
(6, 3, 1, 3, 1, 1, 1, 1),
(7, 3, 2, 3, 2, 1, 1, 1);
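As a side note on the mask column: Spring's BasePermission assigns one bit per permission (READ=1, WRITE=2, CREATE=4, DELETE=8, ADMINISTRATION=16). Here is a quick sketch of how such bit masks combine and are checked, in Python rather than Java, for illustration only — note that in the table above each acl_entry row stores a single permission's mask:

```python
# One bit per permission, mirroring Spring Security's BasePermission constants
READ, WRITE, CREATE, DELETE, ADMINISTRATION = (1 << i for i in range(5))

granted = READ | WRITE           # a combined mask covering READ and WRITE
print(granted)                   # 3
print(bool(granted & READ))      # True  -- READ is covered
print(bool(granted & DELETE))    # False -- DELETE is not
```

Testing a bit with `&` is the same check a permission evaluator performs against a stored integer mask.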
4.2. Test Case
First of all, we try to call the findAll method.
As our configuration, the method returns only those NoticeMessage on which the user has READ permission.
Hence, we expect the result list contains only the first message:
@Test
@WithMockUser(username = "manager")
public void
givenUserManager_whenFindAllMessage_thenReturnFirstMessage(){
List<NoticeMessage> details = repo.findAll();
assertNotNull(details);
assertEquals(1,details.size());
assertEquals(FIRST_MESSAGE_ID,details.get(0).getId());
}
Then we try to call the same method with any user which has the role – ROLE_EDITOR. Note that, in this case, these users have the READ permission on all three objects.
Hence, we expect the result list will contain all three messages:
@Test
@WithMockUser(roles = {"EDITOR"})
public void
givenRoleEditor_whenFindAllMessage_thenReturn3Message(){
List<NoticeMessage> details = repo.findAll();
assertNotNull(details);
assertEquals(3,details.size());
}
Next, using the manager user, we’ll try to get the first message by id and update its content – which should all work fine:
@Test
@WithMockUser(username = "manager")
public void
givenUserManager_whenFind1stMessageByIdAndUpdateItsContent_thenOK(){
NoticeMessage firstMessage = repo.findById(FIRST_MESSAGE_ID);
assertNotNull(firstMessage);
assertEquals(FIRST_MESSAGE_ID,firstMessage.getId());
firstMessage.setContent(EDITTED_CONTENT);
repo.save(firstMessage);
NoticeMessage editedFirstMessage = repo.findById(FIRST_MESSAGE_ID);
assertNotNull(editedFirstMessage);
assertEquals(FIRST_MESSAGE_ID,editedFirstMessage.getId());
assertEquals(EDITTED_CONTENT,editedFirstMessage.getContent());
}
But if any user with the ROLE_EDITOR role updates the content of the first message – our system will throw an AccessDeniedException:
@Test(expected = AccessDeniedException.class)
@WithMockUser(roles = {"EDITOR"})
public void
givenRoleEditor_whenFind1stMessageByIdAndUpdateContent_thenFail(){
NoticeMessage firstMessage = repo.findById(FIRST_MESSAGE_ID);
assertNotNull(firstMessage);
assertEquals(FIRST_MESSAGE_ID,firstMessage.getId());
firstMessage.setContent(EDITTED_CONTENT);
repo.save(firstMessage);
}
Similarly, the hr user can find the second message by id, but will fail to update it:
@Test
@WithMockUser(username = "hr")
public void givenUsernameHr_whenFindMessageById2_thenOK(){
NoticeMessage secondMessage = repo.findById(SECOND_MESSAGE_ID);
assertNotNull(secondMessage);
assertEquals(SECOND_MESSAGE_ID,secondMessage.getId());
}
@Test(expected = AccessDeniedException.class)
@WithMockUser(username = "hr")
public void givenUsernameHr_whenUpdateMessageWithId2_thenFail(){
NoticeMessage secondMessage = new NoticeMessage();
secondMessage.setId(SECOND_MESSAGE_ID);
secondMessage.setContent(EDITTED_CONTENT);
repo.save(secondMessage);
}
5. Conclusion
We’ve gone through basic configuration and usage of Spring ACL in this article.
As we know, Spring ACL requires specific tables for managing objects, principals/authorities, and permission settings. All interactions with those tables, especially update actions, must go through the AclService. We'll explore this service for basic CRUD actions in a future article.
By default, we are restricted to the predefined permissions in the BasePermission class.
Finally, the implementation of this tutorial can be found over on GitHub.
Fábio Moriguchi (Guest)
Hi,
Is it possible to use pagination with Spring Data and Spring Acl? Is there any integration?
Thanks !
Grzegorz Piwowarek (Editor)
ACL is just an access management strategy so it shouldn’t matter if you paginate or not
Learn how to enable the detection of duplicate assets in AEM.
If you attempt to upload an asset that exists in Adobe Experience Manager (AEM) Assets, the duplicate detection feature identifies it as duplicate. Duplicate detection is disabled by default. To enable the feature, do the following steps:
1. Go to the AEM Web Console Configuration page at http://[server]:[port]/system/console/configMgr.
2. Edit the configuration for the servlet Day CQ DAM Create Asset.
3. Select the detect duplicate option, and click/tap Save.
Select detect duplicate option in the servlet
The detect duplicate feature is now enabled in AEM Assets. When a user attempts to upload an asset that exists in AEM, the system checks for conflict and indicates it. The assets are identified using SHA-1 hash stored at jcr:content/metadata/dam:sha1, which means duplicate assets are detected irrespective of the filenames.
Depth-First Search in a matrix
Hi everyone, I need some help… I have a depth-first search that works on a matrix, using the values inserted into it… but I need it to work by showing the positions in the matrix… because it is a matrix of 0s and 1s (a maze: 0 is empty space and 1 is a wall), and I just can't untangle it…
This is my graph:
[code]public class Digrafo
{
int numVertice;
int matAdj[][];
public Digrafo(int num)
{
int i,j;
this.numVertice=num;
int matAdj[][]=new int[num+1][num+1];
for(i=1;i<=num;++i)
for(j=1;j<=num;++j)
matAdj[i][j]=0;
this.matAdj = matAdj;
}
public void insereVertice(int v1, int v2)
{
if (v1<=numVertice && v2<=numVertice)
matAdj[v1][v2]=1;
}
public void retiraVertice(int v1, int v2)
{
if (v1<=numVertice && v2<=numVertice)
matAdj[v1][v2]=0;
}
public String Grafo()
{
int i,j;
String x="";
for(i=1;i<=numVertice;++i)
for(j=1;j<=numVertice;++j)
if(matAdj[i][j]==1)
x=x+"\n("+i+", "+j+")";
return x;
}
}
[/code]
This is my search…
[code]private void jBtnBuscaemProfundidade_actionPerformed(ActionEvent e) {
    int visited[] = new int[G.numVertice+1];
    int pai[] = new int[G.numVertice+1];
    int p[] = new int[G.numVertice*G.numVertice+1];
    int orig = Integer.parseInt(jTxtFldVertice1.getText());
    int dest = Integer.parseInt(jTxtFldVertice2.getText());
    int topo = 1, atual, i;
    boolean encontrou = false;
    String caminho = "";
    for (i = 1; i <= G.numVertice; ++i) {
        pai[i] = 0;
        visited[i] = 0;
    }
    for (i = 1; i <= G.numVertice*G.numVertice; ++i)
        p[i] = 0;
    p[topo] = orig;
    while (topo > 0) {
        atual = p[topo];
        --topo;
        if (atual == dest) {
            encontrou = true;
            break;
        }
        if (visited[atual] == 0)
            for (i = 1; i <= G.numVertice; ++i)
                if (G.matAdj[atual][i] == 1) {
                    ++topo;
                    p[topo] = i;
                    if (pai[i] == 0)
                        pai[i] = atual;
                }
    }
    if (encontrou) {
        topo = 0;
        caminho = "" + dest;
        i = pai[dest];
        while (i != orig) {
            ++topo;
            caminho = i + "\n" + caminho;
            i = pai[i];
        }
        caminho = orig + "\n" + caminho;
        JOptionPane.showMessageDialog(null, caminho);
    }
}[/code]
Thanks in advance…
Cheers!
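To make the position-based idea concrete, here is a hedged sketch (in Python rather than Java, and not the poster's code) of an iterative DFS that works directly on a 0/1 maze matrix and reports the path as (row, col) positions:

```python
def dfs_maze(maze, start, goal):
    """Return a list of (row, col) positions from start to goal, or None if unreachable."""
    rows, cols = len(maze), len(maze[0])
    stack = [start]
    parent = {start: None}          # plays the role of the pai[] array, keyed by position
    while stack:
        r, c = stack.pop()
        if (r, c) == goal:
            # Rebuild the path by walking parents back to the start
            path, cur = [], (r, c)
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0          # 0 = empty space, 1 = wall
                    and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                stack.append((nr, nc))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(dfs_maze(maze, (0, 0), (0, 2)))
```

Instead of numbering vertices and consulting an adjacency matrix, the neighbors of a cell are simply the four adjacent cells that are inside the grid and not walls.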
links always show up on their own line (3 posts)
1. momsheadquarters
Member
Posted 5 years ago #
Why do my links, linked text, emails etc. always show up on their own line? Is there a simple way to make it so they are simply embedded in a line of text? Do I have to edit each page/post? Is there a way to edit the stylesheet so that they are ALWAYS embedded?
Thanks
2. momsheadquarters
Member
Posted 5 years ago #
Here's an example:
http://www.momsheadquarters.com/the-headquarters/family-2/baby/cloth-diaper-review-2/swimmer-diapers/monkey-doodlez
Here I purposely put these links on their own line. However, I would like to be able to say e-mail:xxxxxxxxxxxxxxx or contact:xxxxxxxxxxxx and be able to have it both the link type and the link on the same line. Does this make sense?
Thank you
3. momsheadquarters
Member
Posted 5 years ago #
I would prefer to edit it in the stylesheet so that I don't have to edit each and every page.
Topic Closed
This topic has been closed to new replies.
Grade 7 - Mathematics
7.11 Excentres, Median, Centroid, Altitude and Orthocentre
Points to Remember:
1. An excircle or escribed circle of the triangle is a circle lying outside the triangle, tangent to one of its sides and tangent to the extensions of the other two. Every triangle has three distinct excircles, each tangent to one of the triangle's sides.
2. The centre of one of the excircles of a given triangle is known as the excentre. It is the point of intersection of one of the internal angle bisectors and two of the external angle bisectors of the triangle.
Example:
Identify the excircles and the excentres in the above figure.
Solution:
In triangle ABC, the excircles are the 3 circles outside the triangle with excentres F, H and D respectively.
3. The medians of a triangle are the line segments joining the vertices of the triangle to the midpoints of the opposite sides.
4. The medians of a triangle are concurrent.
Example:
Identify the median of the above triangle.
Solution:
In the triangle, 'AD' is the median.
5. The centroid of a triangle is the point of concurrence of its medians.
6. The centroid of a triangle divides the line segment joining any vertex to the midpoint of the opposite side in the ratio 2:1. In short centroid is a point of trisection of each median.
Example:
Identify the centroid of the triangle given above.
Solution:
In the triangle, 'D' is the centroid.
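A quick coordinate check of the 2:1 property, using illustrative numbers:

```latex
% Take A=(0,0), B=(6,0), C=(0,6); D is the midpoint of BC, so D=(3,3).
\[
G=\left(\tfrac{0+6+0}{3},\,\tfrac{0+0+6}{3}\right)=(2,2),
\qquad AG=\sqrt{2^2+2^2}=2\sqrt{2},
\quad GD=\sqrt{1^2+1^2}=\sqrt{2},
\quad \therefore\ AG:GD=2:1.
\]
```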
7. Altitudes of a triangle are the perpendiculars drawn from the vertices of a triangle to the opposite sides.
8. The altitude of a triangle are concurrent.
Example:
Identify the altitude in the given triangle
Solution:
In the triangle, 'AD' is the altitude.
9. The orthocenter is the point of concurrence of the altitudes of a triangle.
10. When the triangle is acute, then the orthocenter falls inside the triangle.
11. When the triangle is obtuse the orthocenter falls outside the triangle.
12. When the triangle is right angled, the orthocenter coincides with the vertex at which right angle is formed.
Example:
Identify the orthocentre in the given triangle.
Solution:
In the triangle the orthocentre is 'G'.
Hence we can conclude as follows.
Median — the line segment joining a vertex of a triangle to the midpoint of the opposite side; the medians are concurrent, and their point of concurrence is the Centroid.
Altitude — the perpendicular drawn from a vertex of a triangle to the opposite side; the altitudes are concurrent, and their point of concurrence is the Orthocentre.
Directions: Read the above review points carefully and answer the following questions:
1. Illustrate each of the above review points by drawing the excentres, the centroid and the orthocentre of different triangles.
2. Explain in your own words the different properties of excentres, centroid and circumcentre of a triangle, with examples.
3. Draw the centroid of a triangle with sides 5 cm, 9 cm and 7 cm.
4. Draw the orthocentre of triangle whose sides are given by 10 cm, 7 cm and 8 cm.
Q 1: Among the points the excentres, the circumcentre, the incentre, the orthocentre and the centroid, ________ may lie on the triangle.
the circumcentre and the orthocentre
the incentre and the centroid
the excentres
Q 2: The point of concurrence of the medians of a triangle is called the _______ of the triangle.
excentres
centroid
orthocentre
Q 3: In an equilateral triangle the circumcentre, the orthocentre, the incentre and the centroid _________.
are the excentres
are midpoints
coincides
Q 4: Among the points the excentres, the circumcentre, the incentre, the orthocentre and the centroid. The points that always lie inside the triangle are _______.
the incentre and the centroid
the circumcentre and the orthocentre
the excentres
Q 5: Among the points the excentres, the circumcentre, the incentre, the orthocentre and the centroid, ________ always lie outside the triangle.
the incentre and the centroid
the circumcentre and the orthocentre
the excentres
Q 6: In a right-angled triangle its orthocentre coincides with the _________.
midpoint of the hypotenuse
midpoint
the excentres
TypeScript 1.7
Support for async/await when compiling to ES6 (Node v4+)
TypeScript now supports asynchronous functions on engines that natively support ES6 generators (for example, Node v4 and above). Asynchronous functions are prefixed with the async keyword; await suspends execution until the promise returned by an awaited asynchronous function is fulfilled, at which point it obtains the promise's value.
Example
In the example below, each input element will be printed one by one with a 200 millisecond delay:
"use strict";
// printDelayed returns a 'Promise<void>'
async function printDelayed(elements: string[]) {
for (const element of elements) {
await delay(200);
console.log(element);
}
}
async function delay(milliseconds: number) {
return new Promise<void>(resolve => {
setTimeout(resolve, milliseconds);
});
}
printDelayed(["Hello", "beautiful", "asynchronous", "world"]).then(() => {
console.log();
console.log("Printed every element!");
});
See the Async Functions documentation for more details.
Support for --target ES6 together with --module
TypeScript 1.7 adds ES6 to the list of options supported by the --module flag, making it possible to specify a module format when targeting ES6. This gives you more flexibility to use exactly the features you need in your target runtime.
Example
{
"compilerOptions": {
"module": "amd",
"target": "es6"
}
}
this types
Returning the current object (that is, this) from a method is a common way to build chainable APIs. For example, consider the following BasicCalculator module:
export default class BasicCalculator {
public constructor(protected value: number = 0) { }
public currentValue(): number {
return this.value;
}
public add(operand: number) {
this.value += operand;
return this;
}
public subtract(operand: number) {
this.value -= operand;
return this;
}
public multiply(operand: number) {
this.value *= operand;
return this;
}
public divide(operand: number) {
this.value /= operand;
return this;
}
}
A user could express 2 * 5 + 1 as follows:
import calc from "./BasicCalculator";
let v = new calc(2)
.multiply(5)
.add(1)
.currentValue();
This makes such an elegant coding style possible; however, there is a problem for classes that want to extend BasicCalculator. Imagine a user wants to write a ScientificCalculator:
import BasicCalculator from "./BasicCalculator";
export default class ScientificCalculator extends BasicCalculator {
public constructor(value = 0) {
super(value);
}
public square() {
this.value = this.value ** 2;
return this;
}
public sin() {
this.value = Math.sin(this.value);
return this;
}
}
Because BasicCalculator's methods return this, TypeScript used to infer the return type BasicCalculator, and the type system could not cope well with methods belonging to BasicCalculator being called on a ScientificCalculator instance.
For example:
import calc from "./ScientificCalculator";
let v = new calc(0.5)
.square()
.divide(2)
.sin() // Error: 'BasicCalculator' has no 'sin' method.
.currentValue();
This is no longer a problem — in a class's instance methods, TypeScript now infers this to have a special type called this. The this type is written as this, and can roughly be understood as "the type of the expression to the left of the dot at the method call site".
The this type is also useful with intersection types for describing libraries (such as Ember.js) that use mixin-style inheritance:
interface MyType {
extend<T>(other: T): this & T;
}
ES7 exponentiation operator
TypeScript 1.7 supports the exponentiation operators that will be added in ES7/ES2016: ** and **=. These operators are transformed to Math.pow in ES3/ES5 output.
Example
var x = 2 ** 3;
var y = 10;
y **= 2;
var z = -(4 ** 3);
This generates the following JavaScript:
var x = Math.pow(2, 3);
var y = 10;
y = Math.pow(y, 2);
var z = -(Math.pow(4, 3));
Improved checking of destructuring object and array literals
TypeScript 1.7 makes checking of object and array literals with corresponding destructuring initializers more intuitive and natural.
When an object literal's type is inferred from a corresponding object binding pattern:
• Properties with default values in the object binding pattern become optional in the object literal.
• Properties in the object binding pattern that have no match in the object literal must have a default value in the binding pattern, and are added to the type of the object literal.
• Properties in the object literal must exist in the object binding pattern.
When an array literal's type is inferred from a corresponding array binding pattern:
• Elements in the array binding pattern that have no match in the array literal must have a default value in the binding pattern, and are added to the type of the array literal.
Examples
// Type of f1 is (arg?: { x?: number, y?: number }) => void
function f1({ x = 0, y = 0 } = {}) { }
// And can be called as:
f1();
f1({});
f1({ x: 1 });
f1({ y: 1 });
f1({ x: 1, y: 1 });
// Type of f2 is (arg?: { x: number, y?: number }) => void
function f2({ x, y = 0 } = { x: 0 }) { }
f2();
f2({}); // Error, x is not optional
f2({ x: 1 });
f2({ y: 1 }); // Error, x is not optional
f2({ x: 1, y: 1 });
Support for decorators when targeting ES3
Decorators are now allowed when compiling down to ES3. TypeScript 1.7 removes the ES5-specific use of reduceRight from the __decorate helper. The change also inlines calls to Object.getOwnPropertyDescriptor and Object.defineProperty in a backwards-compatible way, allowing ES5 output to remove the duplication of those Object method calls.[1]
1. Introduction
Often, restarting Secure Shell (SSH) sessions is a daily occurrence. Because of this, it’s usually optimal to have repeating commands automated on entry.
In this tutorial, we explore ways to automate command execution during SSH session establishment. Our main example is a simple directory change. First, we briefly discuss remote administration and the repetitive tasks it may introduce. Next, we look at system solutions for an automatic directory change. After that, we turn to SSH for ways to automate command execution. Finally, an SSH configuration option takes the spotlight as the potential best solution.
For brevity and security reasons, we only consider the newest iteration of SSH version 2 (SSHv2) as implemented by OpenSSH.
We tested the code in this tutorial on Debian 11 (Bullseye) with GNU Bash 5.1.4 and OpenSSH 8.4p1. It should work in most POSIX-compliant environments.
2. Remote Administration
Indeed, there are many different reasons for establishing an SSH session:
During network troubleshooting, there are sets of commands most administrators would run directly. However, if we automate repetitive actions for each SSH session, we can minimize our time when performing the task.
For the last two points from the list, administrators usually start by changing to a specific directory. For example, we might switch to a log (/var/logs), web server content (/var/www), or other common location.
Let’s take /var/log as an example. Switching to that path is easy with a simple command:
$ cd /var/logs
However, repeating an action on each connection can become an unnecessary delay and burden. Indeed, we have several options to avoid this.
3. System Solutions
Naturally, one of the simplest ways to run a command automatically comes from the system itself without relating to the access protocol.
3.1. Shell Configuration
As usual, we can employ the configuration of our shell to automate command execution in interactive and non-interactive sessions.
For example, Bash has the .bash_profile and .bashrc files, respectively. Both are usually in the $HOME directory but have several alternative locations.
Actually, including any command in either Bash session setup file should suffice as both source .bashrc:
$ echo 'cd /var/log' >> $HOME/.bashrc
$ echo 'cd /var/log' >> $HOME/.bash_profile
In the case of the latter command, we would only have the directory change in interactive sessions. Further, as one alternative to .bash_profile, the .profile file is more versatile in terms of shell compatibility.
3.2. Environment Variables
By default, OpenSSH uses AcceptEnv to pass through locale and language data:
AcceptEnv LANG LC_*
Building on top of the shell configuration changes from earlier, we can pass information with our client connection via an LC_* variable and the SendEnv option:
$ LC_CDPATH=/var/www ssh -o SendEnv=LC_CDPATH baeldung@web
Here, we use the ssh client with its -o switch to introduce more options. Also, the command is prepended with a local environment variable assignment.
In practice, we can include the SendEnv option in our ssh_config file and then use $LC_CDPATH as our path in .bashrc or .bash_profile.
3.3. User Home Directory
When a directory switch is our only aim, depending on the circumstances, changing a given user’s home directory can be a viable option.
If our website administrator is webadmin, we can change the home directory of that user per our needs:
$ usermod --home '/var/www' webadmin
Here, we use the –home (-d) flag with the new path to change the supplied user’s home.
4. Authorized SSH Key Commands
On the SSH side, we can employ a lesser-known feature of the authorized_keys file to execute commands on all connections with a given public key.
By prepending command= to a line in authorized_keys, we can include a directory switch:
command="cd /var/www; /bin/bash -i" ssh-rsa AAAAB3NzaC1yc2EAAAAD[...]J9w7W0eJ/Yqr1hc6QjU= baeldung@web
Note that the contents of command must be surrounded by double quotes. In fact, there are many other options and rules in authorized_keys.
Here, we use a script with two statements, as it needs to execute our desired action but also provide a shell afterward. In this case, it’s an interactive (-i) Bash session, but we can change that accordingly.
Critically, there are several potentially fatal flaws with this method:
• can override the default shell
• must specify session type on our own
• the prepended command affects other tools like scp and rsync, as well as all non-interactive sessions
• any error in the command or authorized_keys file syntax can prevent login, locking us out of our remote system
Still, the same user can create two keys and have only one change the current path. Further, we can configure this behavior in ssh_config on the client side and base it on the hostname:
Host xost
HostName 192.168.6.66
User webadmin
IdentityFile ~/.ssh/key
Host webxost
HostName 192.168.6.66
User webadmin
IdentityFile ~/.ssh/commandkey
Now, connecting to webxost employs the key, which invokes a command, while using xost connects as usual.
5. SSH RemoteCommand
Finally, perhaps the most standard way for executing any command when establishing a remote SSH session is the aptly-named RemoteCommand statement, part of the ssh_config client configuration since OpenSSH version 7.6.
By using this statement, the setup becomes simple:
Host webxost
HostName 192.168.6.66
User webadmin
IdentityFile ~/.ssh/key
RequestTTY force
RemoteCommand 'cd /var/www; /bin/bash -i'
At this point, when connecting to 192.168.6.66 via the hostname webxost, we should end up in /var/www and an interactive Bash session. Because of the RemoteCommand statement, we need to explicitly force a TTY with RequestTTY.
This way, we have a properly set up shell session already in the correct directory.
6. Summary
In this article, we looked at ways to execute remote commands when establishing an SSH session with the example of a simple directory change.
In conclusion, while there are many ways to approach the problem from our example, the RemoteCommand option stands out as the standard solution.
JAXB Tutorial
JAXB tutorial provides concepts and API to convert object into XML and XML into object. Our JAXB tutorial is designed for beginners and professionals.
JAXB stands for Java Architecture for XML Binding. It provides a mechanism to marshal (write) Java objects into XML and to unmarshal (read) XML into objects. Simply put, it is used to convert Java objects into XML and vice versa.
JAXB 2 Tutorial
Features of JAXB 2.0
JAXB 2.0 includes several features that were not present in JAXB 1.x. They are as follows:
1) Annotation support: JAXB 2.0 provides support for annotations, so less coding is required to develop a JAXB application. The javax.xml.bind.annotation package provides the classes and interfaces for JAXB 2.0.
2) Support for all W3C XML Schema features: it supports all W3C XML Schema features, unlike JAXB 1.0.
3) Additional validation capabilities: it provides additional validation support through the JAXP 1.3 validation API.
4) Small runtime library: it requires a smaller runtime library than JAXB 1.0.
5) Reduction of generated schema-derived classes: it greatly reduces the number of generated schema-derived classes.
Simple JAXB Marshalling Example: Converting Object into XML
Let's see the steps to convert java object into XML document.
• Create POJO or bind the schema and generate the classes
• Create the JAXBContext object
• Create the Marshaller objects
• Create the content tree by using set methods
• Call the marshal method
File: Employee.java
@XmlRootElement specifies the root element for the xml document.
@XmlAttribute specifies the attribute for the root element.
@XmlElement specifies the sub element for the root element.
File: ObjectToXml.java
Output:
The generated xml file will look like this:
File: employee.xml
Simple JAXB UnMarshalling Example: Converting XML into Object
File: XMLToObject.java
Output:
1 Vimal Jaiswal 50000.0
As an example I have a text field that might contain the following string:
"d7199^^==^^81^^==^^A sentence or two!!"
I want to tokenize this data but have each token contain the first part of the string. So, I'd like the tokens to look like this for the example above:
"d7199^^==^^81^^==^^a"
"d7199^^==^^81^^==^^sentence"
"d7199^^==^^81^^==^^or"
"d7199^^==^^81^^==^^two"
How would I go about doing this?
1 Answer
You can implement your own custom Tokenizer and add it to the Solr classpath. Then use it in your Solr schema.xml and solrconfig.xml
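The Solr-side plumbing lives in a custom Tokenizer/TokenFilter, but the prefixing logic itself is small. A minimal Python sketch of the transformation such a filter would perform — not Solr code, and the separator string is an assumption taken from the question:

```python
SEP = "^^==^^"  # assumed field separator from the question's example

def prefix_tokenize(field_value: str) -> list[str]:
    """Split the field on the separator, keep the leading parts as a shared
    prefix, and attach that prefix to every whitespace token of the tail."""
    *prefix_parts, text = field_value.split(SEP)
    prefix = SEP.join(prefix_parts) + SEP
    # Lowercase and strip trailing punctuation, as an analyzer chain would.
    tokens = [t.strip("!?.,").lower() for t in text.split()]
    return [prefix + t for t in tokens if t]

print(prefix_tokenize("d7199^^==^^81^^==^^A sentence or two!!"))
# → ['d7199^^==^^81^^==^^a', 'd7199^^==^^81^^==^^sentence',
#    'd7199^^==^^81^^==^^or', 'd7199^^==^^81^^==^^two']
```

In a real TokenFilter you would apply the same prefix to each term emitted by the upstream tokenizer.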
After a bit of research this was my most logical conclusion as well. If you can give me some good examples the bounty all be yers! – Jason Palmer Sep 1 '11 at 13:19
How do you know when the first part of the input reaches its end? – jpountz Sep 2 '11 at 15:50
I could either define a different separator or we could just have it end at the last token ^^==^^. Or something else if you have a better suggestion. 3 more days until the bounty expires :( – Jason Palmer Sep 5 '11 at 16:08
1
It seems obvious that one must subclass a Tokenizer but HOW? – gyozo kudor May 25 '12 at 12:22
I have the same question, and i dont understand what is Solr classpath... – trillions Jun 28 '12 at 4:44
Create an ad-hoc schema
In specific cases, it may be necessary to create an Experience Data Model (XDM) schema with fields that are namespaced for use only by a single dataset. This is referred to as an "ad-hoc" schema. Ad-hoc schemas are used in various data ingestion workflows for Experience Platform, including ingesting CSV files and creating certain kinds of source connections.
This document provides general steps for creating an ad-hoc schema using the Schema Registry API. It is intended to be used in conjunction with other Experience Platform tutorials that require the creation of an ad-hoc schema as part of their workflow. Each of those documents provides detailed information on how to properly configure an ad-hoc schema for its specific use case.
Getting started
This tutorial requires a working understanding of Experience Data Model (XDM). Before beginning this tutorial, review the following XDM documentation:
Before beginning this tutorial, review the developer guide for important information that you need to know in order to successfully make calls to the Schema Registry API. This includes your {TENANT_ID}, the concept of "containers", and the required headers for making requests (with special attention to the Accept header and its possible values).
Create an ad-hoc class
The data behavior of an XDM schema is determined by its underlying class. The first step in creating an ad-hoc schema is to create a class based on the adhoc behavior. This is done by making a POST request to the /tenant/classes endpoint.
API format
POST /tenant/classes
Request
The following request creates a new XDM class, configured by the attributes supplied in the payload. By supplying a $ref property set to https://ns.adobe.com/xdm/data/adhoc in the allOf array, the class inherits the adhoc behavior. The request also defines an _adhoc object, which contains the custom fields for the class.
NOTE
The custom fields defined under _adhoc vary depending on the use case of the ad-hoc schema. See the specific workflow in the appropriate tutorial for the required custom fields based on the use case.
curl -X POST \
https://platform.adobe.io/data/foundation/schemaregistry/tenant/classes \
-H 'Authorization: Bearer {ACCESS_TOKEN}' \
-H 'Content-Type: application/json' \
-H 'x-api-key: {API_KEY}' \
-H 'x-gw-ims-org-id: {ORG_ID}' \
-H 'x-sandbox-name: {SANDBOX_NAME}' \
-d '{
"title":"New ad-hoc class",
"description": "New ad-hoc class description",
"type":"object",
"allOf": [
{
"$ref":"https://ns.adobe.com/xdm/data/adhoc"
},
{
"properties": {
"_adhoc": {
"type":"object",
"properties": {
"field1": {
"type":"string"
},
"field2": {
"type":"string"
}
}
}
}
}
]
}'
Property
Description
$ref
The data behavior for the new class. For ad-hoc classes, this value must be set to https://ns.adobe.com/xdm/data/adhoc.
properties._adhoc
An object that contains the custom fields for the class, expressed as key-value pairs of field names and data types.
Response
A successful response returns the details of the new class, with the name of the properties._adhoc object replaced by a system-generated, read-only GUID that uniquely identifies the class. The meta:datasetNamespace attribute is also automatically generated and included in the response.
{
"$id": "https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f",
"meta:altId": "_{TENANT_ID}.classes.6395cbd58812a6d64c4e5344f7b9120f",
"meta:resourceType": "classes",
"version": "1.0",
"title": "New Class",
"description": "New class description",
"type": "object",
"allOf": [
{
"$ref": "https://ns.adobe.com/xdm/data/adhoc"
},
{
"properties": {
"_6395cbd58812a6d64c4e5344f7b9120f": {
"type": "object",
"properties": {
"field1": {
"type": "string",
"meta:xdmType": "string"
},
"field2": {
"type": "string",
"meta:xdmType": "string"
}
},
"meta:xdmType": "object"
}
},
"type": "object",
"meta:xdmType": "object"
}
],
"meta:abstract": true,
"meta:extensible": true,
"meta:extends": [
"https://ns.adobe.com/xdm/data/adhoc"
],
"meta:containerId": "tenant",
"meta:datasetNamespace": "_6395cbd58812a6d64c4e5344f7b9120f",
"imsOrg": "{ORG_ID}",
"meta:xdmType": "object",
"meta:registryMetadata": {
"repo:createdDate": 1557527784822,
"repo:lastModifiedDate": 1557527784822,
"xdm:createdClientId": "{CREATED_CLIENT}",
"xdm:lastModifiedClientId": "{MODIFIED_CLIENT}",
"eTag": "Jggrlh4PQdZUvDUhQHXKx38iTQo="
}
}
Property
Description
$id
A URI that serves as the read-only, system-generated unique identifier of the new ad-hoc class. This value is used in the next step of creating an ad-hoc schema.
Create an ad-hoc schema
Once the ad-hoc class has been created, a new schema that implements that class can be created by making a POST request to the /tenant/schemas endpoint.
API format
POST /tenant/schemas
Request
The following request creates a new schema, providing a reference ($ref) to the $id of the previously created ad-hoc class in its payload.
curl -X POST \
https://platform.adobe.io/data/foundation/schemaregistry/tenant/schemas \
-H 'Authorization: Bearer {ACCESS_TOKEN}' \
-H 'Content-Type: application/json' \
-H 'x-api-key: {API_KEY}' \
-H 'x-gw-ims-org-id: {ORG_ID}' \
-H 'x-sandbox-name: {SANDBOX_NAME}' \
-d '{
"title":"New Schema",
"description": "New schema description.",
"type":"object",
"allOf": [
{
"$ref":"https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f"
}
]
}'
Response
A successful response returns the details of the newly created schema, including its system-generated, read-only $id.
{
"$id": "https://ns.adobe.com/{TENANT_ID}/schemas/26f6833e55db1dd8308aa07a64f2042d",
"meta:altId": "_{TENANT_ID}.schemas.26f6833e55db1dd8308aa07a64f2042d",
"meta:resourceType": "schemas",
"version": "1.0",
"title": "New Schema",
"description": "New schema description.",
"type": "object",
"allOf": [
{
"$ref": "https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f"
}
],
"meta:datasetNamespace": "_6395cbd58812a6d64c4e5344f7b9120f",
"meta:class": "https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f",
"meta:abstract": false,
"meta:extensible": false,
"meta:extends": [
"https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f",
"https://ns.adobe.com/xdm/data/adhoc"
],
"meta:containerId": "tenant",
"imsOrg": "{ORG_ID}",
"meta:xdmType": "object",
"meta:registryMetadata": {
"repo:createdDate": 1557528570542,
"repo:lastModifiedDate": 1557528570542,
"xdm:createdClientId": "{CREATED_CLIENT}",
"xdm:lastModifiedClientId": "{MODIFIED_CLIENT}",
"eTag": "Jggrlh4PQdZUvDUhQHXKx38iTQo="
}
}
View the full ad-hoc schema
NOTE
This step is optional. If you do not wish to inspect the field structure of your ad-hoc schema, you can skip to the next steps section at the end of this tutorial.
Once the ad-hoc schema has been created, you can make a lookup (GET) request to view the schema in its expanded form. This is done by using the appropriate Accept header in a GET request, as shown below.
API format
GET /tenant/schemas/{SCHEMA_ID}
Parameter
Description
{SCHEMA_ID}
The URL-encoded $id URI or meta:altId of the ad-hoc schema you want to access.
Request
The following request uses the Accept header application/vnd.adobe.xed-full+json; version=1, which returns the expanded form of the schema. Note that when retrieving a specific resource from the Schema Registry, the request's Accept header must include the major version of the resource in question.
curl -X GET \
https://platform.adobe.io/data/foundation/schemaregistry/tenant/schemas/_{TENANT_ID}.schemas.26f6833e55db1dd8308aa07a64f2042d \
-H 'Accept: application/vnd.adobe.xed-full+json; version=1' \
-H 'Authorization: Bearer {ACCESS_TOKEN}' \
-H 'x-api-key: {API_KEY}' \
-H 'x-gw-ims-org-id: {ORG_ID}' \
-H 'x-sandbox-name: {SANDBOX_NAME}' \
Response
A successful response returns the details of the schema, including all fields nested under properties.
{
"$id": "https://ns.adobe.com/{TENANT_ID}/schemas/26f6833e55db1dd8308aa07a64f2042d",
"meta:altId": "_{TENANT_ID}.schemas.26f6833e55db1dd8308aa07a64f2042d",
"meta:resourceType": "schemas",
"version": "1.0",
"title": "New Schema",
"description": "New schema description.",
"type": "object",
"meta:datasetNamespace": "_6395cbd58812a6d64c4e5344f7b9120f",
"meta:class": "https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f",
"meta:abstract": false,
"meta:extensible": false,
"meta:extends": [
"https://ns.adobe.com/{TENANT_ID}/classes/6395cbd58812a6d64c4e5344f7b9120f",
"https://ns.adobe.com/xdm/data/adhoc"
],
"meta:containerId": "tenant",
"imsOrg": "{ORG_ID}",
"meta:xdmType": "object",
"properties": {
"_6395cbd58812a6d64c4e5344f7b9120f": {
"type": "object",
"meta:xdmType": "object",
"properties": {
"field1": {
"type": "string",
"meta:xdmType": "string"
},
"field2": {
"type": "string",
"meta:xdmType": "string"
}
}
}
},
"meta:registryMetadata": {
"repo:createdDate": 1557528570542,
"repo:lastModifiedDate": 1557528570542,
"xdm:createdClientId": "{CREATED_CLIENT}",
"xdm:lastModifiedClientId": "{MODIFIED_CLIENT}",
"eTag": "bTogM1ON2LO/F7rlcc1iOWmNVy0="
}
}
Next steps
By following this tutorial, you have successfully created a new ad-hoc schema. If you were brought to this document as part of another tutorial, you can now use the $id of your ad-hoc schema to complete that workflow as directed.
For more information on working with the Schema Registry API, see the developer guide.
Relative risk
From Wikipedia, the free encyclopedia
In statistics and mathematical epidemiology, relative risk (RR) is the risk of an event (or of developing a disease) relative to exposure. Relative risk is a ratio of the probability of the event occurring in the exposed group versus a non-exposed group.[1]
RR= \frac {p_\text{exposed}}{p_\text{non-exposed}}
Consider an example where the probability of developing lung cancer among smokers was 20% and among non-smokers 1%. This situation is expressed in the 2 × 2 table to the right.
Risk | Disease present | Disease absent
Smokers | a | b
Non-smokers | c | d
Here, a = 20(%), b = 80, c = 1, and d = 99. Then the relative risk of cancer associated with smoking would be
RR=\frac {a/(a+b)}{c/(c+d)} = \frac {20/100}{1/100} = 20.
Smokers would be twenty times as likely as non-smokers to develop lung cancer.
Another term for the relative risk is the risk ratio because it is the ratio of the risk in the exposed divided by the risk in the unexposed.
Statistical use and meaning
Relative risk is used frequently in the statistical analysis of binary outcomes where the outcome of interest has relatively low probability. It is thus often suited to clinical trial data, where it is used to compare the risk of developing a disease, in people not receiving the new medical treatment (or receiving a placebo) versus people who are receiving an established (standard of care) treatment. Alternatively, it is used to compare the risk of developing a side effect in people receiving a drug as compared to the people who are not receiving the treatment (or receiving a placebo). It is particularly attractive because it can be calculated by hand in the simple case, but is also susceptible to regression modelling, typically in a Poisson regression framework.
In a simple comparison between an experimental group and a control group:
• A relative risk of 1 means there is no difference in risk between the two groups.
• An RR of < 1 means the event is less likely to occur in the experimental group than in the control group.
• An RR of > 1 means the event is more likely to occur in the experimental group than in the control group.
As a consequence of the Delta method, the log of the relative risk has a sampling distribution that is approximately normal with variance that can be estimated by a formula involving the number of subjects in each group and the event rates in each group (see Delta method) [2]. This permits the construction of a confidence interval (CI) which is symmetric around log(RR), i.e.,
CI = \log(RR)\pm \mathrm{SE}\times z_\alpha
where zα is the standard score for the chosen level of significance and SE the standard error. The antilog can be taken of the two bounds of the log-CI, giving the high and low bounds for an asymmetric confidence interval around the relative risk.
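For the smoking example above (a = 20, b = 80, c = 1, d = 99), the delta-method interval can be computed directly; a short Python sketch of the calculation, using z = 1.96 for a 95% interval:

```python
import math

a, b, c, d = 20, 80, 1, 99          # 2 x 2 table from the smoking example
rr = (a / (a + b)) / (c / (c + d))  # relative risk = 20.0

# Standard error of log(RR) via the delta method.
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))

z = 1.96                            # standard score for a 95% CI
ci_low = math.exp(math.log(rr) - z * se)
ci_high = math.exp(math.log(rr) + z * se)
print(f"RR = {rr:.1f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

The wide interval reflects the single unexposed case in the table.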
In regression models, the treatment is typically included as a dummy variable along with other factors that may affect risk. The relative risk is normally reported as calculated for the mean of the sample values of the explanatory variables.
Association with odds ratio
Relative risk is different from the odds ratio, although it asymptotically approaches it for small probabilities. In the example of the association of smoking with lung cancer considered above, if a is substantially smaller than b, then a/(a + b) \approx a/b. Similarly, if c is much smaller than d, then c/(c + d) \approx c/d. Thus
RR=\frac {a/(a+b)}{c/(c+d)} \approx \frac {ad}{bc}
This is nothing else but the odds ratio.
In fact, the odds ratio has much wider use in statistics, since logistic regression, often associated with clinical trials, works with the log of the odds ratio, not relative risk. Because the log of the odds ratio is estimated as a linear function of the explanatory variables, the estimated odds ratio for 70-year-olds and 60-year-olds associated with type of treatment would be the same in a logistic regression models where the outcome is associated with drug and age, although the relative risk might be significantly different. In cases like this, statistical models of the odds ratio often reflect the underlying mechanisms more effectively.
Since relative risk is a more intuitive measure of effectiveness, the distinction is important especially in cases of medium to high probabilities. If action A carries a risk of 99.9% and action B a risk of 99.0% then the relative risk is just over 1, while the odds associated with action A are almost 10 times higher than the odds with B.
In medical research, the odds ratio is favoured for case-control studies and retrospective studies. Relative risk is used in randomized controlled trials and cohort studies.[3]
In statistical modelling, approaches like Poisson regression (for counts of events per unit exposure) have relative risk interpretations: the estimated effect of an explanatory variable is multiplicative on the rate, and thus leads to a risk ratio or relative risk. Logistic regression (for binary outcomes, or counts of successes out of a number of trials) must be interpreted in odds-ratio terms: the effect of an explanatory variable is multiplicative on the odds and thus leads to an odds ratio.
Statistical significance (confidence) and relative risk
Whether a given relative risk can be considered statistically significant is dependent on the relative difference between the conditions compared, the amount of measurement and the noise associated with the measurement (of the events considered). In other words, the confidence one has, in a given relative risk being non-random (i.e. it is not a consequence of chance), depends on the signal-to-noise ratio and the sample size.
Expressed mathematically, the confidence that a result is not by random chance is given by the following formula by Sackett[4]:
\text{confidence} = \frac{\text{signal}}{\text{noise}} \times \sqrt{\text{sample size}}.
For clarity, the above formula is presented in tabular form below.
Dependence of confidence with noise, signal and sample size (tabular form)
Parameter Parameter increases Parameter decreases
Noise Confidence decreases Confidence increases
Signal Confidence increases Confidence decreases
Sample size Confidence increases Confidence decreases
In words, the confidence is higher if the noise is lower and/or the sample size is larger and/or the effect size (signal) is increased. The confidence of a relative risk value (and its associated confidence interval) is not dependent on effect size alone. If the sample size is large and the noise is low a small effect size can be measured with great confidence. Whether a small effect size is considered important is dependent on the context of the events compared.
In medicine, small effect sizes (reflected by small relative risk values) are usually considered clinically relevant (if there is great confidence in them) and are frequently used to guide treatment decisions. A relative risk of 1.10 may seem very small, but over a large number of patients will make a noticeable difference. Whether a given treatment is considered a worthy endeavour is dependent on the risks, benefits and costs.
Worked example
Example 1: risk reduction

                       Experimental group (E)       Control group (C)         Total
  Events (E)           EE = 15                      CE = 100                  115
  Non-events (N)       EN = 135                     CN = 150                  285
  Total subjects (S)   ES = EE + EN = 150           CS = CE + CN = 250        400
  Event rate (ER)      EER = EE / ES = 0.1 (10%)    CER = CE / CS = 0.4 (40%)

Example 2: risk increase

                       Experimental group (E)       Control group (C)
  Events (E)           EE = 75                      CE = 100
  Non-events (N)       EN = 75                      CN = 150
  Total subjects (S)   ES = 150                     CS = 250
  Event rate (ER)      EER = 0.5 (50%)              CER = 0.4 (40%)
  Equation                   Variable                        Abbr.   Example 1      Example 2
  EER − CER                  absolute risk reduction (< 0)   ARR     −0.3 (−30%)    N/A
                             absolute risk increase (> 0)    ARI     N/A            0.1 (10%)
  (EER − CER) / CER          relative risk reduction (< 0)   RRR     −0.75 (−75%)   N/A
                             relative risk increase (> 0)    RRI     N/A            0.25 (25%)
  1 / (EER − CER)            number needed to treat (< 0)    NNT     −3.33          N/A
                             number needed to harm (> 0)     NNH     N/A            10
  EER / CER                  relative risk                   RR      0.25           1.25
  (EE / EN) / (CE / CN)      odds ratio                      OR      0.167          1.5
  EE/(EE+CE) − EN/(EN+CN)    attributable risk               AR      −0.34 (−34%)   0.095 (9.5%)
  (RR − 1) / RR              attributable risk percent       ARP     N/A            20%
  1 − RR (or 1 − OR)         preventive fraction             PF      0.75 (75%)     N/A
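All of these measures can be derived directly from the four cell counts of the 2 × 2 table; here is a minimal Python sketch (the function and dictionary-key names are my own):

```python
def risk_measures(EE, EN, CE, CN):
    """Compute the standard epidemiological measures from 2x2 trial counts."""
    ES, CS = EE + EN, CE + CN          # group totals
    EER, CER = EE / ES, CE / CS        # event rates
    return {
        "ARR/ARI": EER - CER,                   # absolute risk reduction (<0) or increase (>0)
        "RRR/RRI": (EER - CER) / CER,           # relative risk reduction or increase
        "NNT/NNH": 1 / (EER - CER),             # number needed to treat or harm
        "RR": EER / CER,                        # relative risk
        "OR": (EE / EN) / (CE / CN),            # odds ratio
        "AR": EE / (EE + CE) - EN / (EN + CN),  # attributable risk
    }

ex1 = risk_measures(EE=15, EN=135, CE=100, CN=150)  # Example 1: risk reduction
ex2 = risk_measures(EE=75, EN=75,  CE=100, CN=150)  # Example 2: risk increase
print(round(ex1["RR"], 2), round(ex1["OR"], 3))     # 0.25 0.167
print(round(ex2["RR"], 2), round(ex2["OR"], 1))     # 1.25 1.5
```

Running it reproduces the values in the table above, including NNT = −3.33 for Example 1 and NNH = 10 for Example 2.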
• Example 3: ratios are presented for each of the experimental and control groups. In the disease-risk 2 × 2 table above, suppose a + c = 1 and b + d = 1, and let the total numbers of patients and healthy people be m and n, respectively. Then the prevalence becomes p = m/(m + n). We can put q = m/n = p/(1 − p). Thus
RR = \frac{am/(am+bn)}{cm/(cm+dn)} = \frac{a(d+cq)}{c(b+aq)} = \frac{ad\left\{1+(c/d)q\right\}}{bc\left\{1+(a/b)q\right\}}.
If p is small enough, then q will be small enough, and both (c/d)q and (a/b)q will be small enough to be regarded as 0 compared with 1. RR then reduces to the odds ratio ad/bc, as above.
Among Japanese, a substantial fraction of patients with Behçet's disease carry a specific HLA type, namely the HLA-B51 gene.[5] In one survey, 63% of patients carry this gene, while among healthy people the proportion is 21%.[5] If these figures are taken as representative for most Japanese, then using the values of 12,700 patients in Japan in 1984 and a Japanese population of about 120 million in 1982, RR = 6.40. Compare this with the odds ratio of 6.41.
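A quick numeric check of Example 3 (a sketch; I approximate n by the whole population of 120 million, since subtracting the 12,700 patients changes nothing at this precision):

```python
# Gene-carriage fractions among patients (a, c) and healthy people (b, d),
# and head counts m (patients) and n (healthy), from the text above.
a, b = 0.63, 0.21
c, d = 1 - a, 1 - b
m, n = 12_700, 120_000_000

rr = (a * m / (a * m + b * n)) / (c * m / (c * m + d * n))  # risk ratio from counts
odds_ratio = (a * d) / (b * c)
print(round(rr, 2), round(odds_ratio, 2))  # 6.4 6.41
```

Because the prevalence (and hence q) is tiny here, the relative risk and the odds ratio agree to two decimal places, exactly as the derivation predicts.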
See also
Statistical ratios
References
1. ^ Sistrom CL, Garvan CW (January 2004). "Proportions, odds, and risk". Radiology 230 (1): 12–9. doi:10.1148/radiol.2301031028. PMID 14695382. http://radiology.rsnajnls.org/cgi/pmidlookup?view=long&pmid=14695382.
2. ^ See e.g. Stata FAQ on CIs for odds ratios, hazard ratios, IRRs and RRRs at http://www.stata.com/support/faqs/stat/2deltameth.html
3. ^ Medical University of South Carolina. Odds ratio versus relative risk. Accessed on: September 8, 2005.
4. ^ Sackett DL. Why randomized controlled trials fail but needn't: 2. Failure to employ physiological statistics, or the only formula a clinician-trialist is ever likely to need (or understand!). CMAJ. 2001 Oct 30;165(9):1226-37. PMID 11706914. Free Full Text.
5. ^ a b Ohno S, Ohguchi M, Hirose S, Matsuda H, Wakisaka A, Aizawa M (1982). "Close association of HLA-BW51, MT2 and Behçet's disease," In Inaba, G, ed. (1982). Behçet's Disease : Pathogenetic Mechanism and Clinical Future: Proceedings of the International Conference on Behçet's Disease, held October 23–24, 1981, pp. 73–79, Tokyo: University of Tokyo Press, ISBN 0860083225.
Clutchy — 1 year ago
C Question
C - STM32f4 error on function call when initializing GPIO
I have written a program to initialize GPIO for the STM32F4, but I get errors when I try to build this code:
Include ST's header:
#include "stm32f4_discovery.h"
Defining starting adresses of GPIO:
#define GPIOA ((struct GPIO *) 0x40020000)
#define GPIOB ((struct GPIO *) 0x40020400)
#define GPIOC ((struct GPIO *) 0x40020800)
#define GPIOD ((struct GPIO *) 0x40020C00)
#define GPIOE ((struct GPIO *) 0x40021000)
#define GPIOF ((struct GPIO *) 0x40021400)
#define GPIOG ((struct GPIO *) 0x40021800)
#define GPIOH ((struct GPIO *) 0x40021C00)
#define GPIOI ((struct GPIO *) 0x40022000)
Reset and Clock Control:
#define RCC ((uint32_t *) 0x40023830)
#define IN uint8_t 0
#define OUT uint8_t 1
#define NO_PULL uint8_t 0
#define PULL_UP uint8_t 1
#define PULL_DOWN uint8_t 2
#define PUSH_PULL uint8_t 0
#define OPEN_DRAIN uint8_t 1
#define S2MHz uint8_t 0
#define S25MHz uint8_t 1
#define S50MHz uint8_t 2
#define S100MHz uint8_t 3
Basic GPIO struct:
struct GPIO {
uint32_t MODER;
uint32_t TYPER;
uint32_t OSPEEDR;
uint32_t PUPDR;
uint32_t IDR;
uint32_t ODR;
uint16_t BSSR_SET;
uint16_t BSSR_RESET;
};
void Init_GPIO(struct GPIO *GPIO_Type, uint32_t GPIO_Mode, uint8_t in_out, uint8_t pull, uint8_t push_pull, uint8_t freq) {
// Set MODER:
if (in_out) {
GPIO_Type->MODER |= (1 << GPIO_Mode);
GPIO_Type->MODER &= ~(2 << GPIO_Mode);
}
else {
GPIO_Type->MODER &= ~(3 << GPIO_Mode);
}
// Set PUPDR:
if (!pull) {
GPIO_Type->PUPDR &= ~(3 << GPIO_Mode);
}
else if (pull == 1) {
GPIO_Type->PUPDR |= (1 << GPIO_Mode);
GPIO_Type->PUPDR &= ~(2 << GPIO_Mode);
}
else if (pull == 2) {
GPIO_Type->PUPDR |= (2 << GPIO_Mode);
GPIO_Type->PUPDR &= ~(1 << GPIO_Mode);
}
// Set TYPER:
if (push_pull) {
GPIO_Type->TYPER &= ~(1 << GPIO_Mode);
}
else {
GPIO_Type->TYPER |= (1 << GPIO_Mode);
}
// Set OSPEEDR:
if (!freq) {
GPIO_Type->OSPEEDR &= ~(3 << GPIO_Mode);
}
else if (freq == 1) {
GPIO_Type->OSPEEDR |= (1 << GPIO_Mode);
GPIO_Type->OSPEEDR &= (2 << GPIO_Mode);
}
else if (freq == 2) {
GPIO_Type->OSPEEDR |= (2 << GPIO_Mode);
GPIO_Type->OSPEEDR &= ~(1 << GPIO_Mode);
}
else {
GPIO_Type->OSPEEDR &= (3 << GPIO_Mode);
}
}
/**
* @brief Main program
* @param None
* @retval None
*/
int main(void)
{
Init_GPIO(GPIOD, 12, OUT, NO_PULL, PUSH_PULL, S2MHz);
Init_GPIO(GPIOA, 0, IN, NO_PULL, PUSH_PULL, S2MHz);
while (1) {
}
}
I get the following errors, which refer to the Init_GPIO function call:
Error[Pe254]: type name is not allowed C:\Users\..\main.c 93
Error[Pe165]: too few arguments in function call C:\Users\..\main.c 93
Error[Pe018]: expected a ")" C:\Users\..\main.c 93
Error[Pe254]: type name is not allowed C:\Users\..\main.c 94
Error[Pe165]: too few arguments in function call C:\Users\..\main.c 94
Error[Pe018]: expected a ")" C:\Users\..\main.c 94
Error while running C/C++ Compiler
Answer
Your macros IN, OUT etc. are incorrectly defined and when expanded in the calls to Init_GPIO() make no syntactic sense.
Change:
#define IN uint8_t 0
to
#define IN ((uint8_t)0)
for example, and similarly for the other similarly defined macros.
Alternatively, use the ST Standard Peripheral Library, where GPIO initialisation and symbols, types and macros similar to those you have defined are already correctly defined. An example of using the standard peripheral library to access GPIO can be found here. The author refers to it as the CMSIS Library, but strictly it is merely CMSIS-compliant and STM32-specific. Other examples are included with the library itself. The library is likely included with your toolchain (which appears to be IAR), but can be downloaded from ST here. You can probably ignore the notes about the library being superseded by STM32Cube, unless you want that kind of hand-holding and code bloat and are never likely to want to port your code to non-STM32 platforms.
Barbarian Meets Coding
WebDev, UX & a Pinch of Fantasy
RavenDB
RavenDB is an open-source document database written in .NET. The main focus of RavenDB is to allow developers to build high-performance, low-latency applications quickly and efficiently.
Data is stored as schemaless JSON and can be retrieved efficiently using LINQ via the RavenDB .NET API, or using the RESTful API over HTTP. Internally, RavenDB uses indexes that are created automatically based on your usage, or created explicitly by the developer.
RavenDB, as is common among NoSQL databases, is built for web scale and supports replication and sharding out of the box.
Introduction to RavenDB
Why use NoSQL/RavenDB instead of a RDBMS?
• High developer productivity
• Schemaless
• Made to scale
• Impedance mismatch between OOD/Relational model
ORM is the Vietnam of Computer Science. It represents a quagmire which starts well, gets more complicated as time passes, and before long entraps its users in a commitment that has no clear demarcation point, no clear win conditions, and no clear exit strategy
RavenDB
RavenDB is a Document Database which has the following characteristics:
• Every document must have an id
• Documents are stored as JSON
• It is schemaless, the database doesn’t enforce any schema
• Reads are always fast (high read availability)
• Self-optimization based on usage. RavenDB creates and manages indexes automatically and batches certain operations depending on usage.
• Multiple deployment options, client/server or embedded
• .NET and REST APIs. REST APIs is a first class citizen
• Safe by default
• Your data is always safe via transactions
• RavenDB protects you from misusing it: it limits the number of database calls preventing N+1, and it protects you against unbounded result sets putting a 128 result cap
Basic Concepts
RavenDB uses the concepts of documents and collections to store and query for data.
Documents
The unit of storage in RavenDB is the document, it represents any piece of data (usually a C# POCO - Plain Old CLR Object - or a whole aggregate) serialized as a JSON formatted string. This allows RavenDB to easily store complex object graphs that match very well Object Oriented Programming. Each document is assigned a unique identifier.
One interesting feature related to the use of documents that differentiates RavenDB from ORMs is that RavenDB works perfectly with plain POCOs, with no need for attributes or configuration. When given an object graph, RavenDB will serialize it to JSON and persist it whole; this, as we will see in some examples later, can be a big mind shift away from RDBMSs and data normalization.
Collections
to be continued
Written by Jaime González García — Dad, Husband, Front-end software engineer, UX designer, amateur pixel artist, tinkerer and master of the arcane arts. You should follow him on Twitter where he shares useful stuff! (and is funny too).
Validate Method
Persistate API documentation
[This is preliminary documentation and is subject to change.]
Allows inheritors to create a validation method for Persistent classes.
Declaration Syntax
C#
public virtual string Validate()
Return Value
If validation passes, this should return null. If validation fails, this should return the text of any error message to be shown. By default, this returns null.
Assembly: Persistate (Module: Persistate) Version: 0.6.1.20 (0.6.1.20)
Web content best practices
In this section:
• Learn how to make your site search-engine friendly
• Learn how to make your content accessible
• Other web best practices
Making your content search engine friendly
Much of your website traffic will come from search engines. To ensure your website is findable, you need to write and mark up your content in a way that allows the search engine to present the most relevant results. Put another way, search engine friendly content will get more relevant traffic to your site.
What makes a site search engine friendly? Two things:
• Relevant content.
• A site built to standards, in other words, in an accessible way.
Search engine friendliness and accessibility are very closely related - you can think of Google as just another blind user.
SilverStripe CMS automatically does a number of things that make your site search-engine friendly and accessible. As a website editor, you don't have to concern yourself with the code side of things. However, there is much you can do to when it comes to your content:
• Post relevant content. This may sound obvious, but it's the most important thing you can do as an editor. If your content is relevant to your users, your site will rank higher for the search terms they are using.
• If it's important to you to rank highly for specific phrases, it's key to mention those phrases literally in the first paragraph of the relevant pages on your site. The absolute best way to rank number one is to have dedicated pages for these phrases, with the phrases in the titles of those pages.
• Use the CMS tools to properly mark up your content.
• Ensure correct spelling across all your content - typos make the site rank lower.
• Ensure there are no broken links in your site - again, broken links will make your site rank lower.
• Provide text alternatives for media content, such as ALT text for your images, since search engines can't see images, just their descriptions.
URLs, page names, titles and navigation labels
When you first create a new page, start by entering its name in the Page name field on the "Content" tab. SilverStripe CMS automatically populates a number of other fields based on that name. You can leave them as is, or change them individually.
Why should you care? It's useful to know where and how the different names are displayed.
Page names
Meta titles
• The Page name is what generates the main headline (the <h1> tag) for the page's content.
• The Navigation label is what appears in your site's navigation. Sometimes when you have a lengthy page name, it makes sense to create a shortened navigation label.
• The URL gets generated based on the page name, using its words joined with dashes. Human-readable URLs make a page more easily found by search engines. Most of the time, the URL that SilverStripe CMS generates will be fine, but you can manually change it if necessary.
Notes:
Your website developer will have configured your SilverStripe site for either simple or hierarchical URLs.
Simple URLs only use a single level of depth. For example, a page for a staff member might be called "John Smith", and its URL would be http://website.com/john-smith. Simple URLs are short and memorable, however, you are more likely to have multiple pages with the same name.
If a URL is already in use, the CMS will generate URLs with numbers, e.g., /staff-members-1, /staff-members-2, etc.
In this case, it's a good idea to manually change the URLs to something more meaningful, such as /staff-members-berlin, /staff-members-hong-kong.
Hierarchical URLs provide a logical path for a page as it exists in the site's structure. In our example, this might be http://website.com/offices/new-york/staff/john-smith
Meta tags
Meta tags also make your web page more findable. The Description meta attribute should contain a concise and relevant summary of what the page contains. This will show in search engine results, and helps visitors understand the content of the page.
Notes:
The meta fields for title and keywords were removed in v3.1. Keywords were removed after an official Google press release confirmed that Google no longer uses the keywords tag (see link).
Google doesn't like repetition of keywords and phrases in the description. It sees this as 'keyword stuffing', which is regarded as search engine spam (not good!). Avoid this.
Clean HTML
SilverStripe CMS generates clean HTML code when you type your content into the CMS. However, often you already have content in another format, such as Microsoft Word, which you need to simply transfer into the CMS.
Avoid cutting and pasting directly from a word processor with the standard cut and paste functions. Always "Paste from Word" if using Microsoft Word, or "Paste as Text" if using another word processor. Word processors tend to do poor jobs of creating web markup code and often insert extraneous code which make your site less search-engine friendly and accessible.
Making your content accessible
Why does accessibility matter?
An accessible website means that it can be viewed by the widest audience possible. Accessibility not only refers to people with physical disabilities (such as blind users), but also people with cognitive, learning or motor-skills disabilities, and people who access your site with mobile devices or old, outdated technology. Lastly, as mentioned previously, search engines can be considered disabled users in the sense that they can't see your design or images or interact with your site.
Accessibility is important for a number of reasons:
• Ethical — being inclusive is the right thing to do and has a positive impact on how your audience perceives you or your organisation.
• Business — sites that can be used by everyone have a larger audience. They are more findable and therefore generate more traffic/business. Accessible sites are also easier to maintain, resulting in fewer ongoing costs.
• Legal — many governments require websites to comply with certain standards. Details depend on your country; however, the guidelines are typically based on the Web Content Accessibility Guidelines developed by the World Wide Web Consortium (W3C).
What can content editors do?
Accessibility is not something you implement once and then have it. There is an infinite number of types of disability, platforms, devices, configurations, so you have to choose to what level you want to comply.
The more you know about your users, the better. Web stats, such as Google Analytics, can help you find out about your users' most frequently used browsers and operating systems. User research can help to learn more about how users with disabilities interact with the site - but that's costly and often not an option. Automated tools can help and are a useful first step as they look for all the obvious issues and generate a list of problems. However, it still requires a person to assess and interpret the results and make decisions about items flagged by the automated tools. For example, you can use an automated tool to check whether all your images have alternative text, but you as the editor of your site need to determine if the text in the alt attribute is descriptive and appropriate.
The following paragraphs describe some of the quick wins—things you can do easily to greatly increase the accessibility of your site.
Alternative text for images
When embedding images in your content, always provide alternative text that can serve as a placeholder in case the image itself cannot be displayed. Alternative text is often referred to as the "ALT tag", although that's not technically correct :-). Alternative text is important for those who cannot see the actual image, such as vision impaired users, people with text-only browsers, or search engine, and it should be meaningful and describe what the image shows.
In addition, you may also want to add title text for your image. Title text is for additional information about your image, such as the name of the photographer, or the date when it was taken. The title text appears as a tooltip when the user hovers over the image.
Alt text for images
Headings and lists
Mark up your headings by selecting the right heading style from the Format dropdown. Using headings properly gives your content hierarchy and, for example, allows users with screen readers to skip ahead to the next heading. Note that Heading 1 will be the title of your page, and all lower-level headings should be nested properly (e.g., Heading 3 should be within a Heading 2 section, etc.)
Headings
If your content uses lists, select either bullets (unordered list) or numbers (ordered list) - don't use dashes or asterisks to mark up lists.
When you insert a link, make sure your link text (the part of your content that's clickable) is meaningful and relates to the page you're linking to. Don't use "click here" or "read more" as they don't tell the user where your link is going. Think about what would make sense for someone who has the link text read out to them by a screen reader.
Alternative formats for media
If your site uses other media, such as Flash, audio or video content, document attachments or animations, make sure this content is accessible to users who can't see or hear it, or who don't have the software to view it. How exactly you do this will depend on the kind of content and how critical it is to your site and your users. Typical alternatives would be transcripts, summaries (for example, of PDF documents), captions (for videos), or text-only versions.
Tables and charts
If your site uses graphs to convey information, also provide text-based summaries that describe the information displayed in the graphs.
For tables, make sure you mark them up so that screen readers can interpret them correctly (for example, mark header rows/columns as such.) It's also a good idea to provide a text-based summary of the information contained in the table.
Further reading: Introduction to Web Accessibility (external link)
Other best practices
Image dimensions
When you embed images in your content, always specify width and height. This allows the browser to start rendering the page before downloading the actual image, which speeds up page loading. SilverStripe CMS prepopulates the image dimensions automatically, so unless you want to change them, you don't have to do anything special.
Custom error pages
Sometimes your users will get a Page not Found error because they clicked on an outdated link or misspelled the URL of the page they were looking for. The "Page not Found" error is also known as a "404 error."
It's a good idea to create a custom 404 page for your site. On this page, you can refer your users to key pages on your site, encourage them to use search or a sitemap, and give them information on how to contact you about problems with the website.
To create a custom error page, click Add new in the Pages pane and select Error Page. In the Content tab, select Error code 404 - not found, then create your content as for any other page. Note that you can also create custom pages for other errors, but Page not found is by far the most common one that your users will encounter so you can usually safely ignore the other options.
Error page
E-mail Phishing Attacks
What is Phishing?
Phishing is the term given to communications, usually email, where the attacker tries to fool the target into revealing private information about themselves or their organization. Often claiming to be from IT Solutions, the attacker will usually include in the email hyperlinks to malicious websites that appear legitimate, or malicious attachments. Simply visiting the URL or downloading the attachment can be enough to compromise your machine.
How does Phishing affect me?
If your account is compromised, attackers will often start sending out more phishing emails from your account. This could damage your reputation and decrease the trust others place in your future emails.
Additionally, attackers may be able to compromise any other accounts attributed to that email address. This could include bank accounts, social networking accounts, file backup, remote connection to your computer, and so on.
How can I tell if it’s phishing?
Attackers will often use a sense of urgency in their messages. For example:
• “Your account will be disabled unless you act now!!!”
• “Please update your account information or your account will be terminated”
• “Your mailbox storage has exceeded the quota”
• Often, they will include a link or attachment. These will usually, although not always, require some action on your part, such as selecting or downloading, to be successful.
• Spelling and grammatical errors are very common among phishing communications. Many of the attacks originate from overseas or from people who don’t speak English as their first language. Although this is not always the case, treat emails with excessive grammatical errors with extra scrutiny.
How can I protect myself from this?
DON’T CLICK ON LINKS!
When you receive an email about your account, instead of clicking on the link, open your browser and manually type in the site address. Because you are going to the site yourself, you can be more confident that you are going to the right place. A bank or other financial institution should never be sending you emails with links in them.
Seriously, Don’t Click On the Links
Although a link in an email may appear to lead to a legitimate site, for example http://www.rctc.edu, it may in fact lead you to a compromised or malicious site like http://compromised.website.com/rctc/edu. Attackers can make a link's visible text look legitimate when you read it, while the hyperlink actually opens a different address. It is important to be aware of which website you are actually on in your browser.
Take a Minute to Verify
If you receive an email about your bank account being compromised, take the time to call your bank. If in fact your account is compromised, you will be able to get additional assistance over the phone. It is important to use the phone number found on your bankcard and not a phone number included in the email.
Trust Your Spam Filters
Modern spam filters are able to block messages based on trends. For example, if 10,000 Gmail accounts received the same email from the same address with the same link to reset your password, then Google’s spam filters are more likely to send that email directly to the spam folder.
How to report Phishing emails to Microsoft:
Click on the phishing scam message, select the down arrow next to Junk, and then select Phishing on the toolbar. Office 365 does not block the sender because senders of phishing scam messages typically impersonate legitimate senders. If you prefer, add the sender to your blocked senders.
|
__label__pos
| 0.537119 |
JavaFX Game (Connect Four)
This is my first JavaFX game tutorial and my first blog post about a JavaFX panel. I made this Connect Four game with just 200+ lines of code, just enough for a simple game. I used the GridPane panel here to lay out the disks. GridPane is one of the JavaFX layout panes, but it is different from the others because it lays out its children within a flexible grid of rows and columns.
Here is the code snippet showing how to set the GridPane's column and row constraints:
gridpane.getColumnConstraints().addAll(new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE));
gridpane.getRowConstraints().addAll(new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE));
This gives the GridPane a 4-row by 4-column grid of 100-pixel-square cells.
You can find the rest of the code below. Enjoy. 🙂
import javafx.animation.TranslateTransition;
import javafx.application.Application;
import javafx.beans.property.SimpleObjectProperty;
import javafx.event.EventHandler;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.effect.DropShadow;
import javafx.scene.effect.Reflection;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.scene.shape.Path;
import javafx.scene.shape.Rectangle;
import javafx.scene.shape.Shape;
import javafx.stage.Stage;
import javafx.util.Duration;
/**
*
* @author mark_anro
*/
public class Main extends Application {
/**
* @param args the command line arguments
*/
private SimpleObjectProperty<Color> playerColorProperty = new SimpleObjectProperty<Color>(Color.RED);
private int r;
private int c;
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage primaryStage) {
final BorderPane root = new BorderPane();
final GridPane gridpane = new GridPane();
primaryStage.setTitle("JavaFX Connect Four");
primaryStage.setResizable(true);
final Button addCellButton = new Button("Add Grids");
Scene scene = new Scene(root, 750, 680, true);
scene.setFill(Color.BLACK);
scene.getStylesheets().add("net/glyphsoft/styles.css");
gridpane.setTranslateY(20);
gridpane.setAlignment(Pos.CENTER);
gridpane.getColumnConstraints().addAll(new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE),
new ColumnConstraints(100,100,Double.MAX_VALUE));
gridpane.getRowConstraints().addAll(new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE),
new RowConstraints(100,100,Double.MAX_VALUE));
createGrids(gridpane);
root.setCenter(gridpane);
DropShadow effect = new DropShadow();
effect.setColor(Color.BLUE);
addCellButton.setEffect(effect);
addCellButton.setTranslateY(10);
addCellButton.setTranslateX(10);
root.setTop(addCellButton);
addCellButton.setOnMouseClicked(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
addGrid(gridpane);
}
});
primaryStage.setScene(scene);
primaryStage.setResizable(false);
primaryStage.show();
}
//Add Column and Row
private void addGrid(final GridPane gridpane){
gridpane.getColumnConstraints().addAll(new ColumnConstraints(100,100,Double.MAX_VALUE));
gridpane.getRowConstraints().addAll(new RowConstraints(100,100,Double.MAX_VALUE));
createGrids(gridpane);
}
//Create Grids
private void createGrids(final GridPane gridpane){
gridpane.getChildren().clear();
for(r=0; r<gridpane.getRowConstraints().size(); r++){
for(c=0; c<gridpane.getColumnConstraints().size(); c++){
Rectangle rect = new Rectangle(100,100);
Circle circ = new Circle(47);
circ.centerXProperty().set(50);
circ.centerYProperty().set(50);
Shape cell = Path.subtract(rect, circ);
cell.setFill(Color.BLUE);
cell.setStroke(Color.BLUE);
cell.setOpacity(.8);
DropShadow effect = new DropShadow();
effect.setSpread(.2);
effect.setRadius(25);
effect.setColor(Color.BLUE);
cell.setEffect(effect);
final Circle diskPreview = new Circle(40);
diskPreview.setOpacity(.5);
diskPreview.setFill(Color.TRANSPARENT);
diskPreview.setOnMouseEntered(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
diskPreview.setFill(Color.WHITE);
if(playerColorProperty.get()==Color.RED){
diskPreview.setFill(Color.RED);
}else{
diskPreview.setFill(Color.YELLOW);
}
}
});
diskPreview.setOnMouseExited(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
diskPreview.setFill(Color.TRANSPARENT);
}
});
final Circle disk = new Circle(40);
disk.fillProperty().bind(playerColorProperty);
disk.setOpacity(.5);
disk.setTranslateY(-(100*(r+1)));
final TranslateTransition translateTranstion = new TranslateTransition(Duration.millis(300), disk);
disk.setOnMouseEntered(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
diskPreview.setFill(Color.WHITE);
if(playerColorProperty.get()==Color.RED){
diskPreview.setFill(Color.RED);
}else{
diskPreview.setFill(Color.YELLOW);
}
}
});
disk.setOnMouseExited(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
diskPreview.setFill(Color.TRANSPARENT);
}
});
disk.setOnMouseClicked(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
if(disk.getTranslateY()!=0){
translateTranstion.setToY(0);
translateTranstion.play();
if(playerColorProperty.get()==Color.RED){
playerColorProperty.set(Color.YELLOW);
disk.fillProperty().bind(new SimpleObjectProperty<Color>(Color.RED));
}else{
playerColorProperty.set(Color.RED);
disk.fillProperty().bind(new SimpleObjectProperty<Color>(Color.YELLOW));
}
}
}
});
diskPreview.setOnMouseClicked(new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent arg0) {
if(disk.getTranslateY()!=0){
translateTranstion.setToY(0);
translateTranstion.play();
if(playerColorProperty.get()==Color.RED){
playerColorProperty.set(Color.YELLOW);
disk.fillProperty().bind(new SimpleObjectProperty<Color>(Color.RED));
}else{
playerColorProperty.set(Color.RED);
disk.fillProperty().bind(new SimpleObjectProperty<Color>(Color.YELLOW));
}
}
}
});
StackPane stack = new StackPane();
stack.getChildren().addAll(cell, diskPreview, disk);
gridpane.add(stack, c, r);
if(r==gridpane.getRowConstraints().size()-1){
stack.setEffect(new Reflection());
}
}
}
}
}
JavaFX 2 Loading Indicator
A simple way to create a loading indicator in JavaFX 2.
Code:
import javafx.animation.KeyFrame;
import javafx.animation.KeyValue;
import javafx.animation.Timeline;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.scene.Parent;
import javafx.scene.layout.StackPane;
import javafx.scene.layout.VBox;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.scene.text.Text;
import javafx.util.Duration;
public class LoadingIndicator extends Parent{
private Timeline timeline = new Timeline();
private DoubleProperty stroke = new SimpleDoubleProperty(100.0);
public LoadingIndicator(){
super();
timeline.setCycleCount(Timeline.INDEFINITE);
final KeyValue kv = new KeyValue(stroke, 0);
final KeyFrame kf = new KeyFrame(Duration.millis(1500), kv);
timeline.getKeyFrames().add(kf);
timeline.play();
VBox root = new VBox(3);
StackPane progressIndicator = new StackPane();
Rectangle bar = new Rectangle(350,13);
bar.setFill(Color.TRANSPARENT);
bar.setStroke(Color.WHITE);
bar.setArcHeight(15);
bar.setArcWidth(15);
bar.setStrokeWidth(2);
Rectangle progress = new Rectangle(342,6);
progress.setFill(Color.WHITE);
progress.setStroke(Color.WHITE);
progress.setArcHeight(8);
progress.setArcWidth(8);
progress.setStrokeWidth(1.5);
progress.getStrokeDashArray().addAll(3.0,7.0,3.0,7.0);
progress.strokeDashOffsetProperty().bind(stroke);
progressIndicator.getChildren().add(progress);
progressIndicator.getChildren().add(bar);
root.getChildren().add(progressIndicator);
Text label = new Text("Loading...");
label.setFill(Color.WHITE);
root.getChildren().add(label);
getChildren().add(root);
}
}
GWT and HTML5 Canvas Demo
This is my first experiment with GWT and the HTML5 Canvas. My first attempt was to create rectangles; with just a few lines of code I came up with something like this:
Code:
public class GwtHtml5 implements EntryPoint {
static final String canvasHolderId = "canvasholder";
static final String unsupportedBrowser = "Your browser does not support the HTML5 Canvas";
static final int height = 400;
static final int width = 500;
final CssColor colorRed = CssColor.make("red");
final CssColor colorGreen = CssColor.make("green");
final CssColor colorBlue = CssColor.make("blue");
Canvas canvas;
Context2d context;
public void onModuleLoad() {
canvas = Canvas.createIfSupported();
if (canvas == null) {
RootPanel.get(canvasHolderId).add(new Label(unsupportedBrowser));
return;
}
createCanvas();
}
private void createCanvas(){
canvas.setWidth(width + "px");
canvas.setHeight(height + "px");
canvas.setCoordinateSpaceWidth(width);
canvas.setCoordinateSpaceHeight(height);
RootPanel.get(canvasHolderId).add(canvas);
context = canvas.getContext2d();
context.beginPath();
context.setFillStyle(colorRed);
context.fillRect(100, 50, 100, 100);
context.setFillStyle(colorGreen);
context.fillRect(200, 150, 100, 100);
context.setFillStyle(colorBlue);
context.fillRect(300, 250, 100, 100);
context.closePath();
}
}
And here is my spring balls experiment, based on some code that I found on the Web.
JavaFX 2 Transition
An example of a faded Path in JavaFX. It has two canvases: the one on the left side shows a Fade Transition, and the one on the right shows a Fade Transition combined with a Stroke Transition.
FadeTransition ft = new FadeTransition(Duration.millis(5000), path);
ft.setFromValue(1.0);
ft.setToValue(0.0);
ft.play();
To change the stroke color of a shape simply use the Stroke Transition.
StrokeTransition sT = new StrokeTransition(Duration.millis(2000), path, Color.LIME, Color.YELLOW);
sT.play();
GWT Custom Button using UIBinder
Here’s an example on how to create a custom button using UIBinder on GWT.
public class GwtUIBinderButton implements EntryPoint {
public void onModuleLoad() {
Button button = new Button();
button.setText("Button");
button.addClickHandler(new ClickHandler(){
@Override
public void onClick(ClickEvent event) {
Window.alert("Button clicked");
}
});
RootPanel.get("container").add(button);
}
}
public class Button extends Composite implements HasText, HasClickHandlers, ClickHandler{
private static ButtonUiBinder uiBinder = GWT.create(ButtonUiBinder.class);
interface ButtonUiBinder extends UiBinder<Widget, Button> {
}
@UiField(provided=true)
FocusPanel pane = new FocusPanel();
@UiField(provided=true)
Label label = new Label();
public Button() {
pane.addClickHandler(this);
initWidget(uiBinder.createAndBindUi(this));
}
@Override
public HandlerRegistration addClickHandler(ClickHandler handler) {
return addHandler(handler, ClickEvent.getType());
}
@Override
public void onClick(ClickEvent event) {
this.fireEvent(event);
}
@Override
public String getText() {
return label.getText();
}
@Override
public void setText(String text) {
label.setText(text);
}
}
<!DOCTYPE ui:UiBinder SYSTEM "http://dl.google.com/gwt/DTD/xhtml.ent">
<ui:UiBinder xmlns:ui="urn:ui:com.google.gwt.uibinder"
xmlns:g="urn:import:com.google.gwt.user.client.ui">
<ui:style>
.button{
background-color: #eeeeee;
background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #eeeeee), color-stop(100%, #cccccc));
background-image: -webkit-linear-gradient(top, #eeeeee, #cccccc);
background-image: -moz-linear-gradient(top, #eeeeee, #cccccc);
background-image: -ms-linear-gradient(top, #eeeeee, #cccccc);
background-image: -o-linear-gradient(top, #eeeeee, #cccccc);
background-image: linear-gradient(top, #eeeeee, #cccccc);
border: 1px solid #ccc;
border-bottom: 1px solid #bbb;
-webkit-border-radius: 3px;
-moz-border-radius: 3px;
-ms-border-radius: 3px;
-o-border-radius: 3px;
border-radius: 3px;
color: #333;
font: bold 11px "Lucida Grande", "Lucida Sans Unicode", "Lucida Sans", Geneva, Verdana, sans-serif;
line-height: 1;
padding: 0px 0;
text-align: center;
text-shadow: 0 1px 0 #eee;
width: 120px;
}
.button:hover{
background-color: #dddddd;
background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #dddddd), color-stop(100%, #bbbbbb));
background-image: -webkit-linear-gradient(top, #dddddd, #bbbbbb);
background-image: -moz-linear-gradient(top, #dddddd, #bbbbbb);
background-image: -ms-linear-gradient(top, #dddddd, #bbbbbb);
background-image: -o-linear-gradient(top, #dddddd, #bbbbbb);
background-image: linear-gradient(top, #dddddd, #bbbbbb);
border: 1px solid #bbb;
border-bottom: 1px solid #999;
cursor: pointer;
text-shadow: 0 1px 0 #ddd;
}
.button:active{
border: 1px solid #aaa;
border-bottom: 1px solid #888;
-webkit-box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
-moz-box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
}
.pane{
text-align: center;
}
</ui:style>
<g:FocusPanel ui:field="pane" styleName="{style.button}">
<g:Label ui:field="label"></g:Label>
</g:FocusPanel>
</ui:UiBinder>
Adding an Image:
<g:FocusPanel ui:field="pane" styleName="{style.button}">
<g:HTMLPanel>
<table align="center">
<tr>
<td>
<g:Image styleName="{style.pane}" url="gwt-logo-42x42.png"></g:Image>
</td>
<td>
<g:Label ui:field="label"></g:Label>
</td>
</tr>
</table>
</g:HTMLPanel>
</g:FocusPanel>
JavaFX 2 Form
This is a simple JavaFX login form with a TRON-like effect.
In this example I am using CSS to style the TextField and Button. Here is the snippet of the CSS and Effect code:
.text-field{
-fx-background-color: transparent;
-fx-border-color: #00CCFF;
-fx-text-fill: white;
}
.password-field{
-fx-background-color: transparent;
-fx-border-color: #00CCFF;
-fx-text-fill: white;
}
.button{
-fx-background-color: transparent;
-fx-border-color: white;
-fx-background-radius: 30;
-fx-border-radius: 30;
-fx-text-fill: white;
-fx-font-weight: bold;
-fx-font-size: 14px;
-fx-padding: 10 20 10 20;
}
DropShadow effect = new DropShadow();
effect.setColor(color);
effect.setBlurType(BlurType.GAUSSIAN);
effect.setSpread(spread);
effect.setRadius(radius);
JavaFX 2 Custom Shapes
Sometimes using an Image will limit you in adding an effect to it, especially if you're working on animations or a game application. So, as an example, I tried creating playing card suit shapes using the Path class and the Path elements.
Here is the code snippet of the Spade I made:
Path spade = new Path();
spade.getElements().add(new MoveTo(25.0f, 0.0f));
spade.getElements().add(new LineTo(45.0f, 30.0f));
spade.getElements().add(QuadCurveToBuilder.create()
.controlX(40.0f)
.controlY(50.0f)
.x(27.0f)
.y(30.0f)
.build());
spade.getElements().add(QuadCurveToBuilder.create()
.controlX(28.0f)
.controlY(35.0f)
.x(35.0f)
.y(50.0f)
.build());
spade.getElements().add(new LineTo(15.0f, 50.0f));
spade.getElements().add(QuadCurveToBuilder.create()
.controlX(22.0f)
.controlY(35.0f)
.x(23.0f)
.y(30.0f)
.build());
spade.getElements().add(QuadCurveToBuilder.create()
.controlX(10.0f)
.controlY(50.0f)
.x(5.0f)
.y(30.0f).build());
spade.getElements().add(new LineTo(25.0f, 0.0f));
JavaFX 2.0 Neon Globe
This is my first JavaFX 2.0 example developed on a Mac, using the JavaFX 2.1 Developer Preview for Mac. As an example I created a spinning 3D neon globe. I played around a lot with the JavaFX Transforms and Shapes to create the neon objects.
It would be good if the JavaFX team included the other JavaFX 1.3 Shape variants, like ShapeIntersect, ShapeSubtract, and DelegateShape, in the next releases of JavaFX.
JavaFX 2.0 Shape Experiment using Circle
This example is my experiment using javafx.scene.shape.Circle in JavaFX.
The figures below use only three circles to create a 3D object. To change the visual appearance of the circles, you have to set the StrokeLineCap (the end cap styles are StrokeLineCap.BUTT, StrokeLineCap.ROUND, and StrokeLineCap.SQUARE), and you also need to set the StrokeDashArray by adding the lengths of the dash segments.
I posted a few lines of code below:
circle.setStrokeLineCap(StrokeLineCap.BUTT);
circle.getStrokeDashArray().addAll(1.0, 13.0);
circle.setStrokeWidth(215);
circle.setRadius(100);
Simple JavaFX 2.0 3D Object
A simple JavaFX 2.0 3D example with Path Transition and Timeline Animation.
public class Main extends Application {
private static final double WIDTH = 700, HEIGHT = 500;
private PathTransition pathBackFaceTransition;
private PathTransition pathFrontFaceTransition;
private Timeline animation;
private void init(Stage primaryStage) {
Group root = new Group();
primaryStage.setTitle("JavaFX 3D");
primaryStage.setResizable(false);
Scene scene = new Scene(root, WIDTH, HEIGHT, true);
scene.setFill(Color.BLACK);
primaryStage.setScene(scene);
PerspectiveCamera camera = new PerspectiveCamera();
Translate translate = new Translate(WIDTH / 2, HEIGHT / 2);
Rotate rotate = new Rotate(180, Rotate.Y_AXIS);
primaryStage.getScene().setCamera(camera);
root.getTransforms().addAll(translate, rotate);
Node node = create3dContent();
root.getChildren().add(node);
}
public void play(){
pathBackFaceTransition.play();
pathFrontFaceTransition.play();
}
@Override
public void stop(){
pathBackFaceTransition.stop();
pathFrontFaceTransition.stop();
}
public Node create3dContent() {
final Face cube = new Face(250);
cube.rx.setAngle(0);
cube.ry.setAngle(0);
cube.rz.setAngle(0);
cube.setOnMouseMoved(new EventHandler<MouseEvent>(){
@Override
public void handle(final MouseEvent arg0) {
animation = new Timeline();
animation.getKeyFrames().addAll(
new KeyFrame(new Duration(2000),
new KeyValue(cube.rx.angleProperty(),arg0.getY()),
new KeyValue(cube.ry.angleProperty(),-arg0.getX()),
new KeyValue(cube.rz.angleProperty(),arg0.getY())
));
animation.play();
}
});
return new Group(cube);
}
public class Face extends Group {
final Rotate rx = new Rotate(0, Rotate.X_AXIS);
final Rotate ry = new Rotate(0, Rotate.Y_AXIS);
final Rotate rz = new Rotate(0, Rotate.Z_AXIS);
RectangleBuilder frontFace;// front face
RectangleBuilder rightFace;// right face
RectangleBuilder leftFace;// left face
RectangleBuilder backFace;// back face
public Face(double size) {
Color[] colors = {Color.TRANSPARENT, Color.YELLOW};
backFace = RectangleBuilder.create()
.strokeDashArray(1.0,6.0)
.width(size)
.height(size)
.fill(colors[0])
.strokeWidth(2)
.stroke(Color.BLUE)
.translateX(-0.5 * size)
.translateY(-0.5 * size)
.translateZ(-0.5*size)
.rotationAxis(Rotate.Z_AXIS)
.rotate(45);
rightFace = RectangleBuilder.create()
.strokeDashArray(1.0,6.0)
.width(size)
.height(size)
.fill(colors[0])
.strokeWidth(2)
.stroke(Color.BLUE)
.translateX(-1 * size)
.translateY(-0.5 * size)
.rotationAxis(Rotate.Y_AXIS)
.rotate(90);
leftFace = RectangleBuilder.create()
.strokeDashArray(1.0,6.0)
.width(size)
.height(size)
.fill(colors[0])
.strokeWidth(2)
.stroke(Color.BLUE)
.translateX(0)
.translateY(-0.5 * size)
.rotationAxis(Rotate.Y_AXIS)
.rotate(90);
frontFace = RectangleBuilder.create()
.strokeDashArray(1.0,6.0)
.width(size)
.height(size)
.fill(colors[0])
.strokeWidth(2)
.stroke(Color.BLUE)
.translateX(-0.5 * size)
.translateY(-0.5 * size)
.translateZ(0.5*size)
.rotationAxis(Rotate.Z_AXIS)
.rotate(225);
Rectangle rectangleFrontFace = frontFace.build();
Rectangle rectangleRightFace = rightFace.build();
Rectangle rectangleLeftFace = leftFace.build();
Rectangle rectangleBackFace = backFace.build();
Bloom backFaceBloomEffect = new Bloom();
Circle backCircle = new Circle();
backCircle.setStrokeWidth(10);
backCircle.setRadius(10);
backCircle.setStrokeLineCap(StrokeLineCap.ROUND);
backCircle.setStroke(colors[1]);
backCircle.getStrokeDashArray().addAll(1.0, 20.0);
backCircle.setTranslateX(-0.5 * size);
backCircle.setTranslateY(-0.5 * size);
backCircle.setTranslateZ(-0.5 * size);
backCircle.setEffect(backFaceBloomEffect);
Bloom frontFaceBloomEffect = new Bloom();
Circle frontCircle = new Circle();
frontCircle.setStrokeWidth(10);
frontCircle.setRadius(10);
frontCircle.setStrokeLineCap(StrokeLineCap.ROUND);
frontCircle.setStroke(colors[1]);
frontCircle.getStrokeDashArray().addAll(1.0, 20.0);
frontCircle.setTranslateX(-0.5 * size);
frontCircle.setTranslateY(-0.5 * size);
frontCircle.setTranslateZ(0.5 * size);
frontCircle.setEffect(frontFaceBloomEffect);
pathBackFaceTransition = new PathTransition();
pathBackFaceTransition.setPath(rectangleBackFace);
pathBackFaceTransition.setNode(frontCircle);
pathBackFaceTransition.setDuration(Duration.seconds(4));
pathBackFaceTransition.setOrientation(OrientationType.ORTHOGONAL_TO_TANGENT);
pathBackFaceTransition.setCycleCount(Timeline.INDEFINITE);
pathFrontFaceTransition = new PathTransition();
pathFrontFaceTransition.setPath(rectangleFrontFace);
pathFrontFaceTransition.setNode(backCircle);
pathFrontFaceTransition.setDuration(Duration.seconds(4));
pathFrontFaceTransition.setOrientation(OrientationType.ORTHOGONAL_TO_TANGENT);
pathFrontFaceTransition.setCycleCount(Timeline.INDEFINITE);
getChildren().addAll(backCircle, frontCircle, rectangleBackFace, rectangleRightFace, rectangleLeftFace, rectangleFrontFace);
getTransforms().addAll(rz, ry, rx);
}
}
@Override
public void start(Stage primaryStage) throws Exception {
init(primaryStage);
primaryStage.show();
play();
}
public static void main(String[] args) {
launch(args);
}
}
Walkthrough: Creating a Windows Service Application in the Component Designer
This article demonstrates how to create a simple Windows Service application in Visual Studio that writes messages to an event log. Here are the basic steps that you perform to create and use your service:
1. Create the service by using the Windows Service project template, and configure it. This template creates a class for you that inherits from System.ServiceProcess.ServiceBase and writes much of the basic service code, such as the code to start the service.
2. Add features to the service by writing code for the OnStart and OnStop procedures, and override any other methods that you want to redefine.
3. Set the service status. By default, services created with System.ServiceProcess.ServiceBase implement only a subset of the available status flags. If your service takes a long time to start up, pause, or stop, you can implement status values such as Start Pending or Stop Pending to indicate that it's working on an operation.
4. Add installers to your service application.
5. (Optional) Set startup parameters: specify default startup arguments, and enable users to override default settings when they start your service manually.
6. Build the service.
7. Install the service on the local machine.
8. Open the Windows Service Control Manager, then start and run the service.
9. Uninstall the Windows service.
Warning
The Windows Services project template that is required for this walkthrough is not available in the Express edition of Visual Studio.
Note
Your computer might show different names or locations for some of the Visual Studio user interface elements in the following instructions. The Visual Studio edition that you have and the settings that you use determine these elements. For more information, see Personalizing the Visual Studio IDE.
To begin, you create the project and set values that are required for the service to function correctly.
To create and configure your service
1. In Visual Studio, on the menu bar, choose File, New, Project.
The New Project dialog box opens.
2. In the list of Visual Basic or Visual C# project templates, choose Windows Service, and name the project MyNewService. Choose OK.
The project template automatically adds a component class named Service1 that inherits from System.ServiceProcess.ServiceBase.
3. On the Edit menu, choose Find and Replace, Find in Files (Keyboard: Ctrl+Shift+F). Change all occurrences of Service1 to MyNewService. You’ll find instances in Service1.cs, Program.cs, and Service1.Designer.cs (or their .vb equivalents).
4. In the Properties window for Service1.cs [Design] or Service1.vb [Design], set the ServiceName and the (Name) property for Service1 to MyNewService, if it's not already set.
5. In Solution Explorer, rename Service1.cs to MyNewService.cs, or Service1.vb to MyNewService.vb.
In this section, you add a custom event log to the Windows service. Event logs are not associated in any way with Windows services. Here the EventLog component is used as an example of the type of component you could add to a Windows service.
To add custom event log functionality to your service
1. In Solution Explorer, open the context menu for MyNewService.cs or MyNewService.vb, and then choose View Designer.
2. From the Components section of the Toolbox, drag an EventLog component to the designer.
3. In Solution Explorer, open the context menu for MyNewService.cs or MyNewService.vb, and then choose View Code.
4. Add a declaration for the eventLog object in the MyNewService class, right after the line that declares the components variable:
private System.ComponentModel.IContainer components;
private System.Diagnostics.EventLog eventLog1;
5. Add or edit the constructor to define a custom event log:
public MyNewService()
{
InitializeComponent();
eventLog1 = new System.Diagnostics.EventLog();
if (!System.Diagnostics.EventLog.SourceExists("MySource"))
{
System.Diagnostics.EventLog.CreateEventSource(
"MySource","MyNewLog");
}
eventLog1.Source = "MySource";
eventLog1.Log = "MyNewLog";
}
To define what occurs when the service starts
• In the Code Editor, locate the OnStart method that was automatically overridden when you created the project, and replace the code with the following. This adds an entry to the event log when the service starts running:
protected override void OnStart(string[] args)
{
eventLog1.WriteEntry("In OnStart");
}
A service application is designed to be long-running, so it usually polls or monitors something in the system. The monitoring is set up in the OnStart method. However, OnStart doesn’t actually do the monitoring. The OnStart method must return to the operating system after the service's operation has begun. It must not loop forever or block. To set up a simple polling mechanism, you can use the System.Timers.Timer component as follows: In the OnStart method, set parameters on the component, and then set the Enabled property to true. The timer raises events in your code periodically, at which time your service could do its monitoring. You can use the following code to do this:
// Set up a timer to trigger every minute.
System.Timers.Timer timer = new System.Timers.Timer();
timer.Interval = 60000; // 60 seconds
timer.Elapsed += new System.Timers.ElapsedEventHandler(this.OnTimer);
timer.Start();
Add code to handle the timer event:
public void OnTimer(object sender, System.Timers.ElapsedEventArgs args)
{
    // TODO: Insert monitoring activities here.
    // eventId is an int counter field on the service class, e.g. private int eventId = 1;
    eventLog1.WriteEntry("Monitoring the System", EventLogEntryType.Information, eventId++);
}
You might want to perform tasks by using background worker threads instead of running all your work on the main thread. For an example of this, see the System.ServiceProcess.ServiceBase reference page.
To define what occurs when the service is stopped
• Replace the code for the OnStop method with the following. This adds an entry to the event log when the service is stopped:
protected override void OnStop()
{
eventLog1.WriteEntry("In onStop.");
}
In the next section, you can override the OnPause, OnContinue, and OnShutdown methods to define additional processing for your component.
To define other actions for the service
• Locate the method that you want to handle, and override it to define what you want to occur.
The following code shows how you can override the OnContinue method:
protected override void OnContinue()
{
eventLog1.WriteEntry("In OnContinue.");
}
Some custom actions have to occur when a Windows service is installed by the Installer class. Visual Studio can create these installers specifically for a Windows service and add them to your project.
Services report their status to the Service Control Manager, so that users can tell whether a service is functioning correctly. By default, services that inherit from ServiceBase report a limited set of status settings, including Stopped, Paused, and Running. If a service takes a little while to start up, it might be helpful to report a Start Pending status. You can also implement the Start Pending and Stop Pending status settings by adding code that calls into the Windows SetServiceStatus function.
To implement service pending status
1. Add a using statement or Imports declaration to the System.Runtime.InteropServices namespace in the MyNewService.cs or MyNewService.vb file:
using System.Runtime.InteropServices;
2. Add the following code to MyNewService.cs to declare the ServiceState values and to add a structure for the status, which you'll use in a platform invoke call:
public enum ServiceState
{
SERVICE_STOPPED = 0x00000001,
SERVICE_START_PENDING = 0x00000002,
SERVICE_STOP_PENDING = 0x00000003,
SERVICE_RUNNING = 0x00000004,
SERVICE_CONTINUE_PENDING = 0x00000005,
SERVICE_PAUSE_PENDING = 0x00000006,
SERVICE_PAUSED = 0x00000007,
}
[StructLayout(LayoutKind.Sequential)]
public struct ServiceStatus
{
public long dwServiceType;
public ServiceState dwCurrentState;
public long dwControlsAccepted;
public long dwWin32ExitCode;
public long dwServiceSpecificExitCode;
public long dwCheckPoint;
public long dwWaitHint;
};
3. Now, in the MyNewService class, declare the SetServiceStatus function by using platform invoke:
[DllImport("advapi32.dll", SetLastError=true)]
private static extern bool SetServiceStatus(IntPtr handle, ref ServiceStatus serviceStatus);
4. To implement the Start Pending status, add the following code to the beginning of the OnStart method:
// Update the service state to Start Pending.
ServiceStatus serviceStatus = new ServiceStatus();
serviceStatus.dwCurrentState = ServiceState.SERVICE_START_PENDING;
serviceStatus.dwWaitHint = 100000;
SetServiceStatus(this.ServiceHandle, ref serviceStatus);
5. Add code to set the status to Running at the end of the OnStart method.
// Update the service state to Running.
serviceStatus.dwCurrentState = ServiceState.SERVICE_RUNNING;
SetServiceStatus(this.ServiceHandle, ref serviceStatus);
In Visual Basic:
' Update the service state to Running.
serviceStatus.dwCurrentState = ServiceState.SERVICE_RUNNING
SetServiceStatus(Me.ServiceHandle, serviceStatus)
6. (Optional) Repeat this procedure for the OnStop method.
Caution
The Service Control Manager uses the dwWaitHint and dwCheckpoint members of the SERVICE_STATUS structure to determine how much time to wait for a Windows Service to start or shut down. If your OnStart and OnStop methods run long, your service can request more time by calling SetServiceStatus again with an incremented dwCheckPoint value.
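The checkpoint idea described in this caution can be sketched as follows. This is an illustrative fragment, not part of the walkthrough: it assumes the ServiceStatus struct and the SetServiceStatus declaration from the earlier steps, and the startup loop and DoSlowStartupStep helper are hypothetical placeholders for real startup work.

```csharp
// Hypothetical sketch of a slow OnStart that keeps the SCM informed.
// Assumes the ServiceStatus struct and SetServiceStatus P/Invoke shown earlier.
protected override void OnStart(string[] args)
{
    ServiceStatus serviceStatus = new ServiceStatus();
    serviceStatus.dwCurrentState = ServiceState.SERVICE_START_PENDING;
    serviceStatus.dwWaitHint = 10000;  // allow up to 10 seconds between checkpoints
    serviceStatus.dwCheckPoint = 0;
    SetServiceStatus(this.ServiceHandle, ref serviceStatus);

    for (int step = 0; step < 3; step++)  // placeholder for real startup phases
    {
        DoSlowStartupStep(step);          // hypothetical helper doing slow work
        serviceStatus.dwCheckPoint++;     // signal continued progress to the SCM
        SetServiceStatus(this.ServiceHandle, ref serviceStatus);
    }

    serviceStatus.dwCurrentState = ServiceState.SERVICE_RUNNING;
    SetServiceStatus(this.ServiceHandle, ref serviceStatus);
}
```

Each SetServiceStatus call with an incremented dwCheckPoint restarts the Service Control Manager's wait, so the service effectively gets dwWaitHint milliseconds per checkpoint rather than for the entire startup.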
Before you can run a Windows Service, you need to install it, which registers it with the Service Control Manager. You can add installers to your project that handle the registration details.
To create the installers for your service
1. In Solution Explorer, open the context menu for MyNewService.cs or MyNewService.vb, and then choose View Designer.
2. Click the background of the designer to select the service itself, instead of any of its contents.
3. Open the context menu for the designer window (if you’re using a pointing device, right-click inside the window), and then choose Add Installer.
By default, a component class that contains two installers is added to your project. The component is named ProjectInstaller, and the installers it contains are the installer for your service and the installer for the service's associated process.
4. In Design view for ProjectInstaller, choose serviceInstaller1 for a Visual C# project, or ServiceInstaller1 for a Visual Basic project.
5. In the Properties window, make sure the ServiceName property is set to MyNewService.
6. Set the Description property to some text, such as "A sample service". This text appears in the Services window and helps the user identify the service and understand what it’s used for.
7. Set the DisplayName property to the text that you want to appear in the Services window in the Name column. For example, you can enter "MyNewService Display Name". This name can be different from the ServiceName property, which is the name used by the system (for example, when you use the net start command to start your service).
8. Set the StartType property to Automatic.
Installer Properties for a Windows Service
9. In the designer, choose serviceProcessInstaller1 for a Visual C# project, or ServiceProcessInstaller1 for a Visual Basic project. Set the Account property to LocalSystem. This will cause the service to be installed and to run on a local service account.
Security Note
The LocalSystem account has broad permissions, including the ability to write to the event log. Use this account with caution, because it might increase your risk of attacks from malicious software. For other tasks, consider using the LocalService account, which acts as a non-privileged user on the local computer and presents anonymous credentials to any remote server. This example fails if you try to use the LocalService account, because it needs permission to write to the event log.
For more information about installers, see How to: Add Installers to Your Service Application.
A Windows Service, like any other executable, can accept command-line arguments, or startup parameters. When you add code to process startup parameters, users can start your service with their own custom startup parameters by using the Services window in the Windows Control Panel. However, these startup parameters are not persisted the next time the service starts. To set startup parameters permanently, you can set them in the registry, as shown in this procedure.
Note
Before you decide to add startup parameters, consider whether that is the best way to pass information to your service. Although startup parameters are easy to use and to parse, and users can easily override them, they might be harder for users to discover and use without documentation. Generally, if your service requires more than just a few startup parameters, you should consider using the registry or a configuration file instead. Every Windows Service has an entry in the registry under HKLM\System\CurrentControlSet\services. Under the service's key, you can use the Parameters subkey to store information that your service can access. You can use application configuration files for a Windows Service the same way you do for other types of programs. For example code, see AppSettings.
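For the configuration-file alternative mentioned in the note above, a minimal sketch could look like this. The key names are invented for illustration and are not part of the walkthrough:

```xml
<!-- App.config for the service; the key names here are hypothetical. -->
<configuration>
  <appSettings>
    <add key="EventSourceName" value="MySource" />
    <add key="LogName" value="MyNewLog" />
  </appSettings>
</configuration>
```

The service could then read these values at startup with System.Configuration.ConfigurationManager.AppSettings["EventSourceName"], instead of parsing command-line arguments.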
Adding startup parameters
1. In the Main method in Program.cs or in MyNewService.Designer.vb, add an argument for the command line:
static void Main(string[] args)
{
    ServiceBase[] ServicesToRun;
    ServicesToRun = new ServiceBase[]
    {
        new MyNewService(args)
    };
    ServiceBase.Run(ServicesToRun);
}
2. Change the MyNewService constructor as follows:
public MyNewService(string[] args)
{
    InitializeComponent();
    string eventSourceName = "MySource";
    string logName = "MyNewLog";
    if (args.Count() > 0)
    {
        eventSourceName = args[0];
    }
    if (args.Count() > 1)
    {
        logName = args[1];
    }
    eventLog1 = new System.Diagnostics.EventLog();
    if (!System.Diagnostics.EventLog.SourceExists(eventSourceName))
    {
        System.Diagnostics.EventLog.CreateEventSource(
            eventSourceName, logName);
    }
    eventLog1.Source = eventSourceName;
    eventLog1.Log = logName;
}
This code sets the event source and log name according to the supplied startup parameters, or uses default values if no arguments are supplied.
3. To specify the command-line arguments, add the following code to the ProjectInstaller class in ProjectInstaller.cs or ProjectInstaller.vb:
protected override void OnBeforeInstall(IDictionary savedState)
{
    string parameter = "MySource1\" \"MyLogFile1";
    Context.Parameters["assemblypath"] = "\"" + Context.Parameters["assemblypath"] + "\" \"" + parameter + "\"";
    base.OnBeforeInstall(savedState);
}
This code modifies the ImagePath registry key, which typically contains the full path to the executable for the Windows Service, by adding the default parameter values. The quotation marks around the path (and around each individual parameter) are required for the service to start up correctly. To change the startup parameters for this Windows Service, users can change the parameters given in the ImagePath registry key, although the better way is to change it programmatically and expose the functionality to users in a friendly way (for example, in a management or configuration utility).
To build your service project
1. In Solution Explorer, open the context menu for your project, and then choose Properties. The property pages for your project appear.
2. On the Application tab, in the Startup object list, choose MyNewService.Program.
3. In Solution Explorer, open the context menu for your project, and then choose Build to build the project (Keyboard: Ctrl+Shift+B).
Now that you've built the Windows service, you can install it. To install a Windows service, you must have administrative credentials on the computer on which you're installing it.
To install a Windows Service
1. In Windows 7 and Windows Server, open the Developer Command Prompt under Visual Studio Tools in the Start menu. In Windows 8 or Windows 8.1, choose the Visual Studio Tools tile on the Start screen, and then run Developer Command Prompt with administrative credentials. (If you’re using a mouse, right-click on Developer Command Prompt, and then choose Run as Administrator.)
2. In the Command Prompt window, navigate to the folder that contains your project's output. For example, under your My Documents folder, navigate to Visual Studio 2013\Projects\MyNewService\bin\Debug.
3. Enter the following command:
installutil.exe MyNewService.exe
If the service installs successfully, installutil.exe will report success. If the system could not find InstallUtil.exe, make sure that it exists on your computer. This tool is installed with the .NET Framework to the folder %WINDIR%\Microsoft.NET\Framework[64]\framework_version. For example, the default path for the 32-bit version of the .NET Framework 4, 4.5, 4.5.1, and 4.5.2 is C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe.
For more information, see How to: Install and Uninstall Services.
To start and stop your service
1. In Windows, open the Start screen or Start menu, and type services.msc.
You should now see MyNewService listed in the Services window.
MyNewService in the Services window.
2. In the Services window, open the shortcut menu for your service, and then choose Start.
3. Open the shortcut menu for the service, and then choose Stop.
4. (Optional) From the command line, you can use the commands net start ServiceName and net stop ServiceName to start and stop your service.
To verify the event log output of your service
1. In Visual Studio, open Server Explorer (Keyboard: Ctrl+Alt+S), and access the Event Logs node for the local computer.
2. Locate the listing for MyNewLog (or MyLogFile1, if you used the optional procedure to add command-line arguments) and expand it. You should see entries for the two actions (start and stop) your service has performed.
Use the Event Viewer to see the event log entries.
To uninstall your service
1. Open a developer command prompt with administrative credentials.
2. In the Command Prompt window, navigate to the folder that contains your project's output. For example, under your My Documents folder, navigate to Visual Studio 2013\Projects\MyNewService\bin\Debug.
3. Enter the following command:
installutil.exe /u MyNewService.exe
If the service uninstalls successfully, installutil.exe will report that your service was successfully removed. For more information, see How to: Install and Uninstall Services.
Next Steps
You can create a standalone setup program that others can use to install your Windows service, but it requires additional steps. ClickOnce doesn't support Windows services, so you can't use the Publish Wizard. You can use a full edition of InstallShield, which Microsoft doesn't provide. For more information about InstallShield, see InstallShield Limited Edition. You can also use the Windows Installer XML Toolset to create an installer for a Windows service.
You might explore the use of a ServiceController component, which enables you to send commands to the service you have installed.
You can use an installer to create an event log when the application is installed instead of creating the event log when the application runs. Additionally, the event log will be deleted by the installer when the application is uninstalled. For more information, see the EventLogInstaller reference page.
© 2015 Microsoft
Sir, I wanted to know how we write a program in C to add numbers, without using an arithmetic operator, where the digits are entered by the user?
Answer / niranjan vg
#include <stdio.h>

int main()
{
    int a, b, sum, carry;

    printf("\n Enter the numbers : ");
    scanf("%d%d", &a, &b);

    sum = a ^ b;
    carry = a & b;          /* produce an extra carry bit if present */
    while (carry != 0)
    {
        carry <<= 1;        /* shift on every iteration so the carry
                               gets added to the next digit */
        a = sum;
        b = carry;
        sum = a ^ b;        /* perform the XOR operation */
        carry = a & b;      /* calculate the new value of the carry */
    }
    printf("\n The sum is %d\n", sum);
    return 0;
}
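The same XOR-and-carry technique can be packaged as a standalone function so it can be checked against ordinary addition (a sketch; the function name is mine, not from the answer above):

```c
/* Add two non-negative integers using only bitwise operators:
   XOR produces the sum without carries, AND marks the carry
   positions; shift the carries left and repeat until none remain. */
unsigned bitwise_add(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned carry = a & b;   /* bits that generate a carry */
        a = a ^ b;                /* partial sum, carries dropped */
        b = carry << 1;           /* carries move one bit left */
    }
    return a;
}
```

The loop terminates because each pass pushes the remaining carries at least one bit further left.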
Answer / sheshivardhan reddy.rayala
Using the aadd() function, we can add the arguments without using an arithmetic operator.
MP3 Hunter: download single MP3 songs
I used Button1 to read an MP3 file's frame bytes into a List(Of Byte()), then used Button3 to write all of these out under a new file name, and Windows Media Player had no trouble playing the new file made up of all the frames from the List(Of Byte()).
With https://www.audacityteam.org/ you can adjust a track's name, disc, year and genre. Tags are supported for mp3, ogg, flac, wav.
MP3 files are just like WAV files but are compressed to 1/10th the size, yet maintain high sound quality. A typical song is about 3.5 MB and can be downloaded in less than 10 minutes over a 56k modem connection. If you don't understand what a Megabyte is, just remember that it is 1/10th the size:
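The size figure above is simple arithmetic (a sketch; the three-minute/128 kbps inputs below are just example values): bytes = bitrate in kbit/s × 1000 ÷ 8 × seconds.

```c
/* Size in bytes of a constant-bitrate MP3: kbps -> bits/s -> bytes/s,
   multiplied by the duration.  Illustrative only. */
unsigned long mp3_size_bytes(unsigned kbps, unsigned seconds) {
    return (unsigned long)kbps * 1000UL / 8UL * (unsigned long)seconds;
}
```

A three-minute (180 s) track at 128 kbps works out to 2,880,000 bytes (about 2.9 MB); at 160 kbps, about 3.6 MB — the same ballpark as the 3.5 MB figure above.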
Well, I guessed right, but I can't hear any pronounced difference, and I doubt there is any audible difference (regardless of what the 50/50 stats claim). That doesn't mean 128 kbps is as good as 320. First of all, 128 = 128 is not always true; there are different codecs and configurations, and a good 128 encode can beat a poor 320 one. For example, this particular 128 kbps sample uses mid/side stereo, which sometimes gives better sound quality at a lower bitrate, while the 320 one doesn't — a little trick from the encoder, which for some reason wants to preserve low-bitrate audio. Then there is dynamic range: you will not hear the difference between a 1 kbps beep and a 1000 kbps beep. But yes, you will hear the difference between a well-ripped CD at 128 and at 320 kbps on most music tracks, regardless of what your audio system is, as long as it cost more than 10 bucks. I rip my CDs only in VBR with the highest settings, which gives me good sound quality and a small file size. That way there is almost no audible difference between CD and MP3 on cheap/mid-range systems of around 100–200 bucks.
What type of memory device is used in MP3 and MP4 players?
You can use DVD ripping software to rip a DVD to an audio format file and then load it onto your MP3 player. It's a very easy process. If you don't know how to begin, see a DVD ripper guide.
View Full Version : Run script AFTER page refresh?
Johnb21
01-05-2011, 02:39 AM
Is this possible? I am writing a script that will display an alert box that reads "your play was saved" after a user clicks the submit button on a form. The only problem is that after clicking save, the page refreshes. Here is the code I tried to use
function savedMessage() {
    document.getElementById("CmdFinish").addEventListener("click", function(e) {
        GM_setValue("saved", 1);
    }, false);

    if (GM_getValue("saved", 0) == 1) {
        alert("Your play has been saved");
        GM_setValue("saved", 0); // or GM_deleteValue("saved");
    }
}
The button ID is "CmdFinish" and the value is "saved". Is there ANY way I can make the script run only on a page refresh?
**Note**
The script is for a specific page and not the whole site, so it won't run on every refresh across the site, only when it's needed on the one page.
Nile
01-05-2011, 02:49 AM
You could try setting a cookie, or, when saving the page, you could add something like ?saved=true to the URL and then pull it back out using split and indexOf.
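A sketch of that second idea (the helper names and the exact flag are illustrative, not code from this thread):

```javascript
// True when the query string (e.g. window.location.search) carries
// the flag appended before the refresh.
function hasSavedFlag(search) {
  return search.indexOf("saved=true") !== -1;
}

// Append the flag to the URL the page reloads with, keeping any
// existing query string intact.
function addSavedFlag(url) {
  return url + (url.indexOf("?") === -1 ? "?" : "&") + "saved=true";
}
```

In the userscript, the click handler would send the browser to `addSavedFlag(location.href)`, and on load `hasSavedFlag(location.search)` would decide whether to show the alert (stripping the flag afterwards so it fires only once).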
EZ Archive Ads Plugin for vBulletin Copyright 2006 Computer Help Forum
Finding the image type of NSData or UIImage
I am loading an image from a URL provided by a third party. There is no file extension (or file name, for that matter) in the URL, since it is an obscured URL. I can take the data from it (in the form of NSData), load it into a UIImage, and display it fine.
I want to persist this data to a file. However, I don't know what format the data is in (PNG, JPG, BMP)? I assume it is JPG (since it is an image from the web), but is there a programmatic way to find out for sure? I have looked around StackOverflow and in the documentation and haven't been able to find anything.
TIA.
Edit: Do I really need the file extension? I am persisting it to external storage (Amazon S3), but given that it will always be used in the context of iOS or a browser (both of which seem fine interpreting the data without an extension), maybe this is not a problem.
If you have the NSData for the image file, you can guess the content type by looking at the first byte:

    + (NSString *)contentTypeForImageData:(NSData *)data
    {
        uint8_t c;
        [data getBytes:&c length:1];
        switch (c) {
            case 0xFF:
                return @"image/jpeg";
            case 0x89:
                return @"image/png";
            case 0x47:
                return @"image/gif";
            case 0x49:
            case 0x4D:
                return @"image/tiff";
        }
        return nil;
    }
Improving on wl's answer, here is a much more extensive and accurate way of predicting the image MIME type based on the signature. The code was heavily inspired by php's ext/standard/image.c.

    - (NSString *)mimeTypeByGuessingFromData:(NSData *)data
    {
        char bytes[12] = {0};
        [data getBytes:&bytes length:12];

        const char bmp[2] = {'B', 'M'};
        const char gif[3] = {'G', 'I', 'F'};
        const char swf[3] = {'F', 'W', 'S'};
        const char swc[3] = {'C', 'W', 'S'};
        const char jpg[3] = {0xff, 0xd8, 0xff};
        const char psd[4] = {'8', 'B', 'P', 'S'};
        const char iff[4] = {'F', 'O', 'R', 'M'};
        const char webp[4] = {'R', 'I', 'F', 'F'};
        const char ico[4] = {0x00, 0x00, 0x01, 0x00};
        const char tif_ii[4] = {'I', 'I', 0x2A, 0x00};
        const char tif_mm[4] = {'M', 'M', 0x00, 0x2A};
        const char png[8] = {0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a};
        const char jp2[12] = {0x00, 0x00, 0x00, 0x0c, 0x6a, 0x50, 0x20, 0x20,
                              0x0d, 0x0a, 0x87, 0x0a};

        if (!memcmp(bytes, bmp, 2)) {
            return @"image/x-ms-bmp";
        } else if (!memcmp(bytes, gif, 3)) {
            return @"image/gif";
        } else if (!memcmp(bytes, jpg, 3)) {
            return @"image/jpeg";
        } else if (!memcmp(bytes, psd, 4)) {
            return @"image/psd";
        } else if (!memcmp(bytes, iff, 4)) {
            return @"image/iff";
        } else if (!memcmp(bytes, webp, 4)) {
            return @"image/webp";
        } else if (!memcmp(bytes, ico, 4)) {
            return @"image/vnd.microsoft.icon";
        } else if (!memcmp(bytes, tif_ii, 4) || !memcmp(bytes, tif_mm, 4)) {
            return @"image/tiff";
        } else if (!memcmp(bytes, png, 8)) {
            return @"image/png";
        } else if (!memcmp(bytes, jp2, 12)) {
            return @"image/jp2";
        }

        return @"application/octet-stream"; // default type
    }

The method above recognizes the following image types:
• image/x-ms-bmp (bmp)
• image/gif (gif)
• image/jpeg (jpg, jpeg)
• image/psd (psd)
• image/iff (iff)
• image/webp (webp)
• image/vnd.microsoft.icon (ico)
• image/tiff (tif, tiff)
• image/png (png)
• image/jp2 (jp2)
Unfortunately, there is no simple way to get this kind of information from a UIImage instance, because its encapsulated bitmap data cannot be accessed.
If you are retrieving the image from a URL, you can probably inspect the HTTP response headers. Does the Content-Type header contain anything useful? (I imagine it would, since a browser would likely be able to display the image correctly, and it could only do that if the content type were set appropriately.)
Swift 3 version:

    let data: Data = UIImagePNGRepresentation(yourImage)!

    extension Data {
        var format: String {
            let array = [UInt8](self)
            let ext: String
            switch array[0] {
            case 0xFF:
                ext = "jpg"
            case 0x89:
                ext = "png"
            case 0x47:
                ext = "gif"
            case 0x49, 0x4D:
                ext = "tiff"
            default:
                ext = "unknown"
            }
            return ext
        }
    }
@Tai Le's Swift 3 solution copies the entire data into a byte array. If an image is large, that can lead to a crash. This solution copies only a single byte:

    import Foundation

    public extension Data {
        var fileExtension: String {
            var values = [UInt8](repeating: 0, count: 1)
            self.copyBytes(to: &values, count: 1)
            let ext: String
            switch values[0] {
            case 0xFF:
                ext = ".jpg"
            case 0x89:
                ext = ".png"
            case 0x47:
                ext = ".gif"
            case 0x49, 0x4D:
                ext = ".tiff"
            default:
                ext = ".png"
            }
            return ext
        }
    }
If it really matters to you, I think you will have to examine the byte stream. A JPEG will start with the bytes FF D8. A PNG will start with 89 50 4E 47 0D 0A 1A 0A. I don't know whether BMP has a similar header, but I don't think you are very likely to run into BMPs from the web in 2010.
But do you really care? Can't you just treat it as an unknown image and let Cocoa Touch do the work?
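To make the byte-stream idea concrete outside Objective-C, here is a plain-C sketch of the same signature check (the function name and the set of formats covered are mine, following the tables above):

```c
#include <stddef.h>
#include <string.h>

/* Classify an image buffer by its leading signature bytes.
   Returns a MIME type string, or "application/octet-stream"
   when the signature is not recognized. */
const char *guess_image_type(const unsigned char *buf, size_t len) {
    static const unsigned char png_sig[8] =
        {0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A};

    if (len >= 8 && memcmp(buf, png_sig, 8) == 0)
        return "image/png";
    if (len >= 3 && buf[0] == 0xFF && buf[1] == 0xD8 && buf[2] == 0xFF)
        return "image/jpeg";                      /* JPEG: FF D8 FF */
    if (len >= 3 && memcmp(buf, "GIF", 3) == 0)
        return "image/gif";
    if (len >= 2 && buf[0] == 'B' && buf[1] == 'M')
        return "image/x-ms-bmp";
    return "application/octet-stream";            /* unknown */
}
```

Note the length guards: a buffer shorter than a signature can never match it, which avoids reading past the end of small inputs.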
Implement a signature check for each known image format. Here is a quick Objective-C function that does that for PNG data:

    // Verify that NSData contains PNG data by checking the signature
    - (BOOL)isPNGData:(NSData *)data
    {
        // Verify that the PNG file signature matches
        static const unsigned char png_sign[8] = {137, 80, 78, 71, 13, 10, 26, 10};
        unsigned char sig[8] = {0, 0, 0, 0, 0, 0, 0, 0};
        if ([data length] <= 8) {
            return FALSE;
        }
        [data getBytes:&sig length:8];
        BOOL same = (memcmp(sig, png_sign, 8) == 0);
        return same;
    }
I made a library to check the image type of NSData:
https://github.com/sweetmandm/ImageFormatInspector
An alternative to the accepted answer is to check the UTI with the Image I/O framework. You can get the image type from the UTI. Try this:

    CGImageSourceRef imgSrc = CGImageSourceCreateWithData((CFDataRef)data, NULL);
    NSString *uti = (NSString *)CGImageSourceGetType(imgSrc);
    NSLog(@"%@", uti);

For example, the UTI of a GIF image is "com.compuserve.gif" and the UTI of a PNG image is "public.png". BUT you cannot get a UTI from an image that the Image I/O framework does not recognize.
A Framework for QoS-based Routing in the Internet
RFC 2386
Document Type RFC - Informational (August 1998; No errata)
Last updated 2013-03-02
Network Working Group E. Crawley
Request for Comments: 2386 Argon Networks
Category: Informational R. Nair
Arrowpoint
B. Rajagopalan
NEC USA
H. Sandick
Bay Networks
August 1998
A Framework for QoS-based Routing in the Internet
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (1998). All Rights Reserved.
ABSTRACT
QoS-based routing has been recognized as a missing piece in the
evolution of QoS-based service offerings in the Internet. This
document describes some of the QoS-based routing issues and
requirements, and proposes a framework for QoS-based routing in the
Internet. This framework is based on extending the current Internet
routing model of intra and interdomain routing to support QoS.
1. SCOPE OF DOCUMENT & PHILOSOPHY
This document proposes a framework for QoS-based routing, with the
objective of fostering the development of an Internet-wide solution
while encouraging innovations in solving the many problems that
arise. QoS-based routing has many complex facets and it is
recommended that the following two-pronged approach be employed
towards its development:
1. Encourage the growth and evolution of novel intradomain QoS-based
routing architectures. This is to allow the development of
independent, innovative solutions that address the many QoS-based
routing issues. Such solutions may be deployed in autonomous
systems (ASs), large and small, based on their specific needs.
Crawley, et. al. Informational [Page 1]
RFC 2386 A Framework for QoS-based Routing August 1998
2. Encourage simple, consistent and stable interactions between ASs
implementing routing solutions developed as above.
This approach follows the traditional separation between intra and
interdomain routing. It allows solutions like QOSPF [GKOP98, ZSSC97],
Integrated PNNI [IPNNI] or other schemes to be deployed for
intradomain routing without any restriction, other than their ability
to interact with a common, and perhaps simple, interdomain routing
protocol. The need to develop a single, all encompassing solution to
the complex problem of QoS-based routing is therefore obviated. As a
practical matter, there are many different views on how QoS-based
routing should be done. Much overall progress can be made if an
opportunity exists for various ideas to be developed and deployed
concurrently, while some consensus on the interdomain routing
architecture is being developed. Finally, this routing model is
perhaps the most practical from an evolution point of view. It is
superfluous to say that the eventual success of a QoS-based Internet
routing architecture would depend on the ease of evolution.
The aim of this document is to describe the QoS-based routing issues,
identify basic requirements on intra and interdomain routing, and
describe an extension of the current interdomain routing model to
support QoS. It is not an objective of this document to specify the
details of intradomain QoS-based routing architectures. This is left
up to the various intradomain routing efforts that might follow. Nor
is it an objective to specify the details of the interface between
reservation protocols such as RSVP and QoS-based routing. The
specific interface functionality needed, however, would be clear from
the intra and interdomain routing solutions devised. In the
intradomain area, the goal is to develop the basic routing
requirements while allowing maximum freedom for the development of
solutions. In the interdomain area, the objectives are to identify
the QoS-based routing functions, and facilitate the development or
enhancement of a routing protocol that allows relatively simple
interaction between domains.
In the next section, a glossary of relevant terminology is given. In
Section 3, the objectives of QoS-based routing are described and the
issues that must be dealt with by QoS-based Internet routing efforts
are outlined. In Section 4, some requirements on intradomain routing
are defined. These requirements are purposely broad, putting few
constraints on solution approaches. The interdomain routing model and
issues are described in Section 5 and QoS-based multicast routing is
discussed in Section 6. The interaction between QoS-based routing
and resource reservation protocols is briefly considered in Section
7. Security considerations are listed in Section 8 and related work
is described in Section 9. Finally, summary and conclusions are
presented in Section 10.
2. GLOSSARY
The following glossary lists the terminology used in this document
and an explanation of what is meant. Some of these terms may have
different connotations, but when used in this document, their meaning
is as given.
Alternate Path Routing : A routing technique where multiple paths,
rather than just the shortest path, between a source and a
destination are utilized to route traffic. One of the objectives of
alternate path routing is to distribute load among multiple paths in
the network.
Autonomous System (AS): A routing domain which has a common
administrative authority and consistent internal routing policy. An
AS may employ multiple intradomain routing protocols internally and
interfaces to other ASs via a common interdomain routing protocol.
Source: A host or router that can be identified by a unique unicast
IP address.
Unicast destination: A host or router that can be identified by a
unique unicast IP address.
Multicast destination: A multicast IP address indicating all hosts
and routers that are members of the corresponding group.
IP flow (or simply "flow"): An IP packet stream from a source to a
destination (unicast or multicast) with an associated Quality of
Service (QoS) (see below) and higher level demultiplexing
information. The associated QoS could be "best-effort".
Quality-of-Service (QoS): A set of service requirements to be met by
the network while transporting a flow.
Service class: The definitions of the semantics and parameters of a
specific type of QoS.
Integrated services: The Integrated Services model for the Internet
defined in RFC 1633 allows for integration of QoS services with the
best effort services of the Internet. The Integrated Services
(IntServ) working group in the IETF has defined two service classes,
Controlled Load Service [W97] and Guaranteed Service [SPG97].
RSVP: The ReSerVation Protocol [BZBH97]. A QoS signaling protocol
for the Internet.
Path: A unicast or multicast path.
Unicast path: A sequence of links from an IP source to a unicast IP
destination, determined by the routing scheme for forwarding packets.
Multicast path (or Multicast Tree): A subtree of the network topology
in which all the leaves and zero or more interior nodes are members
of the same multicast group. A multicast path may be per-source, in
which case the subtree is rooted at the source.
Flow set-up: The act of establishing state in routers along a path to
satisfy the QoS requirement of a flow.
Crankback: A technique where a flow setup is recursively backtracked
along the partial flow path up to the first node that can determine
an alternative path to the destination.
QoS-based routing: A routing mechanism under which paths for flows
are determined based on some knowledge of resource availability in
the network as well as the QoS requirement of flows.
Route pinning: A mechanism to keep a flow path fixed for a duration
of time.
Flow Admission Control (FAC): A process by which it is determined
whether a link or a node has sufficient resources to satisfy the QoS
required for a flow. FAC is typically applied by each node in the
path of a flow during flow set-up to check local resource
availability.
Higher-level admission control: A process by which it is determined
whether or not a flow set-up should proceed, based on estimates and
policy requirements of the overall resource usage by the flow.
Higher-level admission control may result in the failure of a flow
set-up even when FAC at each node along the flow path indicates
resource availability.
3. QOS-BASED ROUTING: BACKGROUND AND ISSUES
3.1 Best-Effort and QoS-Based Routing
Routing deployed in today's Internet is focused on connectivity and
typically supports only one type of datagram service called "best
effort" [WC96]. Current Internet routing protocols, e.g. OSPF, RIP,
use "shortest path routing", i.e. routing that is optimized for a
single arbitrary metric, administrative weight or hop count. These
routing protocols are also "opportunistic," using the current
shortest path or route to a destination. Alternate paths with
acceptable but non-optimal cost can not be used to route traffic
(shortest path routing protocols do allow a router to alternate among
several equal cost paths to a destination).
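As a concrete illustration (not part of the RFC — the four-node topology and link weights below are invented for the example), the "single arbitrary metric" computation that OSPF- and RIP-style best-effort routing performs is just a shortest-path search over one link weight:

```c
#include <limits.h>

#define N 4  /* number of nodes in the toy topology */

/* Classic Dijkstra over a single additive metric; w[u][v] == 0 means
   "no link".  Returns the cost of the shortest path from src to dst. */
int shortest_path_cost(int w[N][N], int src, int dst) {
    int dist[N], done[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int iter = 0; iter < N; iter++) {
        /* pick the unfinished node with the smallest tentative cost */
        int u = 0, found = 0;
        for (int i = 0; i < N; i++)
            if (!done[i] && (!found || dist[i] < dist[u])) { u = i; found = 1; }
        if (!found || dist[u] == INT_MAX) break;
        done[u] = 1;
        /* relax the links out of u */
        for (int v = 0; v < N; v++)
            if (w[u][v] > 0 && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
    return dist[dst];
}
```

Everything the RFC goes on to add — multiple metrics, consumable resources such as available bandwidth, and alternate (non-optimal) paths — is precisely what this single-metric, single-best-path computation cannot express.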
QoS-based routing must extend the current routing paradigm in three
basic ways. First, to support traffic using integrated-services
class of services, multiple paths between node pairs will have to be
calculated. Some of these new classes of service will require the
distribution of additional routing metrics, e.g. delay, and available
bandwidth. If any of these metrics change frequently, routing updates
can become more frequent thereby consuming network bandwidth and
router CPU cycles.
Second, today's opportunistic routing will shift traffic from one
path to another as soon as a "better" path is found. The traffic
will be shifted even if the existing path can meet the service
requirements of the existing traffic. If routing calculation is tied
to frequently changing consumable resources (e.g. available
bandwidth) this change will happen more often and can introduce
routing oscillations as traffic shifts back and forth between
alternate paths. Furthermore, frequently changing routes can increase
the variation in the delay and jitter experienced by the end users.
Third, as mentioned earlier, today's optimal path routing algorithms
do not support alternate routing. If the best existing path cannot
admit a new flow, the associated traffic cannot be forwarded even if
an adequate alternate path exists.
3.2 QoS-Based Routing and Resource Reservation
It is important to understand the difference between QoS-based
routing and resource reservation. While resource reservation
protocols such as RSVP [BZBH97] provide a method for requesting and
reserving network resources, they do not provide a mechanism for
determining a network path that has adequate resources to accommodate
the requested QoS. Conversely, QoS-based routing allows the
determination of a path that has a good chance of accommodating the
requested QoS, but it does not include a mechanism to reserve the
required resources.
Consequently, QoS-based routing is usually used in conjunction with
some form of resource reservation or resource allocation mechanism.
Simple forms of QoS-based routing have been used in the past for Type
of Service (TOS) routing [M98]. In the case of OSPF, a different
shortest-path tree can be computed for each of the 8 TOS values in
the IP header [ISI81]. Such mechanisms can be used to select
specially provisioned paths but do not completely assure that
resources are not overbooked along the path. As long as strict
resource management and control are not needed, mechanisms such as
TOS-based routing are useful for separating whole classes of traffic
over multiple routes. Such mechanisms might work well with the
emerging Differential Services efforts [BBCD98].
Combining a resource reservation protocol with QoS-based routing
allows fine control over the route and resources at the cost of
additional state and setup time. For example, a protocol such as RSVP
may be used to trigger QoS-based routing calculations to meet the
needs of a specific flow.
3.3 QoS-Based Routing: Objectives
Under QoS-based routing, paths for flows would be determined based
on some knowledge of resource availability in the network, as well as
the QoS requirement of flows. The main objectives of QoS-based
routing are:
1. Dynamic determination of feasible paths: QoS-based routing can
determine a path, from among possibly many choices, that has a
good chance of accommodating the QoS of the given flow. Feasible
path selection may be subject to policy constraints, such as path
cost, provider selection, etc.
2. Optimization of resource usage: A network state-dependent QoS-
based routing scheme can aid in the efficient utilization of
network resources by improving the total network throughput. Such
a routing scheme can be the basis for efficient network
engineering.
3. Graceful performance degradation: State-dependent routing can
compensate for transient inadequacies in network engineering
(e.g., during focused overload conditions), giving better
throughput and a more graceful performance degradation as
compared to a state-insensitive routing scheme [A84].
QoS-based routing in the Internet, however, raises many issues:
- How do routers determine the QoS capability of each outgoing link
and reserve link resources? Note that some of these links may be
virtual, over ATM networks and others may be broadcast multi-
access links.
- What is the granularity of routing decision (i.e., destination-
based, source and destination-based, or flow-based)?
- What routing metrics are used and how are QoS-accommodating paths
computed for unicast flows?
- How are QoS-accommodating paths computed for multicast flows with
different reservation styles and receiver heterogeneity?
- What are the performance objectives while computing QoS-based
paths?
- What are the administrative control issues?
- What factors affect the routing overheads?, and
- How is scalability achieved?
Some of these issues are discussed briefly next. Interdomain routing
is discussed in Section 5.
3.4 QoS Determination and Resource Reservation
To determine whether the QoS requirements of a flow can be
accommodated on a link, a router must be able to determine the QoS
available on the link. It is still an open issue as to how the QoS
availability is determined for broadcast multiple access links (e.g.,
Ethernet). A related problem is the reservation of resources over
such links. Solutions to these problems are just emerging [GPSS98].
Similar problems arise when a router is connected to a large non-
broadcast multiple access network, such as ATM. In this case, if the
destination of a flow is outside the ATM network, the router may have
multiple egress choices. Furthermore, the QoS availability on the ATM
paths to each egress point may be different. The issues then are,
o how does a router determine all the egress choices across the
ATM network?
o how does it determine what QoS is available over the path to
each egress point?, and
o what QoS value does the router advertise for the ATM link.
Typically, IP routing over ATM (e.g., NHRP) allows the selection of a
single egress point in the ATM network, and the procedure does not
incorporate any knowledge of the QoS required over the path. An
approach like I-PNNI [IPNNI] would be helpful here, although it
introduces some complexity.
An additional problem with resource reservation is how to determine
what resources have already been allocated to a multicast flow. The
availability of this information during path computation improves the
chances of finding a path to add a new receiver to a multicast flow.
QOSPF [ZSSC97] handles this problem by letting routers broadcast
reserved resource information to other routers in their area.
Alternate path routing [ZES97] deals with this issue by using probe
messages to find a path with sufficient resources. Path QoS
Computation (PQC) method, proposed in [GOA97], propagates bandwidth
allocation information in RSVP PATH messages. A router receiving the
PATH message gets an indication of the resource allocation only on
those links in the path to itself from the source. Allocation for
the same flow on other remote branches of the multicast tree is not
available. Thus, the PQC method may not be sufficient to find
feasible QoS-accommodating paths to all receivers.
3.5 Granularity of Routing Decision
Routing in the Internet is currently based only on the destination
address of a packet. Many multicast routing protocols require
routing based on the source AND destination of a packet. The
Integrated Services architecture and RSVP allow QoS determination for
an individual flow between a source and a destination. This set of
routing granularities presents a problem for QoS routing solutions.
If routing based only on destination address is considered, then an
intermediate router will route all flows between different sources
and a given destination along the same path. This is acceptable if
the path has adequate capacity but a problem arises if there are
multiple flows to a destination that exceed the capacity of the link.
One version of QOSPF [ZSSC97] determines QoS routes based on source
and destination address. This implies that all traffic between a
given source and destination, regardless of the flow, will travel
down the same route. Again, the route must have capacity for all the
QoS traffic for the source/destination pair. The amount of routing
state also increases since the routing tables must include
source/destination pairs instead of just the destination.
The best granularity is found when routing is based on individual
flows but this incurs a tremendous cost in terms of the routing
state. Each QoS flow can be routed separately between any source and
destination. PQC [GOA97] and alternate path routing [ZES97], are
examples of solutions which operate at the flow level.
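The growth in routing state with finer granularity can be illustrated with a small sketch. The table layout and values below are hypothetical and not drawn from any of the cited schemes:

```python
# Hypothetical sketch: routing state at three granularities. Keys grow
# from destination-only, to (source, destination) pairs, to full
# per-flow identifiers, increasing the amount of state accordingly.

dest_table = {}   # destination -> next hop (current Internet model)
pair_table = {}   # (source, destination) -> next hop (QOSPF variant)
flow_table = {}   # (source, destination, flow_id) -> next hop (PQC-style)

def install(source, destination, flow_id, next_hop):
    # record the route at all three granularities for comparison
    dest_table[destination] = next_hop
    pair_table[(source, destination)] = next_hop
    flow_table[(source, destination, flow_id)] = next_hop

# Two flows between the same endpoints: the destination-based table
# cannot distinguish them, while the flow-based table can.
install("S", "D", 1, "hopA")
install("S", "D", 2, "hopB")
```

Note how the destination-based table retains only one entry for both flows, which is the source of the capacity problem described above.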
Both source/destination and flow-based routing may be susceptible to
packet looping under hop-by-hop forwarding. Suppose a node along a
flow or source/destination-based path loses the state information for
the flow. Also suppose that the flow-based route is different from
the regular destination-based route. The potential then exists for a
routing loop to form when the node forwards a packet belonging to the
flow using its destination-based routing table to a node that occurs
earlier on the flow-based path. This is because the latter node may
use its flow-based routing table to forward the packet again to the
former and this can go on indefinitely.
3.6 Metrics and Path Computation
3.6.1 Metric Selection and Representation
There are some considerations in defining suitable link and node
metrics [WC96]. First, the metrics must represent the basic network
properties of interest. Such metrics include residual bandwidth,
delay and jitter. Since the flow QoS requirements have to be mapped
onto path metrics, the metrics define the types of QoS guarantees the
network can support. Alternatively, QoS-based routing cannot support
QoS requirements that cannot be meaningfully mapped onto a reasonable
combination of path metrics. Second, path computation based on a
metric or a combination of metrics must not be too complex as to
render them impractical. In this regard, it is worthwhile to note
that path computation based on certain combinations of metrics (e.g.,
delay and jitter) is theoretically hard. Thus, the allowable
combinations of metrics must be determined while taking into account
the complexity of computing paths based on these metrics and the QoS
needs of flows. A common strategy to allow flexible combinations of
metrics while at the same time reduce the path computation complexity
is to utilize "sequential filtering". Under this approach, a
combination of metrics is ordered in some fashion, reflecting the
importance of different metrics (e.g., cost followed by delay, etc.).
Paths based on the primary metric are computed first (using a simple
algorithm, e.g., shortest path) and a subset of them are eliminated
based on the secondary metric and so forth until a single path is
found. This is an approximation technique and it trades off global
optimality for path computation simplicity (The filtering technique
may be simpler, depending on the set of metrics used. For example,
with bandwidth and cost as metrics, it is possible to first eliminate
the set of links that do not have the requested bandwidth and then
compute the least cost path using the remaining links.)
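The bandwidth-then-cost filtering described above can be sketched as follows; the graph representation and values are assumptions made only for illustration:

```python
import heapq

# Sketch of "sequential filtering" for the bandwidth-then-cost case:
# first eliminate links that lack the requested bandwidth, then run a
# least-cost (Dijkstra-style) search over the remaining links.

def filtered_least_cost_path(links, src, dst, bandwidth):
    # links: {(u, v): (available_bw, cost)}, treated as directed
    adj = {}
    for (u, v), (bw, cost) in links.items():
        if bw >= bandwidth:             # primary metric: bandwidth filter
            adj.setdefault(u, []).append((v, cost))
    # secondary metric: least cost over surviving links
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None                         # no feasible path
```

This trades global optimality for simplicity, as noted above: a cheaper path eliminated by the bandwidth filter is never reconsidered.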
Now, once suitable link and node metrics are defined, a uniform
representation of them is required across independent domains -
employing possibly different routing schemes - in order to derive
path metrics consistently (path metrics are obtained by the
composition of link and node metrics). Encoding of the maximum,
minimum, range, and granularity of the metrics are needed. Also, the
definitions of comparison and accumulation operators are required. In
addition, suitable triggers must be defined for indicating a
significant change from a minor change. The former will cause a
routing update to be generated. The stability of the QoS routes would
depend on the ability to control the generation of updates. With
interdomain routing, it is essential to obtain a fairly stable view
of the interconnection among the ASs.
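As an illustration of composing link metrics into path metrics, the sketch below uses the conventional accumulation operators: delay and jitter are additive along a path, while available bandwidth is concave (the path minimum). The dictionary representation is an assumption:

```python
# Hedged sketch of path-metric composition from per-link metrics, with
# a per-metric accumulation operator as discussed above.

def compose_path_metrics(link_metrics):
    # link_metrics: list of dicts with 'delay', 'jitter', 'bandwidth'
    return {
        "delay": sum(m["delay"] for m in link_metrics),      # additive
        "jitter": sum(m["jitter"] for m in link_metrics),    # additive
        "bandwidth": min(m["bandwidth"] for m in link_metrics),  # concave
    }
```

A uniform representation across domains would also have to agree on the range and granularity of each metric, as the text notes; this sketch assumes plain numeric values.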
3.6.2 Metric Hierarchy
A hierarchy can be defined among various classes of service based on
the degree to which traffic from one class can potentially degrade
service of traffic from lower classes that traverse the same link. In
this hierarchy, guaranteed constant bit rate traffic is at the top
and "best-effort" datagram traffic at the bottom. Classes providing
service higher in the hierarchy impact classes providing service in
lower levels. The same situation is not true in the other direction.
For example, a datagram flow cannot affect a real-time service. Thus,
it may be necessary to distribute and update different metrics for
each type of service in the worst case. But, several advantages
result by identifying a single default metric. For example, one
could derive a single metric combining the availability of datagram
and real-time service over a common substrate.
3.6.3 Datagram Flows
A delay-sensitive metric is probably the most obvious type of metric
suitable for datagram flows. However, it requires careful analysis to
avoid instabilities and to reduce storage and bandwidth requirements.
For example, a recursive filtering technique based on a simple and
efficient weighted averaging algorithm [NC94] could be used. This
filter is used to stabilize the metric. While it is adequate for
smoothing most loading patterns, it will not distinguish between
patterns consisting of regular bursts of traffic and random loading.
Among other stabilizing tools, is a minimum time between updates that
can help filter out high-frequency oscillations.
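A minimal sketch of such a recursive weighted-averaging filter, together with a minimum time between updates, is shown below. The weight and hold-down values are illustrative assumptions and are not taken from [NC94]:

```python
# Sketch: exponentially weighted moving average of delay samples, plus
# a hold-down timer that rate-limits routing updates to damp
# high-frequency oscillations.

class DelayMetric:
    def __init__(self, weight=0.3, min_update_interval=30.0):
        self.weight = weight                        # assumed smoothing weight
        self.min_update_interval = min_update_interval  # assumed hold-down (s)
        self.smoothed = None
        self.last_update_time = None

    def measure(self, sample):
        # recursive weighted average of incoming delay samples
        if self.smoothed is None:
            self.smoothed = sample
        else:
            self.smoothed = ((1 - self.weight) * self.smoothed
                             + self.weight * sample)
        return self.smoothed

    def should_advertise(self, now):
        # allow at most one routing update per hold-down interval
        if (self.last_update_time is None
                or now - self.last_update_time >= self.min_update_interval):
            self.last_update_time = now
            return True
        return False
```

As the text cautions, such a filter smooths most loading patterns but cannot distinguish regular traffic bursts from random load.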
3.6.4 Real-time Flows
In real-time quality-of-service, delay variation is generally more
critical than delay as long as the delay is not too high. Clearly,
voice-based applications cannot tolerate more than a certain level of
delay. The condition of varying delays may be expected to a greater
degree in a shared medium environment with datagrams, than in a
network implemented over a switched substrate. Routing a real-time
flow therefore reduces to an exercise in allocating the required
network resources while minimizing fragmentation of bandwidth. The
resulting situation is a bandwidth-limited minimum hop path from a
source to the destination. In other words, the router performs an
ordered search through paths of increasing hop count until it finds
one that meets all the bandwidth needs of the flow. To reduce
contention and the probability of false probes (due to inaccuracy in
route tables), the router could select a path randomly from a
"window" of paths which meet the needs of the flow and satisfy one of
three additional criteria: best-fit, first-fit or worst-fit. Note
that there is a similarity between the allocation of bandwidth and
the allocation of memory in a multiprocessing system. First-fit seems
to be appropriate for a system with a high real-time flow arrival
rates; and worst-fit is ideal for real-time flows with high holding
times. This rather nonintuitive result was shown in [NC94].
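The three fit criteria can be sketched as a selection over a window of candidate paths, each already known to satisfy the flow's bandwidth need. The path representation below is an assumption:

```python
# Hedged sketch of first-fit / best-fit / worst-fit selection from a
# window of candidate paths, as described above.

def select_path(candidates, demand, policy):
    # candidates: list of (path, bottleneck_bandwidth), in increasing
    # hop-count order, as produced by the ordered search
    feasible = [(p, bw) for p, bw in candidates if bw >= demand]
    if not feasible:
        return None
    if policy == "first-fit":       # first adequate path in search order
        return feasible[0][0]
    if policy == "best-fit":        # least leftover bandwidth
        return min(feasible, key=lambda c: c[1] - demand)[0]
    if policy == "worst-fit":       # most leftover bandwidth
        return max(feasible, key=lambda c: c[1] - demand)[0]
    raise ValueError("unknown policy")
```

Randomizing the choice within the window, as the text suggests, would further reduce contention between concurrent flow setups.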
3.6.5 Path Properties
Path computation by itself is merely a search technique, e.g.,
Shortest Path First (SPF) is a search technique based on dynamic
programming. The usefulness of the paths computed depends to a large
extent on the metrics used in evaluating the cost of a path with
respect to a flow.
Each link considered by the path computation engine must be evaluated
against the requirements of the flow, i.e., the cost of providing the
services required by the flow must be estimated with respect to the
capabilities of the link. This requires a uniform method of combining
features such as delay, bandwidth, priority and other service
features. Furthermore, the costs must reflect the lost opportunity
of using each link after routing the flow.
3.6.6 Performance Objectives
One common objective during path computation is to improve the total
network throughput. In this regard, merely routing a flow on any
path that accommodates its QoS requirement is not a good strategy. In
fact, this corresponds to uncontrolled alternate routing [SD95] and
may adversely impact performance at higher traffic loads. It is
therefore necessary to consider the total resource allocation for a
flow along a path, in relation to available resources, to determine
whether or not the flow should be routed on the path. Such a
mechanism is referred to in this document as "higher level admission
control". The goal of this is to ensure that the "cost" incurred by
the network in routing a flow with a given QoS is never more than the
revenue gained. The routing cost in this regard may be the lost
revenue in potentially blocking other flows that contend for the same
resources. The formulation of the higher level admission control
strategy, with suitable administrative hooks and with fairness to all
flows desiring entry to the network, is an issue. The fairness
problem arises because flows with smaller reservations tend to be
more successfully routed than flows with large reservations, for a
given engineered capacity. To guarantee a certain level of
acceptance rate for "larger" flows, without over-engineering the
network, requires a fair higher level admission control mechanism.
The application of higher level admission control to multicast
routing is discussed later.
3.7 Administrative Control
There are several administrative control issues. First, within an AS
employing state-dependent routing, administrative control of routing
behavior may be necessary. One example discussed earlier was higher
level admission control. Some others are described in this section.
Second, the control of interdomain routing based on policy is an
issue. The discussion of interdomain routing is deferred to Section
5.
Two areas that need administrative control, in addition to
appropriate routing mechanisms, are handling flow priority with
preemption, and resource allocation for multiple service classes.
3.7.1 Flow Priorities and Preemption
If there are critical flows that must be accorded higher priority
than other types of flows, a mechanism must be implemented in the
network to recognize flow priorities. There are two aspects to
prioritizing flows. First, there must be a policy to decide how
different users are allowed to set priorities for flows they
originate. The network must be able to verify that a given flow is
allowed to claim a priority level signaled for it. Second, the
routing scheme must ensure that a path with the requested QoS will be
found for a flow with a probability that increases with the priority
of the flow. In other words, for a given network load, a high
priority flow should be more likely to get a certain QoS from the
network than a lower priority flow requesting the same QoS. Routing
procedures for flow prioritization can be complex. Identification
and evaluation of different procedures are areas that require
investigation.
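One possible, purely hypothetical realization of priority with preemption on a single link is sketched below: a new flow that cannot otherwise be admitted may preempt enough strictly lower-priority flows to obtain its bandwidth. The data layout and semantics are assumptions, not a procedure defined by this framework:

```python
# Sketch: priority-aware admission with preemption on one link.
# Returns [] if admitted without preemption, the list of preempted
# flows if admitted by preemption, or None if the flow is rejected.

def admit_with_preemption(flows, capacity, new_flow):
    # flows: list of {'id', 'priority', 'bw'}; higher priority = more
    # important. The network is assumed to have verified the priority.
    used = sum(f["bw"] for f in flows)
    if used + new_flow["bw"] <= capacity:
        flows.append(new_flow)
        return []
    victims = []
    # preempt lowest-priority flows first, never an equal/higher one
    for f in sorted(flows, key=lambda f: f["priority"]):
        if f["priority"] >= new_flow["priority"]:
            break
        victims.append(f)
        used -= f["bw"]
        if used + new_flow["bw"] <= capacity:
            for v in victims:
                flows.remove(v)
            flows.append(new_flow)
            return victims
    return None                     # cannot admit even with preemption
```

Preempted flows would then have to be rerouted or torn down, which is part of what makes flow prioritization procedures complex, as noted above.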
3.7.2 Resource Control
If there are multiple service classes, it is necessary to engineer a
network to carry the forecasted traffic demands of each class. To do
this, router and link resources may be logically partitioned among
various service classes. It is desirable to have dynamic partitioning
whereby unused resources in various partitions are dynamically
shifted to other partitions on demand [ACFH92]. Dynamic sharing,
however, must be done in a controlled fashion in order to prevent
traffic under some service class from taking up more resources than
what was engineered for it for prolonged periods of time. The design
of such a resource sharing scheme, and its incorporation into the
QoS-based routing scheme are significant issues.
3.8 QoS-Based Routing for Multicast Flows
QoS-based multicast routing is an important problem, especially if
the notion of higher level admission control is included. The
dynamism in the receiver set allowed by IP multicast, and receiver
heterogeneity add to the problem. With straightforward implementation
of distributed heuristic algorithms for multicast path computation
[W88, C91], the difficulty is essentially one of scalability. To
accommodate QoS, multicast path computation at a router must have
knowledge of not only the id of subnets where group members are
present, but also the identity of branches in the existing tree. In
other words, routers must keep flow-specific state information. Also,
computing optimal shared trees based on the shared reservation style
[BZBH97], may require new algorithms. Multicast routing is discussed
in some detail in Section 6.
3.9 Routing Overheads
The overheads incurred by a routing scheme depend on the type of the
routing scheme, as well as the implementation. There are three types
of overheads to be considered: computation, storage and
communication. It is necessary to understand the implications of
choosing a routing mechanism in terms of these overheads.
For example, considering link state routing, the choice of the update
propagation mechanism is important since network state is dynamic and
changes relatively frequently. Specifically, a flooding mechanism
would result in many unnecessary message transmissions and
processing. Alternative techniques, such as tree-based forwarding
[R96], have to be considered. A related issue is the quantization of
state information to prevent frequent updating of dynamic state.
While coarse quantization reduces updating overheads, it may affect
the performance of the routing scheme. The tradeoff has to be
carefully evaluated. QoS-based routing incurs certain overheads
during flow establishment, for example, computing a source route.
Whether this overhead is disproportionate compared to the length of
the sessions is an issue. In general, techniques for the minimization
of routing-related overheads during flow establishment must be
investigated. Approaches that are useful include pre-computation of
routes, caching recently used routes, and TOS routing based on hints
in packets (e.g., the TOS field).
3.10 Scaling by Hierarchical Aggregation
QoS-based routing should be scalable, and hierarchical aggregation is
a common technique for scaling (e.g., [PNNI96]). But this introduces
problems with regard to the accuracy of the aggregated state
information [L95]. Also, the aggregation of paths under multiple
constraints is difficult. One of the difficulties is the risk of
accepting a flow based on inaccurate information, but not being able
to support the QoS requirements of the flow because the capabilities
the actual paths that are aggregated are not known during route
computation. Performance impacts of aggregating path metric
information must therefore be understood. A way to compensate for
inaccuracies is to use crankback, i.e., dynamic search for alternate
paths as a flow is being routed. But crankback increases the time to
set up a flow, and may adversely affect the performance of the
routing scheme under some circumstances. Thus, crankback must be used
judiciously, if at all, along with a higher level admission control
mechanism.
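The crankback behavior described above can be sketched as a hop-by-hop setup that backs up and tries alternate outgoing links when admission fails at setup time. The graph format and admission test below are assumptions for illustration only:

```python
# Sketch of crankback: setup proceeds hop by hop; when a link cannot
# actually admit the flow (because aggregated state was inaccurate),
# setup backs up to the previous hop and tries an alternate link.

def setup_with_crankback(adj, admit, node, dst, path=None):
    # adj: node -> list of candidate next hops
    # admit(u, v): True if link (u, v) can accommodate the flow,
    # checked against actual (not aggregated) state at setup time
    path = path or [node]
    if node == dst:
        return path
    for nxt in adj.get(node, []):
        if nxt in path or not admit(node, nxt):
            continue
        found = setup_with_crankback(adj, admit, nxt, dst, path + [nxt])
        if found:
            return found            # setup succeeded downstream
    return None                     # crank back to the previous hop
```

The cost of crankback is visible here: each failed branch lengthens flow setup, which is why the text recommends using it judiciously.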
4. INTRADOMAIN ROUTING REQUIREMENTS
At the intradomain level, the objective is to allow as much latitude
as possible in addressing the QoS-based routing issues. Indeed, there
are many ideas about how QoS-based routing services can be
provisioned within ASs. These range from on-demand path computation
based on current state information, to statically provisioned paths
supporting a few service classes.
Another aspect that might invite differing solutions is performance
optimization. Based on the technique used for this, intradomain
routing could be very sophisticated or rather simple. Finally, the
service classes supported, as well as the specific QoS engineered for
a service class, could differ from AS to AS. For instance, some ASs
may not support guaranteed service, while others may. Also, some ASs
supporting the service may be engineered for a better delay bound
than others. Thus, it requires considerable thought to determine the
high level requirements for intradomain routing that both supports
the overall view of QoS-based routing in the Internet and allows
maximum autonomy in developing solutions.
Our view is that certain minimum requirements must be satisfied by
intradomain routing in order to be qualified as "QoS-based" routing.
These are:
- The routing scheme must route a flow along a path that can
accommodate its QoS requirements, or indicate that the flow cannot
be admitted with the QoS currently being requested.
- The routing scheme must indicate disruptions to the current route
of a flow due to topological changes.
- The routing scheme must accommodate best-effort flows without any
resource reservation requirements. That is, present best effort
applications and protocol stacks need not have to change to run in
a domain employing QoS-based routing.
- The routing scheme may optionally support QoS-based multicasting
with receiver heterogeneity and shared reservation styles.
In addition, the following capabilities are also recommended:
- Capabilities to optimize resource usage.
- Implementation of higher level admission control procedures to
limit the overall resource utilization by individual flows.
Further requirements along these lines may be specified. The
requirements should capture the consensus view of QoS-based routing,
but should not preclude particular approaches (e.g., TOS-based
routing) from being implemented. Thus, the intradomain requirements
are expected to be rather broad.
5. INTERDOMAIN ROUTING
The fundamental requirement on interdomain QoS-based routing is
scalability. This implies that interdomain routing cannot be based
on highly dynamic network state information. Rather, such routing
must be aided by sound network engineering and relatively sparse
information exchange between independent routing domains. This
approach has the advantage that it can be realized by straightforward
extensions of the present Internet interdomain routing model. A
number of issues, however, need to be addressed to achieve this, as
discussed below.
5.1 Interdomain QoS-Based Routing Model
The interdomain QoS-based routing model is depicted below:
AS1 AS2 AS3
___________ _____________ ____________
| | | | | |
| B------B B----B |
| | | | | |
-----B----- B------------- --B---------
\ / /
\ / /
____B_____B____ _________B______
| | | |
| B-------B |
| | | |
| B-------B |
--------------- ----------------
AS4 AS5
Here, ASs exchange standardized routing information via border nodes
B. Under this model, each AS can itself consist of a set of
interconnected ASs, with standardized routing interaction. Thus, the
interdomain routing model is hierarchical. Also, each lowest level
AS employs an intradomain QoS-based routing scheme (proprietary or
standardized by intradomain routing efforts such as QOSPF). Given
this structure, some questions that arise are:
- What information is exchanged between ASs?
- What routing capabilities does the information exchange lead to?
(E.g., source routing, on-demand path computation, etc.)
- How is the external routing information represented within an AS?
- How are interdomain paths computed?
- What sort of policy controls may be exerted on interdomain path
computation and flow routing?, and
- How is interdomain QoS-based multicast routing accomplished?
At a high level, the answers to these questions depend on the routing
paradigm. Specifically, considering link state routing, the
information exchanged between domains would consist of an abstract
representation of the domains in the form of logical nodes and links,
along with metrics that quantify their properties and resource
availability. The hierarchical structure of the ASs may be handled
by a hierarchical link state representation, with appropriate metric
aggregation.
Link state routing may not necessarily be advantageous for
interdomain routing for the following reasons:
- One advantage of intradomain link state routing is that it would
allow fairly detailed link state information be used to compute
paths on demand for flows requiring QoS. The state and metric
aggregation used in interdomain routing, on the other hand, erodes
this property to a great degree.
- The usefulness of keeping track of the abstract topology and
metrics of a remote domain, or the interconnection between remote
domains is not obvious. This is especially the case when the remote
topology and metric encoding are lossy.
- ASs may not want to advertise any details of their internal
topology or resource availability.
- Scalability in interdomain routing can be achieved only if
information exchange between domains is relatively infrequent.
Thus, it seems practical to limit information flow between domains
as much as possible.
Compact information flow allows the implementation of QoS-enhanced
versions of existing interdomain protocols such as BGP-4. We look at
the interdomain routing issues in this context.
5.2 Interdomain Information Flow
The information flow between routing domains must enable certain
basic functions:
1. Determination of reachability to various destinations
2. Loop-free flow routes
3. Address aggregation whenever possible
4. Determination of the QoS that will be supported on the path to a
destination. The QoS information should be relatively static,
determined from the engineered topology and capacity of an AS
rather than ephemeral fluctuations in traffic load through the
AS. Ideally, the QoS supported in a transit AS should be allowed
to vary significantly only under exceptional circumstances, such
as failures or focused overload.
5. Determination, optionally, of multiple paths for a given
destination, based on service classes.
6. Expression of routing policies, including monetary cost, as a
function of flow parameters, usage and administrative factors.
Items 1-3 are already part of existing interdomain routing. Item 5 is
also a straightforward extension of the current model. The main
problem areas are therefore items 4 and 6.
The QoS of an end-to-end path is obtained by composing the QoS
available in each transit AS. Thus, border routers must first
determine what the locally available QoS is in order to advertise
routes to both internal and external destinations. The determination
of local "AS metrics" (corresponding to link metrics in the
intradomain case) should not be subject to too much dynamism. Thus,
the issue is how to define such metrics and what triggers an
occasional change that results in re-advertisements of routes.
The approach suggested in this document is not to compute paths based
on residual or instantaneous values of AS metrics (which can be
dynamic), but utilize only the QoS capabilities engineered for
aggregate transit flows. Such engineering may be based on the
knowledge of traffic to be expected from each neighboring AS and the
corresponding QoS needs. This information may be obtained based on
contracts agreed upon prior to the provisioning of services. The AS
metric then corresponds to the QoS capabilities of the "virtual path"
engineered through the AS (for transit traffic) and a different
metric may be used for different neighbors. This is illustrated in
the following figure.
AS1 AS2 AS3
___________ _____________ ____________
| | | | | |
| B------B1 B2----B |
| | | | | |
-----B----- B3------------ --B---------
\ /
\ /
____B_____B____
| |
| |
| |
| |
---------------
AS4
Here, B1 may utilize an AS metric specific for AS1 when computing
path metrics to be advertised to AS1. This metric is based on the
resources engineered in AS2 for transit traffic from AS1. Similarly,
B3 may utilize a different metric when computing path metrics to be
advertised to AS4. Now, it is assumed that as long as traffic flow
into AS2 from AS1 or AS4 does not exceed the engineered values, these
path metrics would hold. Excess traffic due to transient
fluctuations, however, may be handled as best effort or marked with a
discard bit.
Thus, this model is different from the intradomain model, where end
nodes pick a path dynamically based on the QoS needs of the flow to
be routed. Here, paths within ASs are engineered based on presumed,
measured or declared traffic and QoS requirements. Under this model,
an AS can contract for routes via multiple transit ASs with different
QoS requirements. For instance, AS4 above can use both AS1 and AS2 as
transits for same or different destinations. Also, a QoS contract
between one AS and another may generate another contract between the
second and a third AS and so forth.
An issue is what triggers the recomputation of path metrics within an
AS. Failures or other events that prevent engineered resource
allocation should certainly trigger recomputation. Recomputation
should not be triggered in response to arrival of flows within the
engineered limit.
5.3 Path Computation
Path computation for an external destination at a border node is
based on reachability, path metrics and local policies of selection.
If there are multiple selection criteria (e.g., delay, bandwidth,
cost, etc.), multiple alternatives may have to be maintained as well
propagated by border nodes. Selection of a path from among many
alternatives would depend on the QoS requests of flows, as well as
policies. Path computation may also utilize any heuristics for
optimizing resource usage.
5.4 Flow Aggregation
An important issue in interdomain routing is the amount of flow state
to be processed by transit ASs. Reducing the flow state by
aggregation techniques must therefore be seriously considered. Flow
aggregation means that transit traffic through an AS is classified
into a few aggregated streams rather than being routed at the
individual flow level. For example, an entry border router may
classify various transit flows entering an AS into a few coarse
categories, based on the egress node and QoS requirements of the
flows. Then, the aggregated stream for a given traffic class may be
routed as a single flow inside the AS to the exit border router. This
router may then present individual flows to different neighboring ASs
and the process repeats at each entry border router. Under this
scenario, it is essential that entry border routers keep track of the
resource requirements for each transit flow and apply admission
control to determine whether the aggregate requirement from any
neighbor exceeds the engineered limit. If so, some policy must be
invoked to deal with the excess traffic. Otherwise, it may be assumed
that aggregated flows are routed over paths that have adequate
resources to guarantee QoS for the member flows. Finally, it is
possible that entry border routers at a transit AS may prefer not to
aggregate flows if finer grain routing within the AS may be more
efficient (e.g., to aid load balancing within the AS).
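The entry-border-router behavior described above can be sketched as follows. The class names, limits, and data layout are illustrative assumptions:

```python
# Sketch: an entry border router classifies transit flows into coarse
# (egress, QoS class) streams and applies admission control against a
# per-neighbor engineered limit, as described above.

class EntryBorderRouter:
    def __init__(self, engineered_limits):
        # engineered_limits: neighbor AS -> contracted bandwidth
        self.engineered_limits = engineered_limits
        self.reserved = {}   # neighbor AS -> bandwidth admitted so far
        self.streams = {}    # (egress, qos_class) -> member flow ids

    def admit(self, neighbor, flow_id, egress, qos_class, bandwidth):
        used = self.reserved.get(neighbor, 0)
        if used + bandwidth > self.engineered_limits.get(neighbor, 0):
            return False     # excess traffic: a policy must handle it
        self.reserved[neighbor] = used + bandwidth
        # fold the flow into an aggregated stream routed as one flow
        self.streams.setdefault((egress, qos_class), []).append(flow_id)
        return True
```

Inside the AS, each aggregated stream is then routed as a single flow toward its egress border router, keeping transit flow state coarse.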
5.5 Path Cost Determination
It is hoped that the integrated services Internet architecture would
allow providers to charge for IP flows based on their QoS
requirements. A QoS-based routing architecture can aid in
distributing information on expected costs of routing flows to
various destinations via different domains. Clearly, from a
provider's point of view, there is a cost incurred in guaranteeing
QoS to flows. This cost could be a function of several parameters,
some related to flow parameters, others based on policy. From a
user's point of view, the consequence of requesting a particular QoS
for a flow is the cost incurred, and hence the selection of providers
may be based on cost. A routing scheme can aid a provider in
distributing the costs in routing to various destinations, as a
function of several parameters, to other providers or to end users.
In the interdomain routing model described earlier, the costs to a
destination will change as routing updates are passed through a
transit domain. One of the goals of the routing scheme should be to
maintain a uniform semantics for cost values (or functions) as they
are handled by intermediate domains. As an example, consider the cost
function generated by border node B1 in domain A and passed to node
B2 in domain B below. The routing update may be injected into domain
B by B2 and finally passed to B4 in domain C by router B3. Domain B
may interpret the cost value received from domain A in any way it
wants, for instance, adding a locally significant component to it.
But when this cost value is passed to domain C, the meaning of it
must be what domain A intended, plus the incremental cost of
transiting domain B, but not what domain B uses internally.
Domain A Domain B Domain C
____________ ___________ ____________
| | | | | |
| B1------B2 B3---B4 |
| | | | | |
------------ ----------- ------------
A problem with charging for a flow is the determination of the cost
when the QoS promised for the flow was not actually delivered.
Clearly, when a flow is routed via multiple domains, it must be
determined whether each domain delivers the QoS it declares possible
for traffic through it.
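The intended cost semantics can be sketched as a simple composition: the value a domain passes onward is the upstream cost plus its own transit increment, independent of any internally used representation. The values below are hypothetical:

```python
# Sketch of uniform cost semantics across domains, per the B1/B2/B3/B4
# example above: what domain B advertises to domain C is domain A's
# cost plus B's transit increment, not B's internal value.

def readvertise_cost(received_cost, transit_increment):
    # uniform external semantics: upstream cost + local transit cost
    return received_cost + transit_increment

def end_to_end_cost(origin_cost, transit_increments):
    # compose the cost across a chain of transit domains
    cost = origin_cost
    for inc in transit_increments:
        cost = readvertise_cost(cost, inc)
    return cost
```

A domain is free to add a locally significant component for internal use, as the text allows, so long as the externally advertised value obeys this composition.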
6. QOS-BASED MULTICAST ROUTING
The goals of QoS-based multicast routing are as follows:
- Scalability to large groups with dynamic membership
- Robustness in the presence of topological changes
- Support for receiver-initiated, heterogeneous reservations
- Support for shared reservation styles, and
- Support for "global" admission control, i.e., administrative
control of resource consumption by the multicast flow.
The RSVP multicast flow model is as follows. The sender of a
multicast flow advertises the traffic characteristics periodically to
the receivers. On receipt of an advertisement, a receiver may
generate a message to reserve resources along the flow path from the
sender. Receiver reservations may be heterogeneous. Other multicast
models may be considered.
The multicast routing scheme attempts to determine a path from the
sender to each receiver that can accommodate the requested
reservation. The routing scheme may attempt to maximize network
resource utilization by minimizing the total bandwidth allocated to
the multicast flow, or by optimizing some other measure.
6.1 Scalability, Robustness and Heterogeneity
When addressing scalability, two aspects must be considered:
1. The overheads associated with receiver discovery. This overhead
is incurred when determining the multicast tree for forwarding
best-effort sender traffic characterization to receivers.
2. The overheads associated with QoS-based multicast path
computation. This overhead is incurred when flow-specific
state information has to be collected by a router to determine
QoS-accommodating paths to a receiver.
Depending on the multicast routing scheme, one or both of these
aspects become important. For instance, under the present RSVP model,
reservations are established on the same path over which sender
traffic characterizations are sent, and hence there is no path
computation overhead. On the other hand, under the proposed QOSPF
model [ZSSC97] of multicast source routing, receiver discovery
overheads are incurred by MOSPF [M94] receiver location broadcasts,
and additional path computation overheads are incurred due to the
need to keep track of existing flow paths. Scaling of QoS-based
multicast depends on both these scaling issues. However, scalable
best-effort multicasting is really not in the domain of QoS-based
routing work (solutions for this are being devised by the IDMR WG
[BCF94, DEFV94]). QoS-based multicast routing may build on these
solutions to achieve overall scalability.
There are several options for QoS-based multicast routing. Multicast
source routing is one under which multicast trees are computed by the
first-hop router from the source, based on sender traffic
advertisements. The advantage of this is that it blends nicely with
the present RSVP signaling model. Also, this scheme works well when
receiver reservations are homogeneous and the same as the maximum
reservation derived from sender advertisement. The disadvantages of
this scheme are the extra effort needed to accommodate heterogeneous
reservations and the difficulties in optimizing resource allocation
based on shared reservations.
In these regards, a receiver-oriented multicast routing model seems
to have some advantage over multicast source routing. Under this
model:
1. Sender traffic advertisements are multicast over a best-effort
tree which can be different from the QoS-accommodating tree for
sender data.
2. Receiver discovery overheads are minimized by utilizing a
scalable scheme (e.g., PIM, CBT), to multicast sender traffic
characterization.
3. Each receiver-side router independently computes a QoS-
accommodating path from the source, based on the receiver
reservation. This path can be computed based on unicast routing
information only, or with additional multicast flow-specific
state information. In any case, multicast path computation is
broken up into multiple, concurrent unicast path computations.
4. Routers processing unicast reserve messages from receivers
aggregate resource reservations from multiple receivers.
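Step 4 above can be made concrete with a small sketch. This is a hypothetical illustration, not from the RFC: for a link shared by heterogeneous downstream receivers, one common aggregation rule is to reserve the maximum of the downstream requests rather than their sum, since the upstream link carries only one copy of the flow.

```python
# Hypothetical sketch of reservation aggregation at a router merging
# unicast reserve messages from several downstream branches.

def aggregate_reservation(downstream_requests):
    """downstream_requests: bandwidth (e.g. in kb/s) requested by each
    downstream branch sharing this upstream link. The upstream link
    carries one copy of the flow, so the maximum request suffices."""
    return max(downstream_requests) if downstream_requests else 0

# Three heterogeneous receivers ask for 64, 128 and 256 kb/s; the
# shared upstream link needs only 256 kb/s.
upstream_reservation = aggregate_reservation([64, 128, 256])
```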
Flow-specific state information may be limited in Step 3 to achieve
scalability [RN98]. In general, limiting flow-specific information in
making multicast routing decisions is important in any routing model.
The advantages of this model are the ease with which heterogeneous
reservations can be accommodated, and the ability to handle shared
reservations. The disadvantages are the incompatibility with the
present RSVP signaling model, and the need to rely on reverse paths
when link state routing is not used. Both multicast source routing
and the receiver-oriented routing model described above utilize per-
source trees to route multicast flows. Another possibility is the
utilization of shared, per-group trees for routing flows. The
computation and usage of such trees require further work.
Finally, scalability at the interdomain level may be achieved if
QoS-based multicast paths are computed independently in each domain.
This principle is illustrated by the QOSPF multicast source routing
scheme which allows independent path computation in different OSPF
areas. It is easy to incorporate this idea in the receiver-oriented
model also. An evaluation of multicast routing strategies must take
into account the relative advantages and disadvantages of various
approaches, in terms of scalability features and functionality
supported.
6.2 Multicast Admission Control
Higher level admission control, as defined for unicast, prevents
excessive resource consumption by flows when traffic load is high.
Such an admission control strategy must be applied to multicast flows
when the flow path computation is receiver-oriented or sender-
oriented. In essence, a router computing a path for a receiver must
determine whether the incremental resource allocation for the
receiver is excessive under some administratively determined
admission control policy. Other admission control criteria, based on
the total resource consumption of a tree may be defined.
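The incremental admission check described above might be sketched as follows. The budget model, function name and numbers are assumptions for illustration only:

```python
# A router grafting a new receiver onto the multicast tree compares
# the incremental resource allocation against an administratively
# configured budget for the whole tree.

def admit_receiver(new_links_bw, tree_budget, tree_allocated):
    """new_links_bw: bandwidth to be reserved on each link newly
    added to the tree for this receiver. Admit only if the total
    tree allocation stays within the administrative budget."""
    incremental = sum(new_links_bw)
    return tree_allocated + incremental <= tree_budget

# With a 500 kb/s budget and 400 kb/s already allocated, a receiver
# adding two 40 kb/s links fits, but one adding 150 kb/s does not.
ok = admit_receiver([40, 40], tree_budget=500, tree_allocated=400)
too_big = admit_receiver([150], tree_budget=500, tree_allocated=400)
```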
7. QOS-BASED ROUTING AND RESOURCE RESERVATION PROTOCOLS
There must clearly be a well-defined interface between routing and
resource reservation protocols. The nature of this interface, and the
interaction between routing and resource reservation has to be
determined carefully to avoid incompatibilities. The importance of
this can be readily illustrated in the case of RSVP.
RSVP has been designed to operate independent of the underlying
routing scheme. Under this model, RSVP PATH messages establish the
reverse path for RESV messages. In essence, this model is not
compatible with QoS-based routing schemes that compute paths after
receiver reservations are received. While this incompatibility can be
resolved in a simple manner for unicast flows, multicast with
heterogeneous receiver requirements is a more difficult case. For
this, reconciliation between RSVP and QoS-based routing models is
necessary. Such a reconciliation, however, may require some changes
to the RSVP model depending on the QoS-based routing model [ZES97,
ZSSC97, GOA97]. On the other hand, QoS-based routing schemes may be
designed with RSVP compatibility as a necessary goal. How this
affects scalability and other performance measures must be
considered.
8. SECURITY CONSIDERATIONS
Security issues that arise with routing in general are about
maintaining the integrity of the routing protocol in the presence of
unintentional or malicious introduction of information that may lead
to protocol failure [P88]. QoS-based routing requires additional
security measures both to validate QoS requests for flows and to
prevent resource-depletion type of threats that can arise when flows
are allowed to make arbitrary resource requests along various paths
in the network. Excessive resource consumption by an errant flow
results in denial of resources to legitimate flows. While these
situations may be prevented by setting up proper policy constraints,
charging models and policing at various points in the network, the
formalization of such protection requires work [BCCH94].
9. RELATED WORK
"Adaptive" routing, based on network state, has a long history,
especially in circuit-switched networks. Such routing has also been
implemented in early datagram and virtual circuit packet networks.
More recently, this type of routing has been the subject of study in
the context of ATM networks, where the traffic characteristics and
topology are substantially different from those of circuit-switched
networks [MMR96]. It is instructive to review the adaptive routing
methodologies, both to understand the problems encountered and
possible solutions.
Fundamentally, there are two aspects to adaptive, network state-
dependent routing:
1. Measuring and gathering network state information, and
2. Computing routes based on the available information.
Depending on how these two steps are implemented, a variety of
routing techniques are possible. These differ in the following
respects:
- what state information is used
- whether local or global state is used
- what triggers the propagation of state information
- whether routes are computed in a distributed or centralized manner
- whether routes are computed on-demand, pre-computed, or in a
hybrid manner
- what optimization criteria, if any, are used in computing routes
- whether source routing or hop by hop routing is used, and
- how alternate route choices are explored
It should be noted that most of the adaptive routing work has focused
on unicast routing. Multicast routing is one of the areas that would
be prominent with Internet QoS-based routing. We treat this
separately, and the following review considers only unicast routing.
This review is not exhaustive, but gives a brief overview of some of
the approaches.
9.1 Optimization Criteria
The most common optimization criteria used in adaptive routing is
throughput maximization or delay minimization. A general formulation
of the optimization problem is the one in which the network revenue
is maximized, given that there is a cost associated with routing a
flow over a given path [MMR96, K88]. In general, global optimization
solutions are difficult to implement, and they rely on a number of
assumptions on the characteristics of the traffic being routed
[MMR96]. Thus, the practical approach has been to treat the routing
of each flow (VC, circuit or packet stream to a given destination)
independently of the routing of other flows. Many such routing
schemes have been implemented.
9.2 Circuit Switched Networks
Many adaptive routing concepts have been proposed for circuit-
switched networks. An example of a simple adaptive routing scheme is
sequential alternate routing [T88]. This is a hop-by-hop
destination-based routing scheme where only local state information
is utilized. Under this scheme, a routing table is computed for each
node, which lists multiple output link choices for each destination.
When a call set-up request is received by a node, it tries each
output link choice in sequence, until it finds one that can
accommodate the call. Resources are reserved on this link, and the
call set-up is forwarded to the next node. The set-up either reaches
the destination, or is blocked at some node. In the latter case, the
set-up can be cranked back to the previous node or a failure
declared. Crankback allows the previous node to try an alternate
path. The routing table under this scheme can be computed in a
centralized or distributed manner, based only on the topology of the
network. For instance, a k-shortest-path algorithm can be used to
determine k alternate paths from a node with distinct initial links
[T88]. Some mechanism must be implemented during path computation or
call set-up to prevent looping.
Performance studies of this scheme illustrate some of the pitfalls of
alternate routing in general, and crankback in particular [A84, M86,
YS87]. Specifically, alternate routing improves the throughput when
traffic load is relatively light, but adversely affects the
performance when traffic load is heavy. Crankback could further
degrade the performance under these conditions. In general,
uncontrolled alternate routing (with or without crankback) can be
harmful in a heavily utilized network, since circuits tend to be
routed along longer paths thereby utilizing more capacity. This is an
obvious, but important result that applies to QoS-based Internet
routing also.
The problem with alternate routing is that both direct routed (i.e.,
over shortest paths) and alternate routed calls compete for the same
resource. At higher loads, allocating these resources to alternate
routed calls result in the displacement of direct routed calls and
hence the alternate routing of these calls. Therefore, many
approaches have been proposed to limit the flow of alternate routed
calls under high traffic loads. These schemes are designed for the
fully-connected logical topology of long distance telephone networks
(i.e., there is a logical link between every pair of nodes). In this
topology, direct routed calls always traverse a 1-hop path to the
destination and alternate routed calls traverse at most a 2-hop path.
"Trunk reservation" is a scheme whereby on each link a certain
bandwidth is reserved for direct routed calls [MS91]. Alternate
routed calls are allowed on a trunk as long as the remaining trunk
bandwidth is greater than the reserved capacity. Thus, alternate
routed calls cannot totally displace direct routed calls on a trunk.
This strategy has been shown to be very effective in preventing the
adverse effects of alternate routing.
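The trunk-reservation rule can be written down in a few lines. This is a minimal sketch under assumed capacities, not an implementation from [MS91]:

```python
# A directly routed call needs only spare capacity; an alternate-
# routed call must additionally leave the reserved slice of the
# trunk untouched, so it cannot displace direct traffic.

def admits(free_bw, demand, trunk_reserve, alternate_routed):
    if free_bw < demand:
        return False
    if alternate_routed:
        # remaining bandwidth after the call must exceed the reservation
        return free_bw - demand > trunk_reserve
    return True

# With 12 units free and 10 reserved for direct traffic, a 5-unit
# alternate-routed call is refused while a direct one is accepted.
alt = admits(12, 5, trunk_reserve=10, alternate_routed=True)
direct = admits(12, 5, trunk_reserve=10, alternate_routed=False)
```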
"Dynamic alternate routing" (DAR) is a strategy whereby alternate
routing is controlled by limiting the number of choices, in addition
to trunk reservation [MS91]. Under DAR, the source first attempts to
use the direct link to the destination. When blocked, the source
attempts to alternate route the call via a pre-selected neighbor. If
the call is still blocked, a different neighbor is selected for
alternate routing to this destination in the future. The present call
is dropped. DAR thus requires only local state information. Also, it
"learns" of good alternate paths by random sampling and sticks to
them as long as possible.
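DAR's "sticky" learning rule admits a compact sketch. The class below is a hypothetical rendering for illustration; the fully connected topology is abstracted into per-call free-bandwidth arguments:

```python
import random

# Keep using the current tandem node for a destination while it
# works; on a block, drop the call and pick a fresh tandem at
# random for future calls to that destination.

class DarSource:
    def __init__(self, neighbors, seed=0):
        self.neighbors = list(neighbors)
        self.tandem = {}                  # dest -> learned tandem choice
        self.rand = random.Random(seed)

    def route(self, dest, direct_free, via_free):
        """direct_free: spare bw on the direct trunk to dest;
        via_free[n]: spare bw on the 2-hop path through tandem n."""
        if direct_free > 0:
            return ("direct", dest)
        t = self.tandem.setdefault(dest, self.rand.choice(self.neighbors))
        if via_free.get(t, 0) > 0:
            return ("tandem", t)          # stick with the learned tandem
        # blocked: the call is dropped, and a different tandem is
        # selected for alternate routing to this destination in future
        self.tandem[dest] = self.rand.choice(self.neighbors)
        return None

src = DarSource(["B", "C"])
first = src.route("D", direct_free=0, via_free={"B": 1, "C": 1})
```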
More recent circuit-switched routing schemes utilize global state to
select routes for calls. An example is AT&T's Real-Time Network
Routing (RTNR) scheme [ACFH92]. Unlike schemes like DAR, RTNR handles
multiple classes of service, including voice and data at fixed rates.
RTNR utilizes a sophisticated per-class trunk reservation mechanism
with dynamic bandwidth sharing between classes. Also, when alternate
routing a call, RTNR utilizes the loading on all trunks in the
network to select a path. Because of the fully-connected topology,
disseminating status information is simple under RTNR; each node
simply exchanges status information directly with all others.
From the point of view of designing QoS-based Internet routing
schemes, there is much to be learned from circuit-switched routing.
For example, alternate routing and its control, and dynamic resource
sharing among different classes of traffic. It is, however, not
simple to apply some of the results to a general topology network
with heterogeneous multirate traffic. Work in the area of ATM network
routing described next illustrates this.
9.3 ATM Networks
The VC routing problem in ATM networks presents issues similar to
that encountered in circuit-switched networks. Not surprisingly, some
extensions of circuit-switched routing have been proposed. The goal
of these routing schemes is to achieve higher throughput as compared
to traditional shortest-path routing. The flows considered usually
have a single QoS requirement, i.e., bandwidth.
The first idea is to extend alternate routing with trunk reservation
to general topologies [SD95]. Under this scheme, a distance vector
routing protocol is used to build routing tables at each node with
multiple choices of increasing hop count to each destination. A VC
set-up is first routed along the primary ("direct") path. If
sufficient resources are not available along this path, alternate
paths are tried in the order of increasing hop count. A flag in the
VC set-up message indicates primary or alternate routing, and
bandwidth on links along an alternate path is allocated subject to
trunk reservation. The trunk reservation values are determined based
on some assumptions on traffic characteristics. Because the scheme
works only for a single data rate, the practical utility of it is
limited.
The next idea is to import the notion of controlled alternate routing
into traditional link state QoS-based routing [GKR96]. To do this,
first each VC is associated with a maximum permissible routing cost.
This cost can be set based on expected revenues in carrying the VC or
simply based on the length of the shortest path to the destination.
Each link is associated with a metric that increases exponentially
with its utilization. A switch computing a path for a VC simply
determines a least-cost feasible path based on the link metric and
the VC's QoS requirement. The VC is admitted if the cost of the path
is less than or equal to the maximum permissible routing cost. This
routing scheme thus limits the extent of "detour" a VC experiences,
thus preventing excessive resource consumption. This is a practical
scheme and the basic idea can be extended to hierarchical routing.
But the performance of this scheme has not been analyzed thoroughly.
A similar notion of admission control based on the connection route
was also incorporated in a routing scheme presented in [ACG92].
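The scheme of [GKR96] can be approximated by the sketch below: infeasible links are pruned, the rest are priced exponentially in their utilization, and the VC is admitted only if its least-cost path stays within the maximum permissible routing cost. The topology, metric constants and cost bound are assumptions, not values from the paper:

```python
import heapq

def link_cost(utilization, base=1.0, growth=10.0):
    # link metric that increases exponentially with utilization
    return base * growth ** utilization

def cheapest_feasible(graph, src, dst, demand):
    """graph[u] = list of (v, free_bw, utilization). Dijkstra over
    links that can carry the demand; returns (cost, path) or
    (None, None) if dst is unreachable."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, free_bw, util in graph.get(u, []):
            if free_bw >= demand and v not in seen:
                heapq.heappush(heap, (cost + link_cost(util), v, path + [v]))
    return None, None

graph = {"S": [("A", 10, 0.9), ("B", 10, 0.1)],
         "A": [("D", 10, 0.1)], "B": [("D", 10, 0.2)]}
cost, path = cheapest_feasible(graph, "S", "D", demand=5)
# Admit the VC only if the detour stays within the permissible cost.
admitted = cost is not None and cost <= 4.0
```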
Considering the ATM Forum PNNI protocol [PNNI96], a partial list of
its stated characteristics is as follows:
o Scales to very large networks
o Supports hierarchical routing
o Supports QoS
o Uses source routed connection setup
o Supports multiple metrics and attributes
o Provides dynamic routing
The PNNI specification is sub-divided into two protocols: a signaling
and a routing protocol. The PNNI signaling protocol is used to
establish point-to-point and point to multipoint connections and
supports source routing, crankback and alternate routing. PNNI source
routing allows loop free paths. Also, it allows each implementation
to use its own path computation algorithm. Furthermore, source
routing is expected to support incremental deployment of future
enhancements such as policy routing.
The PNNI routing protocol is a dynamic, hierarchical link state
protocol that propagates topology information by flooding it through
the network. The topology information is the set of resources (e.g.,
nodes, links and addresses) which define the network. Resources are
qualified by defined sets of metrics and attributes (delay, available
bandwidth, jitter, etc.) which are grouped by supported traffic
class. Since some of the metrics used will change frequently, e.g.,
available bandwidth, threshold algorithms are used to determine if
the change in a metric or attribute is significant enough to require
propagation of updated information. Other features include auto-
configuration of the routing hierarchy, connection admission control
(as part of path calculation), and aggregation and summarization of
topology and reachability information.
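The threshold rule mentioned above reduces flooding for rapidly varying metrics such as available bandwidth. A minimal sketch, with a 20% proportional threshold chosen purely as an assumption:

```python
# Flood a new advertisement only if the current value differs from
# the last advertised value by more than a configured proportion.

def significant_change(advertised, current, threshold=0.20):
    if advertised == 0:
        return current != 0
    return abs(current - advertised) / advertised > threshold

# 100 -> 115 stays quiet; 100 -> 60 triggers a new advertisement.
quiet = significant_change(100, 115)
flood = significant_change(100, 60)
```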
Despite its functionality, the PNNI routing protocol does not address
the issues of multicast routing, policy routing and control of
alternate routing. A problem in general with link state QoS-based
routing is that of efficient broadcasting of state information. While
flooding is a reasonable choice with static link metrics it may
impact the performance adversely with dynamic metrics.
Finally, Integrated PNNI [I-PNNI] has been designed from the start to
take advantage of the QoS Routing capabilities that are available in
PNNI and integrate them with routing for layer 3. This would provide
an integrated layer 2 and layer 3 routing protocol for networks that
include PNNI in the ATM core. The I-PNNI specification has been
under development in the ATM Forum and, at this time, has not yet
incorporated QoS routing mechanisms for layer 3.
9.4 Packet Networks
Early attempts at adaptive routing in packet networks had the
objective of delay minimization by dynamically adapting to network
congestion. Alternate routing based on k-shortest path tables, with
route selection based on some local measure (e.g., shortest output
queue) has been described [R76, YS81]. The original ARPAnet routing
scheme was a distance vector protocol with delay-based cost metric
[MW77]. Such a scheme was shown to be prone to route oscillations
[B82]. For this and other reasons, a link state delay-based routing
scheme was later developed for the ARPAnet [MRR80]. This scheme
demonstrated a number of techniques such as triggered updates,
flooding, etc., which are being used in OSPF and PNNI routing today.
Although none of these schemes can be called QoS-based routing
schemes, they had features that are relevant to QoS-based routing.
IBM's System Network Architecture (SNA) introduced the concept of
Class of Service (COS)-based routing [A79, GM79]. There were several
classes of service: interactive, batch, and network control. In
addition, users could define other classes. When starting a data
session an application or device would request a COS. Routing would
then map the COS into a statically configured route which marked a
path across the physical network. Since SNA is connection oriented,
a session was set up along this path and the application's or
device's data would traverse this path for the life of the session.
Initially, the service delivered to a session was based on the
network engineering and current state of network congestion. Later,
transmission priority was added to subarea SNA. Transmission
priority allowed more important traffic (e.g. interactive) to proceed
before less time-critical traffic (e.g. batch) and improved link and
network utilization. Transmission priority of a session was based on
its COS.
SNA later evolved to support multiple or alternate paths between
nodes. But, although assisted by network design tools, the network
administrator still had to statically configure routes. IBM later
introduced SNA's Advanced Peer to Peer Networking (APPN) [B85]. APPN
added new features to SNA including dynamic routing based on a link
state database. An application would use COS to indicate its traffic
requirements and APPN would calculate a path capable of meeting these
requirements. Each COS was mapped to a table of acceptable metrics
and parameters that qualified the nodes and links contained in the
APPN topology database. Metrics and parameters used as part of the
APPN route calculation include, but are not limited to: delay, cost
per minute, node congestion and security. The dynamic nature of APPN
allowed it to route around failures and reduce network configuration.
The service delivered by APPN was still based on the network
engineering, transmission priority and network congestion. IBM later
introduced an extension to APPN, High Performance Routing
(HPR) [IBM97]. HPR uses a congestion avoidance algorithm called
adaptive rate based (ARB) congestion control. Using predictive
feedback methods, the ARB algorithm prevents congestion and improves
network utilization. Most recently, an extension to the COS table
has been defined so that HPR routing could recognize and take
advantage of ATM QoS capabilities.
Considering IP routing, both IDRP [R92] and OSPF support type of
service (TOS)-based routing. While the IP header has a TOS field,
there is no standardized way of utilizing it for TOS specification
and routing. It seems possible to make use of the IP TOS feature,
along with TOS-based routing and proper network engineering, to do
QoS-based routing. The emerging differentiated services model is
generating renewed interest in TOS support. Among the newer schemes,
Source Demand Routing (SDR) [ELRV96] allows on-demand path
computation by routers and the implementation of strict and loose
source routing. The Nimrod architecture [CCM96] has a number of
concepts built in to handle scalability and specialized path
computation. Recently, some work has been done on QoS-based routing
schemes for the integrated services Internet. For example, in [M98],
heuristic schemes for efficient routing of flows with bandwidth
and/or delay constraints are described and evaluated.
10. SUMMARY AND CONCLUSIONS
In this document, a framework for QoS-based Internet routing was
defined. This framework adopts the traditional separation between
intra and interdomain routing. This approach is especially meaningful
in the case of QoS-based routing, since there are many views on how
QoS-based routing should be accomplished and many different needs.
The objective of this document was to encourage the development of
different solution approaches for intradomain routing, subject to
some broad requirements, while consensus on interdomain routing is
achieved. To this end, the QoS-based routing issues were described,
and some broad intradomain routing requirements and an interdomain
routing model were defined. In addition, QoS-based multicast routing
was discussed and a detailed review of related work was presented.
The deployment of QoS-based routing across multiple administrative
domains requires both the development of intradomain routing schemes
and a standard way for them to interact via a well-defined
interdomain routing mechanism. This document, while outlining the
issues that must be addressed, did not engage in the specification of
the actual features of the interdomain routing scheme. This would be
the next step in the evolution of wide-area, multidomain QoS-based
routing.
REFERENCES
[A79] V. Ahuja, "Routing and Flow Control in SNA", IBM Systems
Journal, 18 No. 2, pp. 298-314, 1979.
[A84] J. M. Akinpelu, "The Overload Performance of Engineered
Networks with Non-Hierarchical Routing", AT&T Technical
Journal, Vol. 63, pp. 1261-1281, 1984.
[ACFH92] G. R. Ash, J. S. Chen, A. E. Frey and B. D. Huang, "Real-Time
Network Routing in a Dynamic Class-of-Service Network",
Proceedings of ITC 13, 1992.
[ACG92] H. Ahmadi, J. Chen, and R. Guerin, "Dynamic Routing and Call
Control in High-Speed Integrated Networks", Proceedings of
ITC-13, pp. 397-403, 1992.
[B82] D. P. Bertsekas, "Dynamic Behavior of Shortest Path Routing
Algorithms for Communication Networks", IEEE Trans. Auto.
Control, pp. 60-74, 1982.
[B85] A. E. Baratz, "SNA Networks of Small Systems", IEEE JSAC,
May, 1985.
[BBCD98] Black, D., Blake, S., Carlson, M., Davies, E., Wang, Z., and
W. Weiss, "An Architecture for Differentiated Services",
Work in Progress.
[BCCH94] Braden, R., Clark, D., Crocker, D., and C. Huitema, "Report
of IAB Workshop on Security in the Internet Architecture",
RFC 1636, June 1994.
[BCF94] A. Ballardie, J. Crowcroft and P. Francis, "Core-Based
Trees: A Scalable Multicast Routing Protocol", Proceedings
of SIGCOMM `94.
[BCS94] Braden, R., Clark, D., and S. Shenker, "Integrated Services
in the Internet Architecture: An Overview", RFC 1633, July
1994.
[BZ92] S. Bahk and M. El Zarki, "Dynamic Multi-Path Routing and How
it Compares with Other Dynamic Routing Algorithms for High
Speed Wide Area Networks", Proc. SIGCOMM `92, pp. 53-64,
1992.
[BZBH97] Braden, R., Zhang, L., Berson, S., Herzog, S., and S. Jamin,
"Resource ReSerVation Protocol (RSVP) -- Version 1
Functional Spec", RFC 2205, September 1997.
[C91] C-H. Chow, "On Multicast Path Finding Algorithms",
Proceedings of the IEEE INFOCOM `91, pp. 1274-1283, 1991.
[CCM96] Castineyra, I., Chiappa, J., and M. Steenstrup, "The Nimrod
Routing Architecture", RFC 1992, August 1996.
[DEFV94] S. E. Deering, D. Estrin, D. Farinnacci, V. Jacobson, C-G.
Liu, and L. Wei, "An Architecture for Wide-Area Multicast
Routing", Technical Report, 94-565, ISI, University of
Southern California, 1994.
[ELRV96] Estrin, D., Li, T., Rekhter, Y., Varadhan, K., and D.
Zappala, "Source Demand Routing: Packet Format and
Forwarding Specification (Version 1)", RFC 1940, May 1996.
[GKR96] R. Gawlick, C. R. Kalmanek, and K. G. Ramakrishnan, "On-Line
Routing of Permanent Virtual Circuits", Computer
Communications, March, 1996.
[GPSS98] A. Ghanwani, J. W. Pace, V. Srinivasan, A. Smith and M.
Seaman, "A Framework for Providing Integrated Services over
Shared and Switched IEEE 802 LAN Technologies", Work in
Progress.
[GM79] J. P. Gray, T. B. McNeil, "SNA Multi-System Networking", IBM
Systems Journal, 18 No. 2, pp. 263-297, 1979.
[GOA97] Y. Goto, M. Ohta and K. Araki, "Path QoS Collection for
Stable Hop-by-Hop QoS Routing", Proc. INET '97, June, 1997.
[GKOP98] R. Guerin, S. Kamat, A. Orda, T. Przygienda, and D.
Williams, "QoS Routing Mechanisms and OSPF Extensions", Work
in Progress, March 1998.
[IBM97] IBM Corp, SNA APPN - High Performance Routing Architecture
Reference, Version 2.0, SV40-1018, February 1997.
[I-PNNI] ATM Forum Technical Committee. Integrated PNNI (I-PNNI) v1.0
Specification. af-96-0987r1, September 1996.
[ISI81] Postel, J., "Internet Protocol", STD 5, RFC 791, September
1981.
[JMW83] J. M. Jaffe, F. H. Moss, R. A. Weingarten, "SNA Routing:
Past, Present, and Possible Future", IBM Systems Journal,
pp. 417-435, 1983.
[K88] F.P. Kelly, "Routing in Circuit-Switched Networks:
Optimization, Shadow Prices and Decentralization", Adv.
Applied Prob., pp. 112-144, March, 1988.
[L95] W. C. Lee, "Topology Aggregation for Hierarchical Routing in
ATM Networks", ACM SIGCOMM Computer Communication Review,
1995.
[M86] L. G. Mason, "On the Stability of Circuit-Switched Networks
with Non-hierarchical Routing", Proc. 25th Conf. On Decision
and Control, pp. 1345-1347, 1986.
[M98] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998.
[M94] Moy, J., "MOSPF: Analysis and Experience", RFC 1585, March
1994.
[M98] Q. Ma, "Quality-of-Service Routing in Integrated Services
Networks", PhD thesis, Computer Science Department, Carnegie
Mellon University, 1998.
[MMR96] D. Mitra, J. Morrison, and K. G. Ramakrishnan, "ATM Network
Design and Optimization: A Multirate Loss Network
Framework", Proceedings of IEEE INFOCOM `96, 1996.
[MRR80] J. M. McQuillan, I. Richer and E. C. Rosen, "The New Routing
Algorithm for the ARPANET", IEEE Trans. Communications, pp.
711-719, May, 1980.
[MS91] D. Mitra and J. B. Seery, "Comparative Evaluations of
Randomized and Dynamic Routing Strategies for Circuit
Switched Networks", IEEE Trans. on Communications, pp. 102-
116, January, 1991.
[MW77] J. M. McQuillan and D. C. Walden, "The ARPANET Design
Decisions", Computer Networks, August, 1977.
[NC94] Nair, R. and D. Clemmensen, "Routing in Integrated
Services Networks", Proc. 2nd International Conference on
Telecom. Systems Modeling and Analysis, March 1994.
[P88] R. Perlman, "Network Layer Protocol with Byzantine
Robustness", Ph.D. Thesis, Dept. of EE and CS, MIT, August,
1988.
[PNNI96] ATM Forum PNNI subworking group, "Private Network-Network
Interface Spec. v1.0 (PNNI 1.0)", afpnni-0055.00, March
1996.
[R76] H. Rudin, "On Routing and "Delta Routing": A Taxonomy and
Performance Comparison of Techniques for Packet-Switched
Networks", IEEE Trans. Communications, pp. 43-59, January,
1976.
[R92] Y. Rekhter, "IDRP Protocol Analysis: Storage Overhead", ACM
Comp. Comm. Review, April, 1992.
[R96] B. Rajagopalan, "Efficient Link State Routing", Work in
Progress, available from [email protected].
[RN98] B. Rajagopalan and R. Nair, "Multicast Routing with Resource
Reservation", to appear in J. of High Speed Networks, 1998.
[SD95] S. Sibal and A. Desimone, "Controlling Alternate Routing in
General-Mesh Packet Flow Networks", Proceedings of ACM
SIGCOMM, 1995.
[SPG97] Shenker, S., Partridge, C., and R. Guerin, "Specification of
Guaranteed Quality of Service", RFC 2212, September 1997.
[T88] D. M. Topkis, "A k-Shortest-Path Algorithm for Adaptive
Routing in Communications Networks", IEEE Trans.
Communications, pp. 855-859, July, 1988.
[W88] B. M. Waxman, "Routing of Multipoint Connections", IEEE
JSAC, pp. 1617-1622, December, 1988.
AUTHORS' ADDRESSES
Bala Rajagopalan
NEC USA, C&C Research Labs
4 Independence Way
Princeton, NJ 08540
U.S.A
Phone: +1-609-951-2969
EMail: [email protected]
Raj Nair
Arrowpoint
235 Littleton Rd.
Westford, MA 01886
U.S.A
Phone: +1-508-692-5875, x29
EMail: [email protected]
Hal Sandick
Bay Networks, Inc.
1009 Slater Rd., Suite 220
Durham, NC 27703
U.S.A
Phone: +1-919-941-1739
EMail: [email protected]
Eric S. Crawley
Argon Networks, Inc.
25 Porter Rd.
Littleton, MA 01460
U.S.A
Phone: +1-508-486-0665
EMail: [email protected]
Full Copyright Statement
Copyright (C) The Internet Society (1998). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Signals
GObject's signals have nothing to do with standard UNIX signals: they connect arbitrary application-specific events with any number of listeners. For example, in GTK+, every user event (keystroke or mouse move) is received from the windowing system and generates a GTK+ event in the form of a signal emission on the widget object instance.
Each signal is registered in the type system together with the type on which it can be emitted: users of the type are said to connect to the signal on a given type instance when they register a closure to be invoked upon the signal emission. Users can also emit the signal by themselves or stop the emission of the signal from within one of the closures connected to the signal.
When a signal is emitted on a given type instance, all the closures connected to this signal on this type instance will be invoked. All the closures connected to such a signal represent callbacks whose signature looks like:
return_type function_callback (gpointer instance, ..., gpointer user_data);
Signal registration
To register a new signal on an existing type, we can use any of g_signal_newv, g_signal_new_valist or g_signal_new functions:
guint g_signal_newv (const gchar *signal_name,
GType itype,
GSignalFlags signal_flags,
GClosure *class_closure,
GSignalAccumulator accumulator,
gpointer accu_data,
GSignalCMarshaller c_marshaller,
GType return_type,
guint n_params,
GType *param_types);
The number of parameters to these functions is a bit intimidating but they are relatively simple:
• signal_name: is a string which can be used to uniquely identify a given signal.
• itype: is the instance type on which this signal can be emitted.
• signal_flags: partly defines the order in which closures which were connected to the signal are invoked.
• class_closure: this is the default closure for the signal: if it is not NULL upon the signal emission, it will be invoked upon this emission of the signal. The moment where this closure is invoked compared to other closures connected to that signal depends partly on the signal_flags.
• accumulator: this is a function pointer which is invoked after each closure has been invoked. If it returns FALSE, signal emission is stopped. If it returns TRUE, signal emission proceeds normally. It is also used to compute the return value of the signal based on the return value of all the invoked closures. For example, an accumulator could ignore NULL returns from closures; or it could build a list of the values returned by the closures.
• accu_data: this pointer will be passed down to each invocation of the accumulator during emission.
• c_marshaller: this is the default C marshaller for any closure which is connected to this signal.
• return_type: this is the type of the return value of the signal.
• n_params: this is the number of parameters this signal takes.
• param_types: this is an array of GTypes which indicate the type of each parameter of the signal. The length of this array is indicated by n_params.
As you can see from the above definition, a signal is basically a description of the closures which can be connected to this signal and a description of the order in which the closures connected to this signal will be invoked.
Signal connection
If you want to connect to a signal with a closure, you have three possibilities:
• You can register a class closure at signal registration: this is a system-wide operation. i.e.: the class closure will be invoked during each emission of a given signal on any of the instances of the type which supports that signal.
• You can use g_signal_override_class_closure which overrides the class closure of a given type. It is possible to call this function only on a derived type of the type on which the signal was registered. This function is of use only to language bindings.
• You can register a closure with the g_signal_connect family of functions. This is an instance-specific operation: the closure will be invoked only during emission of a given signal on a given instance.
It is also possible to connect a different kind of callback on a given signal: emission hooks are invoked whenever a given signal is emitted whatever the instance on which it is emitted. Emission hooks are used for example to get all mouse_clicked emissions in an application to be able to emit the small mouse click sound. Emission hooks are connected with g_signal_add_emission_hook and removed with g_signal_remove_emission_hook.
Signal emission
Signal emission is done through the use of the g_signal_emit family of functions.
void g_signal_emitv (const GValue *instance_and_params,
guint signal_id,
GQuark detail,
GValue *return_value);
• The instance_and_params array of GValues contains the list of input parameters to the signal. The first element of the array is the instance pointer on which to invoke the signal. The following elements of the array contain the list of parameters to the signal.
• signal_id identifies the signal to invoke.
• detail identifies the specific detail of the signal to invoke. A detail is a kind of magic token/argument which is passed around during signal emission and which is used by closures connected to the signal to filter out unwanted signal emissions. In most cases, you can safely set this value to zero. See the section called “The detail argument” for more details about this parameter.
• return_value holds the return value of the last closure invoked during emission if no accumulator was specified. If an accumulator was specified during signal creation, this accumulator is used to calculate the return value as a function of the return values of all the closures invoked during emission. If no closure is invoked during emission, the return_value is nonetheless initialized to zero/null.
Signal emission can be decomposed into six steps:
1. RUN_FIRST: if the G_SIGNAL_RUN_FIRST flag was used during signal registration and if there exists a class closure for this signal, the class closure is invoked.
2. EMISSION_HOOK: if any emission hook was added to the signal, they are invoked from first to last added. Accumulate return values.
3. HANDLER_RUN_FIRST: if any closures were connected with the g_signal_connect family of functions, and if they are not blocked (with the g_signal_handler_block family of functions), they are run here, from first to last connected.
4. RUN_LAST: if the G_SIGNAL_RUN_LAST flag was set during registration and if a class closure was set, it is invoked here.
5. HANDLER_RUN_LAST: if any closures were connected with the g_signal_connect_after family of functions, if they were not invoked during HANDLER_RUN_FIRST, and if they are not blocked, they are run here, from first to last connected.
6. RUN_CLEANUP: if the G_SIGNAL_RUN_CLEANUP flag was set during registration and if a class closure was set, it is invoked here. Signal emission is completed here.
If, at any point during emission (except in RUN_CLEANUP state), one of the closures or emission hook stops the signal emission with g_signal_stop_emission, emission jumps to RUN_CLEANUP state.
If, at any point during emission, one of the closures or emission hook emits the same signal on the same instance, emission is restarted from the RUN_FIRST state.
The accumulator function is invoked in all states, after invocation of each closure (except in RUN_EMISSION_HOOK and RUN_CLEANUP). It accumulates the closure return value into the signal return value and returns TRUE or FALSE. If, at any point, it does not return TRUE, emission jumps to RUN_CLEANUP state.
If no accumulator function was provided, the value returned by the last handler run will be returned by g_signal_emit.
The detail argument
All the functions related to signal emission or signal connection have a parameter named the detail. Sometimes, this parameter is hidden by the API but it is always there, in one form or another.
Of the three main connection functions, only one has an explicit detail parameter as a GQuark: g_signal_connect_closure_by_id. [6]
The two other functions, g_signal_connect_closure and g_signal_connect_data hide the detail parameter in the signal name identification. Their detailed_signal parameter is a string which identifies the name of the signal to connect to. The format of this string should match signal_name::detail_name. For example, connecting to the signal named notify::cursor_position will actually connect to the signal named notify with the cursor_position detail. Internally, the detail string is transformed to a GQuark if it is present.
Of the four main signal emission functions, one hides it in its signal name parameter: g_signal_emit_by_name. The other three have an explicit detail parameter as a GQuark again: g_signal_emit, g_signal_emitv and g_signal_emit_valist.
If a detail is provided by the user to the emission function, it is used during emission to match against the closures which also provide a detail. If a closure's detail does not match the detail provided by the user, it will not be invoked (even though it is connected to a signal which is being emitted).
This completely optional filtering mechanism is mainly used as an optimization for signals which are often emitted for many different reasons: the clients can filter out which events they are interested in before the closure's marshalling code runs. For example, this is used extensively by the notify signal of GObject: whenever a property is modified on a GObject, instead of just emitting the notify signal, GObject associates as a detail to this signal emission the name of the property modified. This allows clients who wish to be notified of changes to only one property to filter most events before receiving them.
As a simple rule, users can and should set the detail parameter to zero: this will disable completely this optional filtering for that signal.
[6] A GQuark is an integer which uniquely represents a string. It is possible to transform back and forth between the integer and string representations with the functions g_quark_from_string and g_quark_to_string.
© manpagez.com 2000-2017
Individual documents may contain additional copyright information.
JP2005149436A - Storage apparatus, control method for storage apparatus, job scheduling processing method, troubleshooting method and their program
Info
Publication number
JP2005149436A
JP2005149436A
Authority
JP
Japan
Prior art keywords
priority
step
device
job
logical device
Prior art date
Legal status: Pending
Application number
JP2003390239A
Other languages
Japanese (ja)
Inventor
Kentetsu Eguchi
Takao Watanabe
Yasutomo Yamamoto
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd
Priority to JP2003390239A
Publication of JP2005149436A publication Critical patent/JP2005149436A/en
Application status: Pending
Classifications
All classifications fall under G06F (G: Physics → G06: Computing; calculating; counting → G06F: Electric digital data processing):
• G06F 11/1441 — Responding to faults; error detection/correction by redundancy in operation; saving, restoring, recovering or retrying at system level; resetting or repowering
• G06F 12/0804 — Cache addressing with main memory updating
• G06F 12/0873 — Caches for peripheral storage systems (e.g. disk cache); mapping of cache memory to specific storage devices or parts thereof
• G06F 2212/261 — Storage comprising a plurality of storage devices
• G06F 3/061 — Dedicated interfaces to storage systems; improving I/O performance
• G06F 3/0617 — Improving the reliability of storage systems in relation to availability
• G06F 3/0631 — Configuration or reconfiguration of storage systems by allocating resources
• G06F 3/0689 — Disk arrays, e.g. RAID, JBOD
Abstract
PROBLEM TO BE SOLVED: To avoid, as far as possible, the loss of high-priority important data, the degradation of overall storage input/output processing, and performance impact on high-priority operations when a failure occurs in a storage that has a cache memory and disk devices.

SOLUTION: Logical devices, the unit of device provided to a host, are distributed across a plurality of physical devices composed of a plurality of disk devices in consideration of priorities set in advance for the logical devices, so that when a failure occurs, dirty data can be quickly reflected to the disk devices in priority order. In addition, jobs related to important operations are processed with high priority when the failure occurs, suppressing the decrease in host processing performance.

COPYRIGHT: (C)2005, JPO&NCIPI
Description
The present invention relates to a storage device (computer system) having a control device that is connected to disk devices by connection lines, is further coupled to a cache memory and a shared memory by an interconnection network, and processes input/output requests exchanged with a host via a port serving as the interface with that host. It also relates to a control method, a job scheduling processing method, and a failure processing method for such a storage device, and to their programs, and in particular to a data priority control technique for use when a failure occurs.
As prior art 1, the technique described in Japanese Patent Laid-Open No. 11-167521 is known. By adopting a common bus system, prior art 1 realizes a scalable system in which logical modules such as host adapters and storage device adapters, a cache memory, and disk devices are connected according to the system configuration (scale). In addition, each logical module, disk device, and common bus can be multiplexed to permit degraded operation, and hot swapping of each logical module and storage medium is supported, so the system can be maintained without being stopped.
As in prior art 1, many disk control devices that are interposed between a host and large-capacity disk devices and control data transfer between them have a cache memory as a temporary storage mechanism for the transferred data. However, since the cache memory is volatile, the stored data is lost when the power supply stops. Data may also be lost due to a hardware failure of the cache memory in the disk controller. To avoid such data loss, disk control devices are known that duplicate the cache memory and store write data in both copies. However, if a double failure occurs in the cache memory, the data is still lost.
Furthermore, prior art 1 has redundant power supplies and can continue operating even if one of the power supplies fails. In addition, in the event of a power outage, loss of data on the cache memory and the shared memory can be avoided by using a standby power supply held by the disk control device. To guarantee the data on the cache memory at the time of a power outage or power-supply failure, the dirty data on the cache memory is reflected to the disk devices.
Further, for subsequent write requests from the host, a synchronous write, i.e. a write process that reports write completion to the host only after the write data has been written to the disk device, is used to ensure data integrity. However, synchronous writes have the problem that response performance to the host is lower than with write-after.
Further, as prior art 2, the technique described in JP-A-2002-334049 is known. In prior art 2, the inflow of side-file data into a storage subsystem shared by multiple hosts performing asynchronous remote copy is restricted based on priorities set for the hosts, which prevents less important data from crowding the cache memory.
Further, as prior art 3, the technique described in Japanese Patent Laid-Open No. 2003-6016 is known. Prior art 3 describes that, in asynchronous remote copy, a copy priority is assigned to each logical volume group and copying of a logical volume group with higher priority is performed preferentially.
Japanese Patent Laid-Open No. 11-167521
JP 2002-334049 A
JP 2003-6016 A
In prior art 1, dirty data on the cache memory may be lost when a failure occurs. In preparation for a power outage, a standby power supply is provided to maintain the dirty data on the cache memory; in preparation for a cache memory failure, the cache memory and the shared memory, which holds data for managing the cache memory, are duplicated.
However, in prior art 1, if a power outage continues until the standby power supply can no longer be used, or if a double failure occurs in the cache memory or the shared memory, there is a problem in that data not yet reflected to disk is lost regardless of its importance. Further, since synchronous writes are used for data safety when a failure occurs, there is a problem in that the performance of storage input/output processing as a whole deteriorates.
The prior arts 2 and 3 do not take into account the occurrence of a failure (power failure or cache memory failure).
In order to solve the above problems, an object of the present invention is to provide a storage device capable of quickly evacuating data in a cache memory to disk devices when a failure occurs, thereby avoiding the loss of important, high-priority data, as well as a control method, a job scheduling processing method, a failure processing method, and programs therefor.
The storage device includes ports that are interfaces with hosts, a cache memory, a control device connected to the ports, the cache memory, and a shared memory by connection lines, and disk devices connected to the control device. The storage device accepts first priority information for the logical devices provided to the hosts from a management terminal and, according to the first priority information, associates a logical device with higher priority with more physical devices than a logical device with lower priority. When a failure occurs, the logical device data stored in the cache memory is stored in the plurality of physical devices associated with that logical device.
According to the present invention, when a failure occurs in a storage device, dirty data in the cache memory in the storage can be quickly reflected to the disk devices, and the loss of important, high-priority data can be avoided.
Further, according to the present invention, when a failure occurs in the storage apparatus, performance degradation of important, high-priority business operations can be avoided as much as possible.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
In the embodiment of the present invention, dirty data that is on the cache memory 112 and not yet reflected to the disk devices 114 is reflected to the disk devices 114 immediately when a failure occurs in the storage 110. To this end, the assignment of logical devices to physical devices is optimized in advance. Further, when a failure occurs in the storage 110, the storage 110 reflects the dirty data to the disk devices 114 in the priority order set in advance for the logical devices through the management terminal 140, which is operated by input from the storage administrator. In addition, when a failure occurs in the storage 110, the storage 110 schedules jobs in the storage based on priorities set in advance through the management terminal 140.
Next, an embodiment of the present invention will be described with reference to FIGS.
FIG. 1 explains an embodiment of a computer system (storage device) 1 according to the present invention. The computer system 1 includes a host 100, a storage 110, and a management terminal 140. The host 100 makes an input / output request to the storage 110. The storage 110 reads / writes data from / to the disk device 114 via the cache memory 112 held by the storage 110 in response to an input / output request from the host 100. The management terminal 140 receives input from the storage administrator and manages the operation of the storage 110.
Next, the configuration of the storage 110 will be described. The storage 110 includes channel adapters 120, disk adapters 130, an interconnection network (consisting of connection lines) 111, a cache memory 112, a shared memory 113, disk devices 114, connection lines 115, power supplies 116, standby power supplies (batteries) 117, and power lines 118. Between the disk adapters 130 and the disk devices 114, each disk device is connected to two disk adapters 130 over different interconnection networks 111 so that the disk device 114 can still be used when a failure occurs in one disk adapter or one interconnection network.
The channel adapter 120 controls data transfer between the host 100 and the cache memory 112. The disk adapter 130 controls data transfer between the cache memory 112 and the disk device 114. The cache memory 112 is a memory that temporarily stores data received from the host 100 or data read from the disk device 114. Normally, at the time of a write request from the host 100, a write-after method is used in which a write completion report is sent to the host 100 when data is written to the cache memory 112 for the purpose of improving response performance to the host 100. The shared memory 113 is a memory shared by all channel adapters 120 and disk adapters 130.
In a storage using disk devices 114, particularly a storage controlling a disk array such as a RAID (Redundant Array of Independent Disks), logical devices to be provided to the host 100 are defined on the disk devices 114, which are the physical devices actually mounted, and data is stored according to that definition and data format.
The channel adapters 120, the disk adapters 130, the cache memory 112, the shared memory 113, the power supplies 116, and the standby power supplies 117 are duplicated to cope with failures. The channel adapter 120 receives input/output requests from the host 100 via the port 121 and controls data transfer with the host 100. The disk adapter 130 controls data transfer with the disk devices 114. Both the channel adapter 120 and the disk adapter 130 transfer data via the cache memory 112. The power supply 116 and the standby power supply 117 supply power to the channel adapters 120, the disk adapters 130, the cache memory 112, the shared memory 113, and the disk devices 114 via the power lines 118. The cache memory 112 is a memory that temporarily stores write data from the host 100 and read data from the disk devices 114. Normally, for a write request from the host 100, the write-after method, which reports write completion to the host 100 as soon as the data is written to the cache memory 112, is used in order to improve response performance to the host 100. Furthermore, the cache memory 112 is divided into fixed-size units for management, and each divided unit is called a segment. The state of each segment is managed by a segment management table 6 described later. The shared memory 113 is a memory shared by all the channel adapters 120 and disk adapters 130. The shared memory 113 holds information for controlling data on the cache memory 112, information for controlling jobs operating on the control processors of the channel adapters 120, and information for controlling jobs operating on the control processors of the disk adapters 130. That is, as shown in FIG. 2, the shared memory 113 stores information such as the logical device management table 2, the LU path management table 3, the physical device management table 4, the slot management table 5, the segment management table 6, the channel job management table 7, the disk job management table 8, the host management table 9, the access pattern management table 10, the business management table 11, and the scheduling management table 12.
The management terminal 140, such as a PC, includes storage management program operation means 142 to 145 and input / output means 141 for the storage administrator, and serves as the storage administrator's interface, via the I / Fs 146 and 119, for maintenance operations on the storage 110 such as attribute settings related to logical devices.
As described above, the storage 110 has a port 121 that is an interface with the host 100, and includes a control device consisting of the channel adapter 120 and the disk adapter 130 that process input / output requests exchanged with the host. The control devices 120 and 130 are connected to the disk device 114 by a connection line 115, and are further connected to the cache memory 112 and the shared memory 113 by an interconnection network 111. The storage 110 also comprises a management terminal 140, connected by the I / F 119, that receives various parameters from the storage administrator (the parameters of the logical device definition processing, LU path definition processing, logical device priority definition processing, host priority definition processing, and business priority definition processing shown in FIGS. 14, 15, 16, 17, and 18), and the shared memory 113 that stores the various parameters received at the management terminal 140 as the management tables 2 to 11.
Next, an example of the logical device management table 2 will be described with reference to FIG. 3. The logical device management table 2 includes a logical device number 201, a size 202, a device state 203, a physical device number 204, a connection host number 205, a port number / target ID / LUN 206, a logical device priority 207, extended logical device information 208, and data saving information 209 at the time of failure occurrence.
The size 202 stores the capacity of the logical device. For example, in FIG. 3, sizes of 1 GB, 2 GB, ... are stored for the logical device numbers 1, 2, ....
In the device status 203, information indicating the status of the logical device is set. The status includes “online”, “offline”, “unimplemented”, and “failure offline”. “Online” indicates that the logical device is operating normally and can be accessed from the host 100. “Offline” indicates that the logical device is defined and operating normally, but cannot be accessed from the host 100 because the LU path is not defined. “Unimplemented” indicates that the logical device is not defined and cannot be accessed from the host 100. “Failure offline” indicates that a failure has occurred in the logical device and access from the host 100 is not possible. When a failure is detected for a logical device that is “online”, “failure offline” is set for the logical device. For example, in FIG. 3, “online”, “online”, ... are designated as the device states for the logical device numbers 1, 2, ....
The physical device number 204 stores the physical device number corresponding to the logical device. For example, in FIG. 3, physical device numbers 1, 2, ... are stored for the logical device numbers 1, 2, ....
The connected host number 205 is a host number that identifies the host 100 that is permitted to access the logical device. For example, in FIG. 3, host numbers 1, 2, ... are stored for the logical device numbers 1, 2, ....
In the port number of the entry 206, information indicating to which of the plurality of ports 121 the logical device is connected is set. Each port 121 is assigned a unique number in the storage 110, and the number of the port 121 for which the logical device's LUN is defined is recorded. The target ID and LUN of the entry are identifiers for identifying the logical device. Here, the SCSI-ID and LUN used when the host 100 accesses the device over SCSI are used as these identifiers. For example, for the logical device numbers 1, 2, ..., port numbers 1, 2, ... and target ID / LUN 0/0, 0/0, ... are specified.
The logical device priority 207 is information indicating the priority of the data when a failure occurs in the storage 110. For example, numerical values from the highest priority value 1 to the lowest priority value 5 are used; priorities 1, 5, ... are specified for the logical device numbers 1, 2, ....
The extended logical device information 208 is information for connecting a plurality of logical devices and providing them to the host 100 as a single logical device (referred to as an extended logical device). A list of the logical device numbers constituting the extended logical device is specified; when extended logical devices are not used, “undefined” is specified. For example, in FIG. 3, the logical device number 5 is composed of the logical device numbers 5, 6, and 7 and is provided to the host 100 as a 12 GB logical device, the total size of the component logical devices.
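The address translation implied by an extended logical device can be illustrated with a short sketch. The 4 GB component sizes below are assumptions chosen so that devices 5, 6, and 7 total the 12 GB of the FIG. 3 example; the function name is illustrative, not part of the patent.

```python
# Sketch of extended-logical-device address translation. Component
# sizes (4 GB each) are assumed for illustration; the patent gives
# only the 12 GB total for the extended device of Fig. 3.

GB = 1024 ** 3

# component list: (logical device number, size in bytes)
EXTENDED_DEVICE = [(5, 4 * GB), (6, 4 * GB), (7, 4 * GB)]

def resolve(extended_offset):
    """Map an offset in the extended device to (logical device, offset)."""
    for ldev, size in EXTENDED_DEVICE:
        if extended_offset < size:
            return ldev, extended_offset
        extended_offset -= size
    raise ValueError("offset beyond extended device size")
```

For example, an offset just past the first 4 GB lands at the start of logical device 6.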
The data saving information 209 at the time of failure occurrence is information for managing whether the dirty data on the cache memory 112 relating to each logical device has been completely saved to the disk device 114 when a failure occurs. The status includes “undefined”, “incomplete”, and “completed”. “Undefined” indicates that no failure has occurred. “Incomplete” indicates that a failure has occurred but saving of the dirty data in the cache memory 112 has not been completed. “Completed” indicates that a failure has occurred and saving of the dirty data in the cache memory 112 has been completed. The data saving information 209 at the time of failure occurrence is normally set to “undefined”, but when a failure is detected, “incomplete” is set for all logical devices having dirty data on the cache memory 112.
Next, an example of the LU path management table 3 will be described with reference to FIG. 4. The LU path management table 3 includes a port number 301, a target ID / LUN 302, a connection host number 303, and a corresponding logical device number 304, and holds information for the valid LUNs of each port 121 in the storage 110. The target ID / LUN 302 stores the LUN address defined on the port 121. The connection host number 303 is information indicating the hosts 100 that are permitted to access the LUN of the port 121. When LUNs of a plurality of ports 121 are defined for one logical device, the union of the connection host numbers of all those LUNs is held in the connection host number 205 of the logical device management table 2. The corresponding logical device number 304 stores the number of the logical device to which the LUN is assigned. For example, for the port numbers 1, 2, ..., 0 is specified for the target ID, 0 for the LUN, 1 for the connection host number, and 1 for the corresponding logical device number.
Next, an example of the physical device management table 4 will be described with reference to FIG. 5. The physical device management table 4 includes a physical device number 401, a size 402, a corresponding logical device number list 403, a device status 404, a RAID configuration 405, a disk number list 406, a stripe size 407, an in-disk size 408, and an in-disk start offset 409. The size 402 stores the capacity of the physical device specified by the physical device number 401. The corresponding logical device number list 403 stores a list of the logical device numbers in the storage 110 corresponding to the physical device. If no logical device is assigned, an invalid value is set in the entry. In the device status 404, information indicating the status of the physical device is set. The status includes “online”, “offline”, “unimplemented”, and “failure offline”. “Online” indicates that the physical device is operating normally and assigned to a logical device. “Offline” indicates that the physical device is defined and operating normally, but is not assigned to a logical device. “Unimplemented” indicates that the physical device is not defined on the disk device 114. “Failure offline” indicates that a failure has occurred in the physical device and it cannot be assigned to a logical device. In this embodiment, for the sake of simplicity, it is assumed that physical devices are created on the disk devices 114 in advance when the product is shipped from the factory. For this reason, the initial value of the device status 404 is “offline” for available physical devices, and the “unimplemented” status is set for the others. The RAID configuration 405 holds information related to the RAID configuration, such as the RAID level of the disk devices 114 to which the physical device is allocated and the numbers of data disks and parity disks. Similarly, the stripe size 407 holds the data division unit (stripe) length in the RAID.
The disk number list 406 holds the numbers of a plurality of disk devices 114 constituting the RAID to which the physical device is assigned. This number is a unique value assigned to identify the disk device 114 in the storage 110. The in-disk size 408 and the in-disk start offset 409 are information indicating to which area in each disk device 114 the physical device data is allocated. In this embodiment, for the sake of simplicity, the offsets and sizes in the respective disk devices 114 constituting the RAID are unified for all physical devices.
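The in-disk addressing implied by the disk number list 406, stripe size 407, and in-disk start offset 409 can be sketched as follows. This is an illustrative RAID 0-style data mapping that deliberately ignores parity placement (e.g., RAID 5 rotation); the function name and parameter values are assumptions, not the patent's exact layout.

```python
# Illustrative mapping from a physical-device offset to a disk number
# and an in-disk offset, using the stripe size, disk number list and
# in-disk start offset of the physical device management table.
# Parity placement is deliberately ignored in this sketch.

def map_to_disk(offset, disk_numbers, stripe_size, in_disk_start_offset):
    stripe_index, within = divmod(offset, stripe_size)
    row, col = divmod(stripe_index, len(disk_numbers))
    disk = disk_numbers[col]
    in_disk_offset = in_disk_start_offset + row * stripe_size + within
    return disk, in_disk_offset
```

Successive stripes rotate across the disks in the disk number list, while each disk's data begins at the common in-disk start offset, matching the unified offsets described in the embodiment.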
Next, an example of the slot management table 5 will be described with reference to FIG. 6. The host 100 uses a logical address to specify data of a logical device in the storage 110. The logical address includes, for example, a logical device number and position information within the logical device. In the storage 110, the continuous logical address space is divided into fixed-size units for management, and each divided unit of the logical address space is called a slot. The division size of the logical address space is called the slot size. For example, the slot number is obtained by adding 1 to the quotient obtained by dividing the logical address by the slot size. The slot management table 5 includes a slot number 501, a segment number list 502, a slot attribute 503, a logical device number 504, a host number 505, and lock information 506. The segment number list 502 holds the segment numbers of the segments (divided cache memory) included in the slot in list format. For example, FIG. 6 shows an example in which one slot is composed of four segments. If there is no segment at the corresponding position in the slot, 0 is specified as an invalid value. The slot attribute 503 holds the attribute of the slot. The attributes are “clean”, “dirty”, and “free”. “Clean” means that the data on the cache memory 112 held by the slot matches the data on the disk device 114. “Dirty” means that the data on the cache memory 112 held by the slot is not yet reflected in the disk device 114. “Free” means that the slot is not in use. The logical device number 504 holds the logical device number corresponding to the slot. The host number 505 holds the host number of the host 100 that made the input / output request corresponding to the slot. The lock information 506 holds lock information for exclusion between the channel adapter 120 and the disk adapter 130 on the slot. The states “on” and “off” exist; “on” means that the slot is locked, and “off” means that the slot is not locked.
Next, an example of the segment management table 6 will be described with reference to FIG. 7. The segment management table 6 includes a segment number 601 and block information 602. The block information 602 indicates, for each unit of access from the host 100 (hereinafter referred to as a block), whether the data held at the corresponding position of the segment is valid or invalid. In the example of the segment management table shown in FIG. 7, the segment size is 2048 bytes and the block size is 512 bytes, so the segment management table 6 has four pieces of block information per segment; the block positions 1 and 3 of the segment number 1 are valid, that is, there is valid data in the 512 bytes from the beginning of the segment and in the 512 bytes starting 1024 bytes from the beginning.
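The slot / segment / block decomposition described above can be sketched as follows, assuming the FIG. 7 example sizes (2048-byte segments, 512-byte blocks, four segments per slot) and the 1-based numbering used in the text; the function name is illustrative.

```python
# Sketch of the logical-address decomposition described above, using
# the Fig. 7 example sizes. Slot numbers and segment/block positions
# are 1-based, matching the text.

SEGMENT_SIZE = 2048
BLOCK_SIZE = 512
SEGMENTS_PER_SLOT = 4
SLOT_SIZE = SEGMENT_SIZE * SEGMENTS_PER_SLOT  # 8192 bytes

def decompose(logical_address):
    slot_number = logical_address // SLOT_SIZE + 1
    in_slot = logical_address % SLOT_SIZE
    segment_position = in_slot // SEGMENT_SIZE + 1
    block_position = (in_slot % SEGMENT_SIZE) // BLOCK_SIZE + 1
    return slot_number, segment_position, block_position
```

This is the same arithmetic used later in the read job processing (step 2002) to locate the read request target data in the cache.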
Next, an embodiment of the channel job management table 7 will be described with reference to FIG. 8. A channel job refers to a job that operates on the channel adapter 120. The channel job management table 7 includes a job number 701, a process 702, a logical device number 703, a transfer start position 704, a transfer length 705, and a host number 706. The process 702 holds the processing content of the job; the processing contents include “read” and “write”. The logical device number 703 holds the logical device number to be processed by the job. The transfer start position 704 holds the address on the logical device to be processed by the job. The transfer length 705 holds the access length to be processed by the job. The host number 706 holds the host number of the host 100 corresponding to the job's processing.
Next, an embodiment of the disk job management table 8 will be described with reference to FIG. 9. A disk job refers to a job that operates on the disk adapter 130. The disk job management table 8 includes a job number 801, a process 802, a logical device number 803, a transfer start position 804, a transfer length 805, a host number 806, and a channel adapter number 807. Description of the entries identical to those described for FIG. 8 is omitted. The channel adapter number 807 holds the channel adapter number of the channel adapter that is the request source of the job.
Next, an example of the host management table 9 will be described with reference to FIG. 10. The host management table 9 includes a host number 901, a host name / WWN 902, and a host priority 903. The host name / WWN 902 is information for uniquely identifying the host. The host priority 903 reflects the importance of input / output processing on the host 100 and is used as a priority when scheduling jobs on the channel adapters and disk adapters. A job with a high priority is, for example, one that performs processing requiring a high-speed response, such as online transaction processing in non-stop operation; a job with a low priority is, for example, one that performs processing not requiring a fast response, such as nightly batch processing.
Next, an embodiment of the access pattern management table 10 will be described with reference to FIG. 11. The access pattern management table 10 includes a logical device number 1001, a read count 1002, a write count 1003, a read hit count 1004, a write hit count 1005, a sequential read count 1006, a sequential write count 1007, and dirty amount management information 1008. The read count 1002 indicates the read count for the logical device. The write count 1003 indicates the write count for the logical device. The read hit count 1004 indicates the read hit count for the logical device. The write hit count 1005 indicates the write hit count for the logical device. The sequential read count 1006 indicates the sequential read count for the logical device. The sequential write count 1007 indicates the sequential write count for the logical device. The dirty amount management information 1008 holds information regarding the amount of dirty data on the cache memory 112 for each logical device; for example, it holds the average dirty data amount over the past 24 hours and the average over the past hour.
Next, an embodiment of the business management table 11 will be described with reference to FIG. 12. In the present embodiment, a business is defined by a pair of a host and a logical device. In general, a business may be defined at a different granularity, but the present embodiment uses the above definition as an example. The business management table 11 includes a business number 1101, a logical device number 1102, a host number 1103, and a business priority 1104. The logical device number 1102 and the host number 1103 indicate the pair of logical device and host corresponding to the business. The business priority 1104 is information indicating the priority of the data when a failure occurs in the storage 110; for example, numerical values from the highest priority value 1 to the lowest priority value 5 are used. For example, in FIG. 12, for the business number 1, 0 is specified for the logical device number, 0 for the host number, and 1 for the business priority. Scheduling of jobs on the channel adapters and disk adapters is performed based on this priority. An example of the meaning of a high job priority is the same as that described for the host management table 9.
Next, an example of the scheduling management table 12 will be described with reference to FIG. 13. The scheduling management table 12 manages the scheduling parameters for job scheduling at the time of a failure, and designates scheduling ratios for the three priorities: the logical device priority 1201, the host priority 1202, and the business priority 1203. These ratios are used in the job scheduling process in the channel adapter (the process shown in FIG. 20) and the job scheduling process in the disk adapter (the process shown in FIG. 23), both described later. For example, in FIG. 13, ratios of 0.5, 0.3, and 0.2 are set for the logical device priority, the host priority, and the business priority, respectively; in the job scheduling process, the logical device priority queue is then used 5 times out of every 10.
Next, the logical device definition process 13, LU path definition process 14, logical device priority definition process 15, host priority definition process 16, and business priority definition process 17 that the storage administrator performs using the management terminal ST (140) will be described with reference to FIGS. 14 and 15 and FIGS. 16, 17, and 18. First, the storage administrator performs the logical device definition processing 13 and the LU path definition processing 14 using the management terminal ST (140), so that the host 100 can access the logical devices in the storage S (110). Thereafter, the storage administrator performs the logical device priority definition processing 15, the host priority definition processing 16, and the business priority definition processing 17 using the management terminal ST (140), so that the storage S (110) can appropriately perform its operation when a failure occurs based on each priority (logical device priority, host priority, and business priority). Hereinafter, in the figures, ST represents the management terminal 140 and S represents the storage 110.
First, an example of logical device definition processing using the management terminal 140 will be described with reference to FIG. 14. The logical device definition process 13 is a process of accepting an instruction from the management terminal 140 by the storage administrator and defining a logical device on the disk devices 114 held by the storage 110. First, the CPU 145 of the management terminal 140 receives the definition of the logical device (logical device number, physical device number, etc.) input from the storage administrator using the input device 141, the display 142, etc., and stores it in the memory 143 or the like (step 1301). Thereafter, the CPU 145 of the management terminal 140 determines whether or not the storage administrator has input a designation of the placement-destination physical device number as part of the definition of the logical device (step 1302). When, for a special reason, the placement-destination physical device number is designated, the CPU 145 of the management terminal 140 directly instructs the storage 110 to define the logical device via the I / Fs 146 and 119 (step 1305). The storage 110 that has received the instruction from the management terminal 140 sets the logical device number, size, device state, physical device number, start address, and logical device priority in the logical device management table 2 shown in FIG. 3. A default value, for example 3, is set for the logical device priority 207. Thereafter, a completion report is transmitted to the management terminal 140 (step 1306). As the device state, the initial value “offline” is set. Finally, the management terminal 140 receives the completion report from the storage 110 (step 1307).
When the placement-destination physical device number is not specified, the CPU 145 of the management terminal 140 copies the logical device management table 2 shown in FIG. 3 and the physical device management table 4 shown in FIG. 5 into the memory 143 of the management terminal 140, determines the placement-destination physical device by referring to the stored information (step 1303), and transfers the result to the storage 110, where it is set in the logical device management table 2 and the physical device management table 4 in the shared memory 113. Specifically, the placement-destination physical device is determined, transferred to the storage 110, and set in the logical device management table 2 and the physical device management table 4 so that the data on the cache memory 112 belonging to a logical device with a high priority can be saved to the disk device 114 as quickly as possible. As one determination method, for example, logical devices having the same priority are placed on a plurality of different physical devices (each consisting of one or a plurality of disk devices 114). As a result, access contention on a physical device does not occur when data is saved, and high-speed data saving can be expected. As another determination method, one extended logical device is composed of a plurality of logical devices as described above, and the physical devices on which the component logical devices are placed are distributed; again, access contention on a physical device does not occur when data is saved, and high-speed data saving can be expected. After that, the optimal (recommended) physical device as the logical device placement destination is presented to the storage administrator via the display 142 of the management terminal 140 (step 1304). The subsequent processing is the same as that from step 1305 onward.
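The placement policy described above — spreading logical devices of equal priority over distinct physical devices so that failure-time destaging does not contend on one device — can be sketched as follows. The round-robin rule and the data shapes are illustrative assumptions, not the patent's exact recommendation algorithm.

```python
# A minimal sketch of the placement recommendation: logical devices
# sharing the same priority are spread round-robin over distinct
# physical devices so that saving their dirty data after a failure
# does not contend on a single physical device.

from collections import defaultdict

def recommend_placement(logical_devices, physical_devices):
    """logical_devices: list of (ldev number, priority);
    physical_devices: list of physical device numbers.
    Returns {ldev: pdev}."""
    by_priority = defaultdict(list)
    for ldev, prio in logical_devices:
        by_priority[prio].append(ldev)
    placement = {}
    for prio in sorted(by_priority):
        for i, ldev in enumerate(by_priority[prio]):
            placement[ldev] = physical_devices[i % len(physical_devices)]
    return placement
```

With two physical devices and two logical devices of priority 1, each priority-1 device lands on its own physical device, so both can be destaged in parallel.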
That is, in the logical device definition processing 13, the management terminal 140 receives the priority (first priority) of a logical device provided to the host and, based on that priority, distributes the plurality of logical devices among a plurality of physical devices each configured from one or more disk devices (a logical device with a high first priority is spread over more physical devices than a logical device with a low first priority), and transfers the result to the storage 110 to be set in the logical device management table 2 and the physical device management table 4 in the shared memory 113. Therefore, by performing the logical device definition processing 13 in advance, the control device 130 of the storage 110 can, when a failure occurs, quickly reflect the important data in the disk device among the dirty data, that is, the data written from the host 100 that remains on the cache memory 112 and is not yet reflected in the disk device.
Next, an example of LU path definition processing using the management terminal 140 will be described with reference to FIG. 15. The LU path definition process 14 is a process for accepting an instruction from the management terminal 140 by the storage administrator and setting a logical device provided by the storage 110 into a state accessible from the host 100. First, the CPU 145 of the management terminal 140 receives an LU path definition instruction using the input device 141 or the like and transfers the received instruction to the storage 110 (step 1401). The instruction includes the port number defining the LU, the LUN, the connection host number, and the target logical device number. In accordance with this instruction, the storage 110 sets a value for each entry in the LU path management table 3 shown in FIG. 4 and then sends a completion report to the management terminal 140 (step 1402). Finally, the management terminal 140 receives the completion report from the storage 110 (step 1403).
Next, an example of logical device priority definition processing using the management terminal 140 will be described with reference to FIG. 16. The logical device priority definition process 15 is a process for defining the order in which dirty data on the cache memory 112 is reflected in the disk device 114 when a failure occurs. First, the CPU 145 of the management terminal 140 receives the logical device priority from the storage administrator using the input device 141. The input parameters include a logical device number and a logical device priority. Thereafter, the management terminal 140 transmits the input parameters to the storage 110 (step 1501). The storage 110 that has received the input parameters from the management terminal 140 sets the logical device number and logical device priority in the logical device management table 2 shown in FIG. 3 based on the input parameters, and notifies the management terminal 140 that registration has been completed (step 1502). Upon receiving the registration completion report from the storage 110, the management terminal 140 reports the completion to the storage administrator (step 1503). Therefore, when a failure occurs, the control device 130 of the storage 110 can reflect the dirty data on the cache memory 112 to the disk device 114 in the order based on the logical device priority set by the logical device priority definition processing 15.
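The destage ordering that this priority enables can be sketched as follows: dirty slots are flushed in ascending priority value, with 1 the most important, per the logical device priority 207. The data shapes and function name are illustrative assumptions.

```python
# Sketch of the failure-time destage order implied by the logical
# device priority: dirty slots are written to disk in ascending
# priority value (1 = most important).

def destage_order(dirty_slots, ldev_priority):
    """dirty_slots: list of (slot number, logical device number);
    ldev_priority: {ldev: priority, 1 = highest}.
    Returns slot numbers in the order they should be destaged."""
    return [slot for slot, _ in
            sorted(dirty_slots, key=lambda s: ldev_priority[s[1]])]
```

Python's stable sort keeps slots of equal priority in their original order, so slots within one logical device are not reordered.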
Next, an example of host priority definition processing using the management terminal 140 will be described with reference to FIG. 17. The host priority definition process 16 is a process for defining the processing priority of input / output requests from the host 100. First, the CPU 145 of the management terminal 140 receives the priority of the host 100 from the storage administrator via the input device 141. The input parameters include a host number and a priority. Thereafter, the management terminal 140 transmits the input parameters to the storage 110 (step 1601). The storage 110 that has received the input parameters from the management terminal 140 sets the host priority 903 of the host management table 9 shown in FIG. 10 based on the input parameters and notifies the management terminal 140 that registration has been completed (step 1602). Upon receiving the registration completion report from the storage 110, the management terminal 140 reports the completion to the storage administrator (step 1603).
Next, an example of business priority definition processing using the management terminal 140 will be described with reference to FIG. 18. First, in order to avoid performance degradation of important business when a failure occurs, the CPU 145 of the management terminal 140 receives a business priority (second priority) from the storage administrator using the input device 141. The input parameters include a business number, a combination of a logical device number and a host number, and a business priority. Thereafter, the management terminal 140 transmits the input parameters to the storage 110 (step 1701). The storage 110 that has received the input parameters from the management terminal 140 sets the logical device number 1102, the host number 1103, and the business priority 1104 of the business management table 11 shown in FIG. 12 based on the input parameters, and notifies the management terminal 140 that registration has been completed (step 1702). Upon receiving the registration completion report from the storage 110, the management terminal 140 reports the completion to the storage administrator (step 1703). As described above, since the business priority (second priority) is set in the storage 110 by the management terminal 140, job scheduling in the storage is performed using the set business priority when a failure occurs, and performance degradation of important business can be avoided. The business priority may also be a host priority.
Next, an example of channel adapter port processing in the channel adapter 120 will be described with reference to FIG. 19. In the channel adapter port processing 18, the channel adapter 120 receives a command from the host 100 via the port 121 and registers a job in the channel job management table 7 shown in FIG. 8. In this processing, the job is enqueued in a FIFO queue, a logical device priority queue, a host priority queue, and a business priority queue (not shown) (step 1801). The priority queues are realized by a known data structure such as a balanced tree or a B-tree. The priority values used when enqueuing into the logical device priority queue, host priority queue, and business priority queue are, respectively, the logical device priority 207 (in the logical device management table 2 shown in FIG. 3) of the logical device corresponding to the read / write command from the host 100, the host priority 903 in the host management table 9 shown in FIG. 10, and the business priority 1104 in the business management table 11 shown in FIG. 12.
Next, an example of job scheduling processing in the channel adapter 120, which is a control apparatus according to the present invention, will be described with reference to FIG. 20. In the job scheduling process 19 in the channel adapter 120, the channel adapter 120 first determines whether or not a failure has occurred (step 1901). If no failure has occurred, the job located at the head of the FIFO queue (not shown) to which the jobs of the channel job management table 7 shown in FIG. 8 are connected is dequeued and executed (step 1902). At this time, the job is also dequeued from the logical device priority queue, host priority queue, and business priority queue (not shown). When a failure has occurred, the channel adapter 120 dequeues and executes the job having the highest priority in one of the logical device priority queue, host priority queue, and business priority queue (not shown) to which the jobs of the channel job management table 7 shown in FIG. 8 are connected (step 1903). The selection of the priority queue uses the ratios (set by the management terminal 140) of the logical device priority 1201, the host priority 1202, and the business priority 1203 in the scheduling management table 12 shown in FIG. 13. At this time, the job is also dequeued from the FIFO queue (not shown). Scheduling jobs using only the priority queues would execute only jobs with higher priorities and might leave jobs with lower priorities unexecuted for a long period; to avoid such starvation, the FIFO queue is used during normal operation. The job executed in the job scheduling process 19 in the channel adapter 120 described above executes the read job process 20 shown in FIG. 21 or the write job process 21 shown in FIG. 22.
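A simplified sketch of this dual-queue scheduling (one FIFO queue plus three priority queues, with ratio-driven queue selection after a failure) follows. The class layout, the deterministic 5:3:2 rotation, and the lazy-deletion bookkeeping are illustrative assumptions; the patent specifies only the queues and the ratios.

```python
# Sketch of the dual-queue scheduling of Figs. 19-20: every job sits
# in a FIFO queue and in three priority queues (logical device / host /
# business priority). Normal operation uses FIFO order; after a
# failure, one priority queue is chosen per the scheduling ratios and
# its highest-priority (lowest value) job runs. Jobs are removed from
# the other queues lazily via the `done` set.

import heapq
from collections import deque
from itertools import count

class Scheduler:
    def __init__(self, ratios=(0.5, 0.3, 0.2)):
        self.fifo = deque()
        self.queues = [[], [], []]          # ldev / host / business heaps
        self.ratios = ratios
        self.done = set()
        self.seq = count()                  # tie-breaker for heapq
        self.picks = 0

    def enqueue(self, job, ldev_prio, host_prio, biz_prio):
        self.fifo.append(job)
        for heap, prio in zip(self.queues, (ldev_prio, host_prio, biz_prio)):
            heapq.heappush(heap, (prio, next(self.seq), job))

    def _pick_queue(self):
        # deterministic weighted rotation, e.g. 5:3:2 out of every 10 picks
        slot = self.picks % 10
        self.picks += 1
        if slot < self.ratios[0] * 10:
            return self.queues[0]
        if slot < (self.ratios[0] + self.ratios[1]) * 10:
            return self.queues[1]
        return self.queues[2]

    def dequeue(self, failure=False):
        if not failure:
            while self.fifo:                # normal operation: FIFO order
                job = self.fifo.popleft()
                if job not in self.done:
                    self.done.add(job)
                    return job
            return None
        heap = self._pick_queue()           # failure: ratio-selected queue
        while heap:
            _, _, job = heapq.heappop(heap)
            if job not in self.done:
                self.done.add(job)
                return job
        return None
```

During normal operation the FIFO order prevents starvation of low-priority jobs; after a failure the ratio rotation favors the logical device priority queue, as in the FIG. 13 example.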
Next, an example of read job processing in the channel adapter 120 will be described with reference to FIG. 21. In the read job processing 20 in the channel adapter 120, if the data targeted by the read request from the host 100 is on the cache memory 112, the data is transferred to the host 100; if not, the disk adapter 130 is requested to stage the read request target data. First, the channel adapter 120 checks the device status of the read request target logical device (step 2001). If it is other than “online” (error), it sends an error to the host 100 and ends the process. If there is no error, the channel adapter 120 analyzes the request from the job scheduling process 19 and calculates the slot number, segment position, and block position corresponding to the read request target data (step 2002). Thereafter, the slot corresponding to the slot number is referred to and updated; the lock of the slot is first acquired so that other channel adapters 120 and disk adapters 130 do not access the slot at the same time (step 2003). Specifically, “on” is set in the lock information of the slot management table 5 shown in FIG. 6. Since subsequent lock acquisition processing is the same as the processing described here, its description is omitted. Next, hit / miss determination of the read request target data is performed (step 2004). Specifically, the segment number list corresponding to the target slot number is referred to in the slot management table 5 to obtain the segment number at the target segment position, and the block information corresponding to that segment number is then referred to in the segment management table 6 to determine whether the data at the target block position is valid or invalid.
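The hit / miss determination of step 2004 — following the slot's segment number list into the segment management table and testing the block information — can be sketched as follows. The in-memory table layouts are simplified stand-ins for FIGS. 6 and 7, populated with the FIG. 7 example (blocks 1 and 3 of segment 1 valid).

```python
# Sketch of the step-2004 hit/miss determination. Table layouts are
# simplified stand-ins for the slot and segment management tables.

# slot number -> list of 4 segment numbers (0 = no segment assigned)
slot_table = {1: [1, 0, 0, 0]}
# segment number -> per-block validity (True = valid), 4 blocks/segment
segment_table = {1: [True, False, True, False]}

def is_hit(slot_number, segment_position, block_position):
    """Positions are 1-based, matching the text."""
    segments = slot_table.get(slot_number)
    if segments is None:
        return False                      # no slot: miss
    seg = segments[segment_position - 1]
    if seg == 0:
        return False                      # no segment assigned: miss
    return segment_table[seg][block_position - 1]
```

A hit allows the data to be transferred straight from the cache memory; any miss leads to the staging request of the miss path described next.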
If the data is valid (a hit) in step 2004, the channel adapter 120 first updates the access information (step 2005). Specifically, the read count and read hit count of the logical device in the access pattern management table 10 corresponding to the read request target data are each incremented by one. Further, it is determined from sequential learning information (read access information for contiguous areas) on the shared memory 113 (not shown) whether the read request is a sequential read. A sequential read is a series of read accesses from the host 100 that are contiguous in the address space of the logical device. When a sequential read is detected, data beyond the last read access to the logical device is prefetched asynchronously with respect to requests from the host 100. This raises the probability that subsequent synchronous read requests hit the cache, so faster read access can be expected. If the request is a sequential read, the sequential read count of the logical device in the access pattern management table 10 is incremented by one. Thereafter, the slot is moved to the MRU end of the clean queue (step 2006). Next, based on the sequential learning information, if the read request is a sequential read, one or more prefetch jobs are registered in the disk job management table 8 (step 2007). Specifically, each job is enqueued in the FIFO queue, the logical device priority queue, the host priority queue, and the business priority queue in the same manner as described above. Finally, the target data is transferred to the host 100 (step 2008), and the slot lock is released (step 2009). Specifically, "OFF" is set in the lock information of the slot management table 5. Since every lock release below follows the same procedure, its description is not repeated. The above is the processing in the case of a hit.
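Sequential-read learning and prefetching (steps 2005-2007) can be sketched as follows. The run-length threshold and the block-address model are assumptions of the sketch, not values taken from the patent.

```python
class SequentialDetector:
    """Detect contiguous read runs and propose blocks to prefetch."""
    def __init__(self, threshold=3):
        self.last_end = None      # end address of the previous read
        self.run = 0              # length of the current contiguous run
        self.threshold = threshold

    def access(self, start, length):
        """Record a read; return addresses to prefetch, or [] if not sequential."""
        if self.last_end == start:
            self.run += 1         # contiguous with the previous access
        else:
            self.run = 1          # run broken: start counting again
        self.last_end = start + length
        if self.run >= self.threshold:
            # prefetch the next `length` blocks past the last access
            return list(range(self.last_end, self.last_end + length))
        return []
```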
If the data is invalid (a miss) in step 2004, the channel adapter 120 first updates the access information (step 2010). Specifically, the read count of the logical device in the access pattern management table 10 corresponding to the read request target data is incremented by one, and the sequential read count is updated in the same way as for a hit. Next, the required number of cache segments is newly secured (step 2011). A cache segment is newly allocated either from the queue (free queue, not shown) that manages unused cache segments, or from a segment held by a slot whose slot attribute is "clean" and which is queue-managed by a known technique such as the LRU algorithm. Next, a job is registered in the disk job management table 8 (step 2012); specifically, the job is enqueued in the FIFO queue and the priority queues in the same manner as described above. Next, prefetch processing is performed based on the sequential learning information; the specific procedure is the same as in the hit case (step 2013). Thereafter, the slot lock is released (step 2014), and the channel adapter waits until the disk adapter 130 completes the staging (step 2015). After receiving the staging completion report from the disk adapter 130 (step 2016), the process restarts from step 2003. Since staging has by then completed, the subsequent hit/miss determination results in a hit, and the remaining processing is the same as in the hit case.
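The miss path — secure a segment from the free queue or reuse the clean LRU segment, stage the data, then treat the retry as a hit — can be sketched like this. Names and the capacity model are hypothetical, and staging is performed synchronously here instead of via a registered disk job.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read cache with a clean queue managed in LRU order."""
    def __init__(self, capacity, backing):
        self.capacity = capacity     # number of cache segments available
        self.clean = OrderedDict()   # slot -> data, ordered LRU -> MRU (clean queue)
        self.backing = backing       # stands in for the disk device
        self.hits = 0
        self.misses = 0

    def read(self, slot):
        if slot in self.clean:                    # hit (step 2004: data valid)
            self.hits += 1
            self.clean.move_to_end(slot)          # queue transition to MRU (step 2006)
            return self.clean[slot]
        self.misses += 1                          # miss (step 2004: data invalid)
        if len(self.clean) >= self.capacity:      # no free segment left:
            self.clean.popitem(last=False)        #   reuse the clean LRU segment (step 2011)
        data = self.backing[slot]                 # staging (steps 2012-2016, synchronous here)
        self.clean[slot] = data
        return data
```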
Next, an example of write job processing in the channel adapter 120 will be described with reference to FIG. 22. In the write job process 21, the channel adapter 120 receives the write request target data from the host 100 and stores it in the cache memory 112; if necessary, it requests the disk adapter 130 to destage the target data, and then reports completion to the host 100. First, the channel adapter 120 checks the device status of the write request target logical device (step 2101). If the status is anything other than "online" (an error), it reports an error to the host 100 and ends the process. If there is no error, the channel adapter 120 analyzes the request passed from the job scheduling process 19 and calculates the slot number, segment position, and block position corresponding to the write request target data (step 2102). Thereafter, the lock of the slot corresponding to the slot number is acquired for the same reason as described with reference to FIG. 21 (step 2103). Next, a hit/miss determination for the write request target data is performed in the same manner as described with reference to FIG. 21 (step 2104).
In the case of a hit, the channel adapter 120 first updates the access information (step 2105). Specifically, the write count and write hit count of the logical device in the access pattern management table 10 corresponding to the write request target data are each incremented by one. Further, it is determined from the sequential learning information on the shared memory 113 (not shown) whether the write request is a sequential write, that is, part of a series of write accesses from the host 100 that are contiguous in the address space of the logical device. If so, the sequential write count of the logical device in the access pattern management table 10 is incremented by one. Next, a transfer preparation completion message is transmitted to the host 100 (step 2106). Thereafter, the write request target data is received from the host 100, stored in the cache memory 112, and the slot is moved to the MRU end of the dirty queue (step 2107).
In the case of a miss, the channel adapter 120 first updates the access information (step 2108). Specifically, the write count of the logical device in the access pattern management table 10 corresponding to the write request target data is incremented by one, and the sequential write count is updated in the same way as for a hit. Next, after the required number of cache segments is newly secured (step 2109), a transfer preparation completion message is transmitted to the host 100 (step 2110). Thereafter, the write request target data is received from the host 100, stored in the cache memory 112, and the slot is enqueued at the MRU end of the dirty queue (step 2111).
The subsequent processing differs depending on whether a synchronous write is required (step 2112). A synchronous write is required when the failure occurrence flag on the shared memory 113, described later, is "ON", and is not required when the flag is "OFF". If a synchronous write is not required, a write completion report is transmitted to the host 100 (step 2117). If a synchronous write is required, a job is registered in the disk job management table 8 (step 2113); specifically, the job is enqueued in the FIFO queue and the priority queues in the same manner as described above. Thereafter, the slot lock is released (step 2114), and the process waits until the disk adapter completes the destaging (step 2115). After receiving the destaging completion report from the disk adapter 130 (step 2116), the completion report is transmitted to the host 100 (step 2117). The synchronous write process guarantees that the data has actually been written to the disk device 114.
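The decision of step 2112 amounts to switching between write-back (acknowledge once the data is in cache) and write-through (acknowledge only after destaging) on the failure occurrence flag. A minimal sketch with hypothetical data structures:

```python
def write(slot, data, cache, dirty, disk, failure_flag):
    """Handle one host write; returns once completion may be reported."""
    cache[slot] = data                  # store write data in cache (steps 2107/2111)
    dirty.add(slot)                     # slot joins the dirty queue
    if failure_flag:                    # flag ON: synchronous write required (step 2112)
        disk[slot] = cache[slot]        # destage and wait for completion (steps 2113-2116)
        dirty.discard(slot)             # slot is now clean
    return "completed"                  # completion report to the host (step 2117)
```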
The job scheduling process 19 by the channel adapter 120 has been described above.
Next, an example of job scheduling processing in the disk adapter 130, which is a control apparatus according to the present invention, will be described with reference to FIG. 23. The job scheduling process 22 in the disk adapter 130 differs from the procedure described with reference to FIG. 20 in two points: the disk job management table 8 is used instead of the channel job management table 7, and the following processing of dirty data at the time of a failure is added; otherwise the procedure is the same. That is, the disk adapter 130 first determines whether or not a failure has occurred (step 2201). If no failure has occurred, it dequeues the job located at the end of the FIFO queue (not shown) to which the jobs of the disk job management table 8 shown in FIG. 9 are connected, and executes that job (step 2202). When a failure has occurred, the disk adapter 130 dequeues the job having the highest priority in one of the logical device priority queue, the host priority queue, and the business priority queue (not shown) to which the jobs of the disk job management table 8 shown in FIG. 9 are connected, and executes that job (step 2203). In step 2204, when a failure has occurred, the disk adapter 130 checks the failure-time data saving information 209 in the logical device management table 2; for each logical device for which "incomplete" is set, indicating that the dirty data on the cache memory 112 has not yet been completely saved to the disk device 114, it refers to the current value of the dirty data amount in the dirty amount management information 1008 of the access pattern management table 10, and if that value is zero, sets "complete" in the information 209. The job selected by the job scheduling process 22 in the disk adapter 130 executes either the read job process 24 shown in FIG. 25 or the write job process 25 shown in FIG. 26.
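Step 2204 can be sketched as follows, with a hypothetical per-device record standing in for the failure-time data saving information 209 of the logical device management table 2 and the dirty amount management information 1008 of the access pattern management table 10:

```python
def update_save_info(devices):
    """Mark failure-time save information 'complete' once dirty data hits zero."""
    for info in devices.values():
        if info["save_info"] == "incomplete" and info["dirty_amount"] == 0:
            info["save_info"] = "complete"   # all dirty data for this device saved
    return devices
```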
Next, an embodiment of asynchronous write job registration processing in the disk adapter 130 will be described with reference to FIG. 24. The asynchronous write job registration process 23 registers jobs for writing the write data stored in the cache memory 112 to the physical device. First, a target slot is selected from the LRU end of the dirty queue and dequeued from the dirty queue (step 2301). Next, a job is registered in the disk job management table 8 (step 2302); specifically, the job is enqueued in the FIFO queue and the priority queues in the same manner as described above.
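A minimal sketch of this registration, with hypothetical job and queue structures (the real controller also links the job into the logical device, host, and business priority queues):

```python
from collections import deque

def register_async_write(dirty_queue, job_table):
    """Dequeue the slot at the dirty-queue LRU end and register a destage job."""
    if not dirty_queue:
        return None                         # nothing dirty: no job to register
    slot = dirty_queue.popleft()            # step 2301: take the LRU slot
    job = {"type": "write", "slot": slot}
    job_table.append(job)                   # step 2302: enqueue in the disk job table
    return job
```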
Next, an example of read job processing in the disk adapter 130 will be described with reference to FIG. 25. In the read job process 24, the disk adapter 130 first analyzes the request passed from the job scheduling process 22 and calculates the slot number, segment position, and block position corresponding to the read request target data (step 2401). Thereafter, because the slot corresponding to that slot number will be referenced and updated, the lock of the slot is acquired so that other disk adapters 130 and channel adapters 120 do not access the slot at the same time (step 2402). Next, the read request target data is read from the physical device, stored in the cache memory 112, and the slot is enqueued at the MRU end of the clean queue (step 2403). The slot lock is then released (step 2404), and a staging completion report is transmitted to the channel adapter 120 corresponding to the channel job specified by the channel job number 807 of the disk job management table 8 (step 2405).
Next, an example of write job processing in the disk adapter 130 will be described with reference to FIG. 26. In the write job process 25, the disk adapter 130 first analyzes the request passed from the job scheduling process 22 and calculates the slot number, segment position, and block position corresponding to the write request target data (step 2501). Next, the slot lock is acquired (step 2502), and the dirty data held in the slot is written to the physical device (step 2503). Thereafter, the slot attribute of the slot is updated to "clean" and the slot is enqueued in the clean queue (step 2504). Finally, the slot lock is released (step 2505), and if the write job is a synchronous write, a destaging completion report is transmitted to the channel adapter 120 corresponding to the channel job specified by the channel job number 807 of the disk job management table 8 (step 2506).
The job scheduling process 22 by the disk adapter 130 has been described above.
Next, an example of failure processing according to the present invention will be described with reference to FIG. 27. The failure process 26 causes the control devices 120 and 130 to reflect the dirty data on the cache memory 112 for each logical device to the physical devices when the occurrence of a failure is detected. First, the disk adapter 130 determines whether or not to end the failure processing (step 2601); specifically, the process ends when the operation of the storage 110 is stopped. If the process does not end, the disk adapter 130 checks whether a failure has occurred in any part of the storage 110 (step 2602) and determines whether a failure has occurred (step 2603). If a failure has been detected, the failure-time data saving information 209 is set as described with reference to FIG. 3, and the failure occurrence flag on the shared memory 113 is set to "ON" (step 2604); in normal operation this flag is set to "OFF". Thereafter, the channel adapter 120 performs write job enqueue processing for the dirty data of each logical device (step 2605). Specifically, the channel adapter 120 refers to the current value of the dirty amount management information 1008 of the access pattern management table 10 for each logical device; for each logical device whose value is not zero, it scans the logical address space of the logical device, searches for slots whose slot attribute is "dirty", and registers a write job corresponding to each such slot in the disk job management table 8 by enqueuing it in the FIFO queue and the priority queues in the same manner as described above. If the disk adapter 130 does not detect the occurrence of a failure in step 2603, the process returns to step 2601. After step 2605, the channel adapter 120 waits for the completion of the target write jobs.
If, during this process, the disk adapter 130 detects a shortage of standby power or a double failure of the cache memory affecting the target cache data (step 2606), the data of the target logical device has been lost, so the device state of that logical device is set to "failure offline" (step 2607). If all the writes complete in step 2606, the process returns to step 2601.
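The dirty-data sweep of step 2605 can be sketched as follows. The record layout is hypothetical, and the real controller enqueues each job into the FIFO and priority queues rather than a single list:

```python
def handle_failure(devices, job_table):
    """Enqueue a write job for every dirty slot of every logical device."""
    for ldev, info in devices.items():
        if info["dirty_amount"] == 0:       # nothing to save for this device
            continue
        for slot, attr in info["slots"].items():
            if attr == "dirty":             # scan the address space for dirty slots
                job_table.append({"ldev": ldev, "slot": slot, "type": "write"})
    return job_table
```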
In another embodiment according to the present invention, when a failure occurs in the storage 110 and the reflection of write data by the failure process 26 shown in FIG. 27 cannot be completed, or when a double failure occurs in the cache and dirty data on the cache memory 112 is lost, the slot positions of the lost data are displayed on the display 142 of the management terminal 140. In this embodiment, the logical device management table 2 is consulted, and for each logical device whose failure-time data saving information is "incomplete", the slot positions of the corresponding dirty data are calculated by referring to the slot management table 5; all corresponding slot numbers are displayed on the display 142 together with the logical device numbers. As a result, the storage administrator can identify the lost data areas and recover them, shortening the recovery time.
As described above, according to this embodiment, when a failure occurs in the storage, the dirty data in the cache memory can be quickly reflected to the disk devices, so that loss of important, high-priority data can be avoided. In addition, when a failure occurs in the storage, performance degradation affecting important, high-priority business can be avoided as much as possible.
FIG. 1 is a block diagram showing an embodiment of a computer system including a storage according to the present invention.
FIG. 2 is a diagram showing the various management tables stored in the shared memory according to the present invention.
FIG. 3 is a configuration diagram showing an embodiment of a logical device management table according to the present invention.
FIG. 4 is a configuration diagram showing an embodiment of an LU path management table according to the present invention.
FIG. 5 is a configuration diagram showing an embodiment of a physical device management table according to the present invention.
FIG. 6 is a configuration diagram showing an embodiment of a slot management table according to the present invention.
FIG. 7 is a configuration diagram showing an embodiment of a segment management table according to the present invention.
FIG. 8 is a configuration diagram showing an embodiment of a channel job management table according to the present invention.
FIG. 9 is a configuration diagram showing an embodiment of a disk job management table according to the present invention.
FIG. 10 is a configuration diagram showing an embodiment of a host management table according to the present invention.
FIG. 11 is a configuration diagram showing an embodiment of an access pattern management table according to the present invention.
FIG. 12 is a configuration diagram showing an embodiment of a business management table according to the present invention.
FIG. 13 is a configuration diagram showing an embodiment of a scheduling management table according to the present invention.
FIG. 14 is a flowchart showing an embodiment of logical device definition processing according to the present invention.
FIG. 15 is a flowchart showing an embodiment of LU path definition processing according to the present invention.
FIG. 16 is a flowchart showing an embodiment of logical device priority definition processing according to the present invention.
FIG. 17 is a flowchart showing an embodiment of host priority definition processing according to the present invention.
FIG. 18 is a flowchart showing an embodiment of business priority definition processing according to the present invention.
FIG. 19 is a flowchart showing an embodiment of channel adapter port processing in the channel adapter according to the present invention.
FIG. 20 is a flowchart showing an embodiment of job scheduling processing in the channel adapter according to the present invention.
FIG. 21 is a flowchart showing an embodiment of read job processing in the channel adapter according to the present invention.
FIG. 22 is a flowchart showing an embodiment of write job processing in the channel adapter according to the present invention.
FIG. 23 is a flowchart showing an embodiment of job scheduling processing in the disk adapter according to the present invention.
FIG. 24 is a flowchart showing an embodiment of asynchronous write job registration processing according to the present invention.
FIG. 25 is a flowchart showing an embodiment of read job processing in the disk adapter according to the present invention.
FIG. 26 is a flowchart showing an embodiment of write job processing in the disk adapter according to the present invention.
FIG. 27 is a flowchart showing an embodiment of failure processing according to the present invention.
Explanation of symbols
100: host, 110: storage, 111: interconnection network, 112: cache memory, 113: shared memory, 114: disk device, 115: connection line, 116: power supply, 117: standby power supply, 118: power line, 120: channel adapter, 121: port, 130: disk adapter, 140: management terminal, 141: input device, 142: display, 143: memory, 144: storage device, 145: CPU, 146: I/F,
2: logical device management table, 3: LU path management table, 4: physical device management table, 5: slot management table, 6: segment management table, 7: channel job management table, 8: disk job management table, 9: host management table, 10: access pattern management table, 11: business management table, 12: scheduling management table, 13: logical device definition processing, 14: LU path definition processing, 15: logical device priority definition processing, 16: host priority definition processing, 17: business priority definition processing, 18: channel adapter port processing (channel adapter), 19: job scheduling processing (channel adapter), 20: read job processing (channel adapter), 21: write job processing (channel adapter), 22: job scheduling processing (disk adapter), 23: asynchronous write job registration processing (disk adapter), 24: read job processing (disk adapter), 25: write job processing (disk adapter).
Claims (24)
1. A storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, wherein:
a management terminal that receives first priority information for logical devices provided to the host is connected to the storage apparatus; and
in accordance with the first priority information received from the management terminal, the control device allocates more physical devices to a logical device with a higher priority than to a logical device with a lower priority, and controls, when a failure occurs, to store data of the logical device held in the cache memory into the plurality of physical devices associated with the logical device.
2. The storage apparatus according to claim 1, wherein a management table indicating a correspondence relationship between a logical device assigned according to the first priority information and a plurality of physical devices is stored in the shared memory.
3. The storage apparatus according to claim 1 or 2, wherein the management terminal is further configured to receive second priority information indicating the priority of work, and the control device controls, when a failure occurs, to dequeue a job having a high priority from a priority queue in the storage and execute the job in accordance with the second priority information received from the management terminal.
4. The storage apparatus according to claim 3, wherein a management table including the second priority information received by the management terminal is stored in the shared memory.
5. A storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, wherein:
a management terminal that receives second priority information indicating the priority of work is connected to the storage apparatus; and
the control device controls, when a failure occurs, to dequeue a job having a high priority from a priority queue in the storage and execute the job in accordance with the second priority information received by the management terminal.
6. The storage apparatus according to claim 5, wherein a management table composed of the second priority information is stored in the shared memory.
7. The storage apparatus according to claim 5 or 6, wherein the second priority information is host priority information.
8. The storage apparatus according to any one of the preceding claims, wherein, when the data of the target logical device is lost at the time of a failure, the control device registers, in the management table, information for identifying the logical device corresponding to the lost data.
9. The storage apparatus according to any one of claims 1 to 6, wherein, when the data of the target logical device is lost at the time of a failure, the control device displays, on a display device, information for identifying the logical device corresponding to the lost data and the position within the logical device.
10. A control method in a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the method comprising:
a first step of receiving first priority information for logical devices provided to the host; and
a second step of allocating, in accordance with the first priority information received in the first step, more physical devices to a logical device with a higher priority than to a logical device with a lower priority, and storing, when a failure occurs, data of the logical device held in the cache memory into the plurality of physical devices associated with the logical device.
11. The control method for a storage apparatus according to claim 10, further comprising a third step of storing, in the shared memory, a management table indicating a correspondence relationship between the logical device assigned in accordance with the first priority information received in the first step and a plurality of physical devices.
12. The control method in a storage apparatus according to claim 10 or 11, further comprising:
a fourth step of receiving second priority information indicating the priority of a job; and
a fifth step of controlling, when a failure occurs, to dequeue a job having a high priority from a priority queue in the storage and execute the job in accordance with the second priority information received in the fourth step.
13. The control method for a storage apparatus according to claim 12, further comprising a sixth step of storing, in the shared memory, a management table including the second priority information received in the fourth step.
14. A control program for a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the program causing the storage apparatus to execute:
a first step of receiving first priority information for logical devices provided to the host; and
a second step of allocating, in accordance with the first priority information received in the first step, more physical devices to a logical device with a higher priority than to a logical device with a lower priority, and storing, when a failure occurs, data of the logical device held in the cache memory into the plurality of physical devices associated with the logical device.
15. The control program for a storage apparatus according to claim 14, further comprising a third step of storing, in the shared memory, a management table indicating a correspondence relationship between the logical device assigned in accordance with the first priority information received in the first step and a plurality of physical devices.
16. The control program for a storage apparatus according to claim 14 or 15, further comprising:
a fourth step of receiving second priority information indicating the priority of a job; and
a fifth step of controlling, when a failure occurs, to dequeue a job having a high priority from a priority queue in the storage and execute the job in accordance with the second priority information received in the fourth step.
17. The control program for a storage apparatus according to claim 16, further comprising a sixth step of storing, in the shared memory, a management table comprising the second priority information received in the fourth step.
18. A job scheduling processing method in a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the method comprising:
a seventh step of determining whether a failure has occurred; and
an eighth step of, if it is determined in the seventh step that a failure has occurred, dequeuing the job having the highest priority in any one of the logical device priority queue, the host priority queue, and the business priority queue to which the jobs registered in the job management table are connected, and executing the dequeued job.
19. The job scheduling processing method in a storage apparatus according to claim 18, wherein, in the eighth step, the selection of the priority queue uses a ratio of the logical device priority, the host priority, and the business priority.
20. The job scheduling processing method in a storage apparatus according to claim 18 or 19, further comprising a ninth step of checking the failure-time data saving information, registered in the logical device management table, which indicates whether the dirty data in the cache memory has been completely saved to the disk device, referring, for each logical device for which the information indicates incomplete saving, to the current value of the dirty data amount registered in the access pattern management table, and updating the failure-time data saving information when the value is zero.
21. A job scheduling processing program for a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the program causing the storage apparatus to execute:
a seventh step of determining whether a failure has occurred; and
an eighth step of, if it is determined in the seventh step that a failure has occurred, dequeuing the job having the highest priority in any one of the logical device priority queue, the host priority queue, and the business priority queue to which the jobs registered in the job management table are connected, and executing the dequeued job.
22. The job scheduling processing program in a storage apparatus according to claim 21, wherein, in the eighth step, the selection of the priority queue uses a ratio of the logical device priority, the host priority, and the business priority.
23. A failure processing method in a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the method comprising:
a tenth step of determining whether or not to end the failure processing;
an eleventh step of checking, if the processing is not ended in the tenth step, whether a failure has occurred in any part of the storage apparatus;
a twelfth step of setting, if a failure is detected by the check in the eleventh step, failure-time data saving information for each logical device having dirty data on the cache memory;
a thirteenth step of thereafter enqueuing write jobs for the dirty data of each logical device in order to reflect the dirty data in the cache memory to the physical devices; and
a fourteenth step of thereafter waiting for completion of the target write jobs and setting the device state of the target logical device to failure offline when a standby power shortage or a double failure of the cache memory is detected during this process.
24. A failure processing program for a storage apparatus comprising a port that is an interface with a host, a cache memory, a control device connected to the port, the cache memory, and a shared memory by connection lines, and a disk device connected to the control device, the program causing the storage apparatus to execute:
a tenth step of determining whether or not to end the failure processing;
an eleventh step of checking, if the processing is not ended in the tenth step, whether a failure has occurred in any part of the storage apparatus;
a twelfth step of setting, if a failure is detected by the check in the eleventh step, failure-time data saving information for each logical device having dirty data on the cache memory;
a thirteenth step of thereafter enqueuing write jobs for the dirty data of each logical device in order to reflect the dirty data in the cache memory to the physical devices; and
a fourteenth step of thereafter waiting for completion of the target write jobs and setting the device state of the target logical device to failure offline when a standby power shortage or a double failure of the cache memory is detected during this process.
JP2003390239A 2003-11-20 2003-11-20 Storage apparatus, control method for storage apparatus, job scheduling processing method, troubleshooting method and their program Pending JP2005149436A (en)
Priority Applications (1)
Application Number Priority Date Filing Date Title
JP2003390239A JP2005149436A (en) 2003-11-20 2003-11-20 Storage apparatus, control method for storage apparatus, job scheduling processing method, troubleshooting method and their program
Applications Claiming Priority (2)
Application Number Priority Date Filing Date Title
JP2003390239A JP2005149436A (en) 2003-11-20 2003-11-20 Storage apparatus, control method for storage apparatus, job scheduling processing method, troubleshooting method and their program
US10/771,453 US7100074B2 (en) 2003-11-20 2004-02-05 Storage system, and control method, job scheduling processing method, and failure handling method therefor, and program for each method
Publications (1)
Publication Number Publication Date
JP2005149436A true JP2005149436A (en) 2005-06-09
Family
ID=34649766
Family Applications (1)
Application Number Title Priority Date Filing Date
JP2003390239A Pending JP2005149436A (en) 2003-11-20 2003-11-20 Storage apparatus, control method for storage apparatus, job scheduling processing method, troubleshooting method and their program
Country Status (2)
Country Link
US (1) US7100074B2 (en)
JP (1) JP2005149436A (en)
Cited By (5)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010506275A (en) * 2006-10-02 2010-02-25 Sony DADC Austria AG Method, control logic and system for detecting virtual storage volume and data carrier
US8234415B2 (en) 2008-07-14 2012-07-31 Fujitsu Limited Storage device and control unit
JP5104855B2 (en) * 2007-03-23 2012-12-19 富士通株式会社 Load distribution program, load distribution method, and storage management apparatus
US8453009B2 (en) 2009-10-16 2013-05-28 Fujitsu Limited Storage apparatus and control method for storage apparatus
JP2013532862A (en) * 2010-12-27 2013-08-19 株式会社日立製作所 Storage system and control method thereof
Families Citing this family (24)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000135820A (en) * 1997-12-11 2000-05-16 Canon Inc Printer, printing system, controlling method for printing, memory medium containing printing control program for controlling the same and output device for outputting printing control program for controlling printer
US7269645B2 (en) * 2003-10-31 2007-09-11 International Business Machines Corporation Seamless migration of one or more business processes and their work environment between computing devices and a network
EP1555772A3 (en) * 2004-01-15 2013-07-17 Yamaha Corporation Remote control method of external devices
JP4141391B2 (en) 2004-02-05 2008-08-27 株式会社日立製作所 Storage subsystem
JP2005301442A (en) * 2004-04-07 2005-10-27 Hitachi Ltd Storage device
US7386692B1 (en) * 2004-08-20 2008-06-10 Sun Microsystems, Inc. Method and apparatus for quantized deadline I/O scheduling
US7260703B1 (en) * 2004-08-20 2007-08-21 Sun Microsystems, Inc. Method and apparatus for I/O scheduling
JP4441929B2 (en) * 2005-01-19 2010-03-31 日本電気株式会社 Disk device and hot swap method
US7305537B1 (en) 2005-03-01 2007-12-04 Sun Microsystems, Inc. Method and system for I/O scheduler activations
US20060294412A1 (en) * 2005-06-27 2006-12-28 Dell Products L.P. System and method for prioritizing disk access for shared-disk applications
JP4831599B2 (en) * 2005-06-28 2011-12-07 ルネサスエレクトロニクス株式会社 Processing equipment
US7253606B2 (en) * 2005-07-18 2007-08-07 Agilent Technologies, Inc. Framework that maximizes the usage of testhead resources in in-circuit test system
US7657671B2 (en) * 2005-11-04 2010-02-02 Sun Microsystems, Inc. Adaptive resilvering I/O scheduling
US7478179B2 (en) * 2005-11-04 2009-01-13 Sun Microsystems, Inc. Input/output priority inheritance wherein first I/O request is executed based on higher priority
US20070106849A1 (en) * 2005-11-04 2007-05-10 Sun Microsystems, Inc. Method and system for adaptive intelligent prefetch
US8055865B2 (en) * 2007-08-06 2011-11-08 International Business Machines Corporation Managing write requests to data sets in a primary volume subject to being copied to a secondary volume
US8032689B2 (en) * 2007-12-18 2011-10-04 Hitachi Global Storage Technologies Netherlands, B.V. Techniques for data storage device virtualization
US7975169B2 (en) * 2008-06-03 2011-07-05 International Business Machines Corporation Memory preserved cache to prevent data loss
JP5729746B2 (en) * 2009-09-17 2015-06-03 日本電気株式会社 Storage system and disk array device
JP5402693B2 (en) * 2010-02-05 2014-01-29 富士通株式会社 Disk array device control method and disk array device
US9152490B2 (en) * 2013-04-02 2015-10-06 Western Digital Technologies, Inc. Detection of user behavior using time series modeling
US9934083B2 (en) * 2015-10-14 2018-04-03 International Business Machines Corporation Requesting manual intervention on failure of initial microcode load attempts during recovery of modified customer data
US10261837B2 (en) 2017-06-30 2019-04-16 Sas Institute Inc. Two-part job scheduling with capacity constraints and preferences
US10310896B1 (en) 2018-03-15 2019-06-04 Sas Institute Inc. Techniques for job flow processing
Family Cites Families (21)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5148432A (en) * 1988-11-14 1992-09-15 Array Technology Corporation Arrayed disk drive system and method
US5166939A (en) * 1990-03-02 1992-11-24 Micro Technology, Inc. Data storage apparatus and method
US5341493A (en) * 1990-09-21 1994-08-23 Emc Corporation Disk storage system with write preservation during power failure
JP3160106B2 (en) * 1991-12-23 2001-04-23 エヌシーアール インターナショナル インコーポレイテッド Partitioning method for a disk array
US5448719A (en) 1992-06-05 1995-09-05 Compaq Computer Corp. Method and apparatus for maintaining and retrieving live data in a posted write cache in case of power failure
JP3264465B2 (en) 1993-06-30 2002-03-11 株式会社日立製作所 Storage system
US5657468A (en) * 1995-08-17 1997-08-12 Ambex Technologies, Inc. Method and apparatus for improving performance in a redundant array of independent disks
US5854942A (en) * 1996-09-06 1998-12-29 International Business Machines Corporation Method and system for automatic storage subsystem configuration
US6061761A (en) * 1997-10-06 2000-05-09 Emc Corporation Method for exchanging logical volumes in a disk array storage device in response to statistical analyses and preliminary testing
US6571354B1 (en) * 1999-12-15 2003-05-27 Dell Products, L.P. Method and apparatus for storage unit replacement according to array priority
JP3845239B2 (en) 1999-12-21 2006-11-15 日本電気株式会社 Disk array device and failure recovery method in disk array device
JP4115060B2 (en) 2000-02-02 2008-07-09 株式会社日立製作所 Data recovery method for information processing system and disk subsystem
US6618798B1 (en) * 2000-07-11 2003-09-09 International Business Machines Corporation Method, system, program, and data structures for mapping logical units to a storage space comprises of at least one array of storage units
WO2002065275A1 (en) * 2001-01-11 2002-08-22 Yottayotta, Inc. Storage virtualization system and methods
US6687787B1 (en) * 2001-03-05 2004-02-03 Emc Corporation Configuration of a data storage system
US6834315B2 (en) 2001-03-26 2004-12-21 International Business Machines Corporation Method, system, and program for prioritizing input/output (I/O) requests submitted to a device driver
JP3997061B2 (en) 2001-05-11 2007-10-24 株式会社日立製作所 Storage subsystem and storage subsystem control method
US6567892B1 (en) * 2001-05-23 2003-05-20 3Ware, Inc. Use of activity bins to increase the performance of disk arrays
JP2003006016A (en) 2001-06-26 2003-01-10 Hitachi Ltd Disk subsystem and method of asynchronous copy between disk subsystems
JP2003330762A (en) 2002-05-09 2003-11-21 Hitachi Ltd Control method for storage system, storage system, switch and program
US20040078508A1 (en) * 2002-10-02 2004-04-22 Rivard William G. System and method for high performance data storage and retrieval
Cited By (7)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010506275A (en) * 2006-10-02 2010-02-25 Sony DADC Austria AG Method, control logic and system for detecting virtual storage volume and data carrier
US8429345B2 (en) 2006-10-02 2013-04-23 Sony Dadc Austria Ag Method, control logic and system for detecting a virtual storage volume and data carrier
JP5104855B2 (en) * 2007-03-23 2012-12-19 富士通株式会社 Load distribution program, load distribution method, and storage management apparatus
US8516070B2 (en) 2007-03-23 2013-08-20 Fujitsu Limited Computer program and method for balancing processing load in storage system, and apparatus for managing storage devices
US8234415B2 (en) 2008-07-14 2012-07-31 Fujitsu Limited Storage device and control unit
US8453009B2 (en) 2009-10-16 2013-05-28 Fujitsu Limited Storage apparatus and control method for storage apparatus
JP2013532862A (en) * 2010-12-27 2013-08-19 株式会社日立製作所 Storage system and control method thereof
Also Published As
Publication number Publication date
US7100074B2 (en) 2006-08-29
US20050132256A1 (en) 2005-06-16
Similar Documents
Publication Publication Date Title
US7461196B2 (en) Computer system having an expansion device for virtualizing a migration source logical unit
US8595431B2 (en) Storage control system including virtualization and control method for same
JP4508612B2 (en) Cluster storage system and management method thereof
US6912669B2 (en) Method and apparatus for maintaining cache coherency in a storage system
JP4341897B2 (en) Storage device system and data replication method
DE69724846T2 (en) Multi-way I/O storage systems with multi-way I/O request mechanism
US5987566A (en) Redundant storage with mirroring by logical volume with diverse reading process
US9495105B2 (en) System managing a plurality of flash memory devices
US7640390B2 (en) Flash memory storage system
US8041909B2 (en) Disk array system and method for migrating from one storage system to another
JP2007193573A (en) Storage device system and storage control method
US20080104344A1 (en) Storage system comprising volatile cache memory and nonvolatile memory
US7093043B2 (en) Data array having redundancy messaging between array controllers over the host bus
JP4551096B2 (en) Storage subsystem
US8190846B2 (en) Data management method in storage pool and virtual volume in DKC
JP2005326935A (en) Management server for computer system equipped with virtualization storage and failure preventing/restoring method
JP4575028B2 (en) Disk array device and control method thereof
US7228380B2 (en) Storage system that is connected to external storage
JP2005165702A (en) Device connection method of cluster storage
US8595549B2 (en) Information system and I/O processing method
US8788786B2 (en) Storage system creating cache and logical volume areas in flash memory
US6009481A (en) Mass storage system using internal system-level mirroring
JP3944449B2 (en) Computer system, magnetic disk device, and disk cache control method
US7464236B2 (en) Storage system and storage management method
US20060224826A1 (en) Disk array apparatus and method of controlling the same
Legal Events
Date Code Title Description
A621 Written request for application examination
Free format text: JAPANESE INTERMEDIATE CODE: A621
Effective date: 20060111
RD02 Notification of acceptance of power of attorney
Free format text: JAPANESE INTERMEDIATE CODE: A7422
Effective date: 20060111
A977 Report on retrieval
Free format text: JAPANESE INTERMEDIATE CODE: A971007
Effective date: 20081218
A131 Notification of reasons for refusal
Free format text: JAPANESE INTERMEDIATE CODE: A131
Effective date: 20090106
A521 Written amendment
Free format text: JAPANESE INTERMEDIATE CODE: A523
Effective date: 20090304
A131 Notification of reasons for refusal
Free format text: JAPANESE INTERMEDIATE CODE: A131
Effective date: 20090825
A02 Decision of refusal
Free format text: JAPANESE INTERMEDIATE CODE: A02
Effective date: 20100209
What is Skipjack?
Skipjack is the encryption algorithm contained in the Clipper chip (see Question 151), and it was designed by the NSA (see Question 148). It uses an 80-bit key to encrypt 64-bit blocks of data. Skipjack can be more secure than DES (see Question 64), since it uses 80-bit keys and scrambles the data for 32 steps, or "rounds"; by contrast, DES uses 56-bit keys and scrambles the data for only 16 rounds.
The details of Skipjack are classified. The decision not to make the details of the algorithm publicly available has been widely criticized. Many people are suspicious that Skipjack is not secure, either due to oversight by its designers, or by the deliberate introduction of a secret trapdoor. By contrast, there have been many attempts to find weaknesses in DES over the years, since its details are public. These numerous attempts (and the fact that they have failed) have made people confident in the security of DES. Since Skipjack is not public, the same scrutiny cannot be applied towards it, and thus a corresponding level of confidence may not arise.
Aware of such criticism, the government invited a small group of independent cryptographers to examine the Skipjack algorithm. They issued a report which stated that, although their study was too limited to reach a definitive conclusion, they nevertheless believed that Skipjack was secure.
Another consequence of Skipjack's classified status is that it cannot be implemented in software, but only in hardware by government-authorized chip manufacturers. An algorithm called S1 was anonymously posted over the Internet in summer 1995, and it was claimed that S1 was the Skipjack algorithm. It is believed, however, to be a hoax.
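As a rough measure of the margin those parameters buy, the sketch below compares the work factor of an exhaustive key search against an 80-bit key (Skipjack) with one against a 56-bit key (DES). This is a back-of-the-envelope calculation, not an implementation of either cipher; Skipjack's internals are classified.

```python
# Exhaustive key search must try, on average, half the keyspace,
# so the relative work factor is simply the ratio of keyspace sizes.
def brute_force_ratio(bits_a: int, bits_b: int) -> int:
    return 2 ** bits_a // 2 ** bits_b

skipjack_keyspace = 2 ** 80  # Skipjack: 80-bit keys
des_keyspace = 2 ** 56       # DES: 56-bit keys
ratio = brute_force_ratio(80, 56)  # 2**24 == 16_777_216
```

Adding 24 bits of key length multiplies the brute-force effort by 2^24, about 16.8 million, independent of any structural differences between the two ciphers.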
Questions tagged [stratifications]
The tag has no usage guidance.
5 votes
0 answers
31 views
If a subset $X$ of a $C^k$ manifold $M$ is semialgebraic in the charts of $M$, is it Whitney stratifiable?
Let $M$ be a $C^k$ manifold for some $k\geq 1$ and $X$ be a subset of $M$. Assume that there is an atlas of charts $(\phi_\alpha, U_\alpha)_\alpha$ of $M$ such that in the coordinates of each of these ...
Charles Arnal's user avatar
5 votes
0 answers
94 views
Torsion in the spectral sequence for a stratified complex variety
Let $X$ be a (possibly singular) complex projective algebraic variety, endowed with a stratification $\{X_{\Delta}\}_{\Delta\in I}$ by smooth algebraic varieties. Then there is a spectral sequence $$...
Emiliano Ambrosi's user avatar
1 vote
1 answer
102 views
Smooth extension of piecewise smooth function on a corner
Imprecise Question: Suppose I have a function defined on non-codimension-zero strata of a smooth manifold with a stratification, and I know the function is smooth when restricted to each of these ...
Whitney Junior's user avatar
5 votes
0 answers
109 views
Under what assumption on a proper map does the preimage of sufficiently small neighborhood is homotopy equivalent to the fiber?
Let $\pi\colon X\rightarrow Y$ be a proper map of topological spaces. Let's assume that both $X$ and $Y$ are paracompact, Hausdorff and locally weakly contractible. Then is it enough to conclude that ...
user42024's user avatar
• 790
5 votes
0 answers
77 views
Stacks v.s. Stratifolds?
Both stacks and stratifolds can model spaces with singularities (say, singular quotient space for example). In my little experience, stacks are widely used in moduli problems and stratifolds are often ...
Ruizhi liu's user avatar
1 vote
0 answers
22 views
Connected components of Isotropy types as strata of Poisson leaves
Let $X$ be a smooth affine variety with an algebraic symplectic form $\omega$. Let $G$ be a finite subgroup of the group of symplectomorphisms of $X$. We can say that $X$ is trivially a normal variety ...
Flavius Aetius's user avatar
1 vote
0 answers
61 views
Smooth affine variety as a symplectic resolutions
Given a smooth affine variety $X$ over $\mathbb C$ with an algebraic symplectic form $\omega$ and a finite group $G$ acting on $X$ by symplectomorphisms, then Is it true that $X$ is trivially a ...
Flavius Aetius's user avatar
6 votes
1 answer
265 views
Exit path categories of regular CW complexes
Given a finite, regular CW complex $X$ (by regular, I mean that the gluing maps $D^n \to X$ from the closed unit ball to $X$ are homeomorphisms onto their image), denote by $S$ the finite partially ...
Markus Zetto's user avatar
3 votes
0 answers
127 views
Topology types in families of real or complex varieties
In René Thom, "Structural Stability and Morphogenesis" on p. 21ff there is the following statement: Let $$P_j(x_i,s_k) = 0$$ be a set of polynomial equations over the real or complex numbers,...
Jürgen Böhm's user avatar
2 votes
1 answer
119 views
On the zero-dimensional strata of the Fulton-MacPherson conpactification
Let $\operatorname{Conf}_n(\mathbb{R})$ be the configuration space of $n$ marked points on the real line. What is the difference between $\operatorname{Conf}_n(\mathbb{R})$ and the locus of zero-...
Banana23's user avatar
3 votes
0 answers
122 views
Riemann-Hilbert-type correspondence for locally constant factorization algebras
This is related to a previous post, but a bit softer and should probably stand on its own. In Appendix A of "Higher Algebra", Lurie shows that for a reasonably good topological space, there ...
Markus Zetto's user avatar
4 votes
0 answers
222 views
Blow-up of a stratified space
Let $X$ be a smooth projective variety over $\mathbb{C}$, and $D_1, \ldots, D_n$ be a collection of simple normal crossing divisors. The divisors induce a stratification $\mathcal{T}_X$ of $X$. Let $...
calc's user avatar
• 243
3 votes
0 answers
176 views
Factorization algebras as factorizable cosheaves on the (extended) Ran Space
A basic fact in the theory of factorization algebras is that, to state it in a rough way, the exit path category of the Ran space of a topological manifold $M$ is equivalent to the category consisting ...
Markus Zetto's user avatar
4 votes
0 answers
121 views
Seeking a Weyl tube formula for Whitney stratified spaces
Background: Let $X$ be a smooth, compact Riemannian submanifold of euclidean space $\mathbb{R}^n$. H Weyl's tube formula asserts that for sufficiently small $t > 0$, the volume $V(X;t)$ of the ...
Vidit Nanda's user avatar
• 15.2k
11 votes
2 answers
485 views
Local topology of Whitney stratified spaces
Let $M$ be a smooth manifold, let $\mathcal{P}$ be a Whitney stratification of $M$ and let $S\subset M$ be a stratum with closure $\overline{S}$. Question: Does there exist an open neighborhood $U\...
Jesse Wolfson's user avatar
3 votes
2 answers
370 views
Piecewise isomorphism versus equivalence in Grothendieck ring
$\DeclareMathOperator\Var{Var}$Let $K_{0}(\Var_{\mathbb{C}})$ be the Grothendieck ring of varieties over $\mathbb{C}$. The class of a variety, $X$, in $K_{0}$ is denoted $[\,X\,]$. If $X$ and $Y$ are ...
user avatar
1 vote
0 answers
27 views
Stratification which makes the defining functions isotrivial
Let $0\in X\subset\mathbb{C}^N$ be a germ of complex space and $0\in Z\subset X$ be a closed analytic subset (globally) defined by holomorphic functions $f_1,\dots,f_r$. Is there a complex analytic ...
stjc's user avatar
• 1,052
5 votes
0 answers
75 views
subanalytic realization of smooth abstract stratification
Consider an $C^\infty$ abstract stratification $A$ (in the Thom-Mather sense, see Mather's note). Can we embed $A$ in some $\mathbb{R}^n$ (or in an analytic manifold) as a subanalytic set? If not, ...
Quentin's user avatar
• 83
7 votes
1 answer
320 views
Non-example for Whitney (a) stratifications
Given a $C^1$ stratification $\mathscr{S}$ of a $C^1$ manifold $M$, we write $N^\ast \mathscr{S}$ for the union of conormals to the strata. The stratification is said to be Whitney (a) if $N^\ast \...
Chris Kuo's user avatar
• 515
3 votes
0 answers
190 views
Genus two curves on abelian surfaces
Considering a smooth genus two curve $C_2$, let $J(C_2)$ be its Jacobian surface, and take $p \in J(C_2)$ an $m$-torsion point. Let $A = J(C_2)/Z_m$, where $Z_m$ acts by $x \mapsto x+p$. The image of $...
Rodion N. Déev's user avatar
6 votes
1 answer
381 views
Whitney stratification of algebraic varieties
When do the orbits of an action on an algebraic variety make a Whitney stratification?
Maicom Douglas Varella Costa's user avatar
8 votes
0 answers
240 views
Why mu-stratifications?
In the microlocal theory of sheaves developed by Kashiwara and Schapira, there is the notion of a $\mu$-stratification, which is a stratification satisfying a stronger property ("$\mu$") than Whitney'...
John Pardon's user avatar
• 18.1k
4 votes
1 answer
115 views
Image of a quiver variety under natural morphism
We know that the natural morphism $\pi:\mathfrak{M}_{\theta}(Q,\mathbf{v},\mathbf{w})\rightarrow \mathfrak{M}_0(Q,\mathbf{v},\mathbf{w})$ between a smooth and affine quiver variety is not necessarily ...
Filip's user avatar
• 1,537
2 votes
0 answers
172 views
Grothendieck group of constructible sets
Let $K_0$ be the Grothendieck group of complex algebraic varieties. This is the group generated by all complex algebraic varieties, subject to the relations: (i) $[X]=[Y]$ if $X,Y$ are isomorphic, (...
user142700's user avatar
4 votes
0 answers
360 views
A cell decomposition of a CW-complex and, stratification of a topological space
What is the difference between the notion of cell decomposition of a CW-complex, and the notion of stratification of a topological space ? I know that cell decomposition of a CW-complex is usefull to ...
YoYo's user avatar
• 325
17 votes
0 answers
631 views
Proof of MacPherson's result about set-valued constructible sheaves and exit paths
I'm looking for a proof of a theorem that is attributed to MacPherson. Treumann (Section 1.1 in Exit paths and constructible stacks, 2009) states the theorem as: Theorem 1.2 (MacPherson). Let $(X,S)...
Jānis Lazovskis's user avatar
8 votes
0 answers
168 views
Stratification of space of labelled circles in the plane
Consider the space of $n$ round circles in the plane to be the open subset of $\mathbb R^{3n}$: $$C_n = \{ (v_1, v_2, \cdots, v_n, r_1, r_2, \cdots, r_n ) : v_i \in \mathbb R^2, r_i \in (0, \infty) \ ...
Ryan Budney's user avatar
• 42.1k
1 vote
1 answer
130 views
Confusion about locally cone-like spaces
Definition: A filtered space $X$ of formal dimension $n$ is locally cone-like if for all $i$, $0 \le i \le n$, and for each $x \in X^i - X^{i-1} = X_i$ there is an open neighborhood $U$ of $x$ in $X_i$...
gf.c's user avatar
• 35
1 vote
0 answers
171 views
Isn't stratification by orbit types actually a stratification by stabilizer types?
I asked this question on Math Exchange but considering the low number of people who viewed the question, I think that the question is difficult enough to post it on math overflow. I hope I am right. ...
Flavius Aetius's user avatar
18 votes
1 answer
574 views
Local homology of a space of unitary matrices
Let $U(n)$ denote the unitary group (this is a manifold of dimension $n^2$). Let $$ {\cal D} \subset U(n) $$ denote the subspace of those matrices having a non-trivial $(+1)$-eigenspace. ...
John Klein's user avatar
• 18.4k
3 votes
2 answers
652 views
Whitney Conditions vs Equisingularity
In studying singular spaces, it is often important to pick an appropriate stratification which encodes the singularity structure. One class of such stratifications are called "Whitney stratifications" ...
Aswin's user avatar
• 1,063
3 votes
1 answer
247 views
On the notion of conelike stratified (cs-) space
The notion of cs-stratification of a topological space is apparently due to Siebenmann, see also the paper by N. Habegger and L. Saper in the paper "Intersection cohomology of cs-spaces and Zeeman's ...
asv's user avatar
• 20.4k
3 votes
0 answers
381 views
Where should I look for computing the intersection homology of projective varieties?
I'm learning about intersection cohomology topologically through MacPherson's "New York Times Article". This is a very nice guide which gives a nice idea on how to use these methods for low-...
54321user's user avatar
• 1,666
1 vote
0 answers
29 views
Sufficient conditions for a conormal vector to be regular for an orbit stratification
Let a complex reductive group $G$ act on a $\mathbb{C}^{n}$ with finitely many orbits. Let $\mathcal{S}$ be the stratification of $\mathbb{C}^{n}$ according to these orbits. Let $(x,\xi) \in T_S^{*}\...
James Mracek's user avatar
11 votes
2 answers
388 views
Homotopy property of constructible sheaves on stratified spaces
Let $X$ be a stratified topological space (in my case $X$ is a compact space presented as a finite union of locally closed topological manifolds of finite dimension (strata) such that the closure of ...
asv's user avatar
• 20.4k
8 votes
1 answer
500 views
Topology on the space of constructible sheaves
Let $X$ be a nice compact topological space with a fixed finite stratification by locally closed topological manifolds. At the beginning one may assume that $X$ is a complex algebraic manifold with ...
asv's user avatar
• 20.4k
6 votes
1 answer
2k views
Stratification of complex algebraic varieties
Let $V$ be a complex quasi-projective variety, we know from H. Whitney's and B Teissier works on stratifications of algebraic varieties that $V$ has an intrinsic stratification $$X_0\subset X_2\...
David C's user avatar
• 9,732
5 votes
0 answers
318 views
Stratification of a smooth map
So, this is an exercise. But from math.stackexchange I have been suggested to post this question here. To find the Thom-Boardman stratification of the smooth map $f(x,y,a,b,c,d)=x^2y+y^3+a(x^2+y^2)+...
PepeToro's user avatar
• 231
7 votes
1 answer
533 views
Iterated Milnor fibrations and Thom's a_f condition
Ok so there's a lot of litterature about nearby cycles functor since it was introduced by Grothendieck and Deligne but I couldn't find any clear answer to the following natural question: Problem: Let ...
AFK's user avatar
• 7,297
3 votes
2 answers
347 views
intersection of Whitney stratifications
Let $X$ be an oriented smooth manifold with dimension $n$. If $U$ and $V$ are two oriented closed submanifolds of $X$ and $U$ is transverse to $V$ in $X$. Then $U\cap V$ (suppose the intersection is ...
yangyang's user avatar
• 237
0 votes
0 answers
216 views
transverse intersection of Whitney stratifications
Let $M$ be a smooth manifold. If $X$ and $Y$ are two Whitney objects, i.e. subsets with a given Whitney stratification, then $X$ and $Y$ are transverse if each stratum of $X$ is transverse to each ...
yangyang's user avatar
• 237
2 votes
0 answers
184 views
When a Whitney stratification has no stratum of codimension one?
Let $G$ be a compact Lie group, and $M$ be a smooth $n$-dimensional $G$-manifold which admits an orientation preserving the $G$-action. Then $M$ has a natural Whitney stratification induced by the ...
yangyang's user avatar
• 237
5 votes
0 answers
587 views
singular support of D-module smooth w.r.t. a stratification
(1) Suppose that $X$ is a smooth complex algebraic variety, stratified by some nice smooth stratification $S$. Let $M$ be a $D$-module on $X$, s.t. its shriek-pullback (or star... whatever is ...
Sasha's user avatar
• 5,422
5 votes
1 answer
452 views
Is the Alexander-Pontryagin duality applicable to stratified spaces
If $D$ is the discriminant of the space of all planar curves of a fixed degree, and $D'$ is the subspace whose only singularities are nodes or cusps, then is it possible to apply Alexander-Pontryagin ...
user1289492's user avatar
6 votes
2 answers
1k views
Stratified pseudomanifold
In the definition of an $n$-dimensional stratified pseudomanifold one demands the following filtration $X=X_n \supset X_{n-1}=X_{n-2} \supset X_{n-3}\supset ... \supset X_0 \supset X_{-1}=\emptyset$. ...
Levi's user avatar
• 63
SCM Repository
[smlnj] View of /sml/trunk/src/compiler/FLINT/opt/collect.sml
Revision 163
Thu Oct 29 21:00:27 1998 UTC by monnier
File size: 13513 byte(s)
added dropping of dead-arguments
(* copyright 1998 YALE FLINT PROJECT *)
(* [email protected] *)
signature COLLECT =
sig
(* Collect information about variables and function uses.
* The info is accumulated in the map `m' *)
val collect : FLINT.fundec -> unit
(* query functions *)
val escaping : FLINT.lvar -> bool (* non-call uses *)
val usenb : FLINT.lvar -> int (* nb of non-recursive uses *)
val called : FLINT.lvar -> bool (* known call uses *)
val insidep : FLINT.lvar -> bool (* are we inside f right now ? *)
val recursive : FLINT.lvar -> bool (* self-recursion test *)
(* inc the "true=call,false=use" count *)
val use : bool -> FLINT.lvar -> unit
(* dec the "true=call,false=use" count and call the function if zero *)
val unuse : (FLINT.lvar -> unit) -> bool -> FLINT.lvar -> unit
(* transfer the counts of var1 to var2 *)
val transfer : FLINT.lvar * FLINT.lvar -> unit
(* add the counts of var1 to var2 *)
val addto : FLINT.lvar * FLINT.lvar -> unit
(* delete the last reference to a variable *)
val kill : FLINT.lvar -> unit
(* create a new var entry (true=fun, false=other) initialized to zero *)
val new : bool -> FLINT.lvar -> unit
(* move all the internal counts to external *)
val extcounts : FLINT.lvar -> unit
(* when creating a new var. Used when alpha-renaming *)
(* val copy : FLINT.lvar * FLINT.lvar -> unit *)
(* fix up function to keep counts up-to-date when getting rid of code.
* the arg is only called for *free* variables becoming dead.
* the first function returned just unuses an exp, while the
* second unuses a function declaration (f,args,body) *)
val unuselexp : (FLINT.lvar -> unit) ->
((FLINT.lexp -> unit) *
((FLINT.lvar * FLINT.lvar list * FLINT.lexp) -> unit))
(* function to collect info about a newly created lexp *)
val uselexp : FLINT.lexp -> unit
(* This allows to execute some code and have all the resulting
* changes made to the internal (for recursion) counters instead
* of the external ones. For instance:
* inside f (fn () => call ~1 f)
* would decrement the count of recursive function calls of f *)
val inside : FLINT.lvar -> (unit -> 'a) -> 'a
(* mostly useful for PPFlint *)
val LVarString : FLINT.lvar -> string
end
structure Collect :> COLLECT =
struct
local
structure F = FLINT
structure M = Intmap
structure LV = LambdaVar
structure PP = PPFlint
in
val say = Control.Print.say
fun bug msg = ErrorMsg.impossible ("Collect: "^msg)
fun buglexp (msg,le) = (say "\n"; PP.printLexp le; say " "; bug msg)
fun bugval (msg,v) = (say "\n"; PP.printSval v; say " "; bug msg)
fun ASSERT (true,_) = ()
  | ASSERT (false,msg) = bug ("assertion "^msg^" failed")
datatype info
(* for functions we keep track of calls and escaping uses
* and separately for external and internal (recursive) references *)
= Fun of {ecalls: int ref, euses: int ref,
inside: bool ref,
icalls: int ref, iuses: int ref}
| Var of int ref (* for other vars, a simple use count is kept *)
| Transfer of FLINT.lvar (* for vars who have been transfered *)
exception NotFound
val m : info M.intmap = M.new(128, NotFound)
(* map related helper functions *)
fun get lv = (M.map m lv)
(* handle x as NotFound =>
(say "\nCollect:get unknown var ";
PP.printSval (F.VAR lv);
say ". Assuming dead...";
raise x;
Var (ref 0)) *)
fun new true lv = M.add m (lv, Fun{ecalls=ref 0, euses=ref 0,
inside=ref false,
icalls=ref 0, iuses=ref 0})
| new false lv = M.add m (lv, Var(ref 0))
fun LVarString lv =
(LV.lvarName lv)^
((case get lv of
Var uses => "{"^(Int.toString (!uses))^"}"
| Fun {ecalls,euses,icalls,iuses,...} =>
concat
["{(", Int.toString (!ecalls), ",", Int.toString (!euses),
"),(", Int.toString (!icalls), ",", Int.toString (!iuses), ")}"]
| Transfer _ => "{-}")
handle NotFound => "{?}")
(* adds the counts of lv1 to those of lv2 *)
fun addto (lv1,lv2) =
let val info2 = get lv2
val info1 = get lv1
in case info1
of Var uses1 =>
(case info2
of Var uses2 => uses2 := !uses2 + !uses1
| Fun {euses=eu2,inside=i2,iuses=iu2,...} =>
if !i2 then iu2 := !iu2 + !uses1
else eu2 := !eu2 + !uses1
| Transfer _ => bugval("transfering to a Transfer", F.VAR lv2))
| Fun {inside=i1,euses=eu1,iuses=iu1,ecalls=ec1,icalls=ic1,...} =>
(ASSERT(!iu1 + !ic1 = 0 andalso not(!i1), "improper fun transfer");
case info2
of Fun {inside=i2,euses=eu2,iuses=iu2,ecalls=ec2,icalls=ic2,...} =>
if !i2 then (iu2 := !iu2 + !eu1; ic2 := !ic2 + !ec1)
else (eu2 := !eu2 + !eu1; ec2 := !ec2 + !ec1)
| Var uses => uses := !uses + !eu1
| Transfer _ => bugval("transfering to a Transfer", F.VAR lv2))
| Transfer _ => bugval("transfering from a Transfer", F.VAR lv1)
end
fun transfer (lv1,lv2) =
(addto(lv1, lv2);
M.add m (lv1, Transfer lv2)) (* note the transfer *)
fun inc ri = (ri := !ri + 1)
fun dec ri = (ri := !ri - 1)
fun use call lv =
case get lv
of Var uses => inc uses
| (Fun {inside=ref true, iuses=uses,icalls=calls,...} |
Fun {inside=ref false,euses=uses,ecalls=calls,...} ) =>
(if call then inc calls else (); inc uses)
| Transfer lv => use call lv
fun unuse undertaker call lv =
let fun check uses =
if !uses < 0 then
bugval("decrementing too much", F.VAR lv)
else if !uses = 0 then
undertaker lv
else ()
in case get lv
of Var uses => (dec uses; check uses)
| Fun {inside=ref false,euses=uses,ecalls=calls,...} =>
(dec uses; if call then dec calls else ASSERT(!uses >= !calls, "unknown sanity"); check uses)
| Fun {inside=ref true, iuses=uses,icalls=calls,...} =>
(dec uses; if call then dec calls else ASSERT(!uses >= !calls, "unknown rec-sanity"))
| Transfer lv => unuse undertaker call lv
end
fun insidep lv =
case get lv
of Fun{inside=ref x,...} => x
| Var us => false
| Transfer lv => (say "\nCollect insidep on transfer"; insidep lv)
(* move internal counts to external *)
fun extcounts lv =
case get lv
of Fun{iuses,euses,icalls,ecalls,...}
=> (euses := !euses + !iuses; iuses := 0;
ecalls := !ecalls + !icalls; icalls := 0)
| Var us => ()
| Transfer lv => (say "\nCollect extcounts on transfer"; extcounts lv)
fun usenb lv = case get lv of (Fun{euses=uses,...} | Var uses) => !uses
| Transfer _ => 0
fun used lv = usenb lv > 0
fun recursive lv = case get lv of (Fun{iuses=uses,...} | Var uses) => !uses > 0
| Transfer lv => (say "\nCollect:recursive on transfer"; recursive lv)
(* fun callnb lv = case get lv of Fun{ecalls,...} => !ecalls | Var us => !us *)
fun escaping lv =
case get lv
of Fun{iuses,euses,icalls,ecalls,...}
=> !euses + !iuses > !ecalls + !icalls
| Var us => !us > 0 (* arbitrary, but I opted for the "safe" choice *)
| Transfer lv => (say "\nCollect escaping on transfer"; escaping lv)
fun called lv =
case get lv
of Fun{icalls,ecalls,...} => !ecalls + !icalls > 0
| Var us => false (* arbitrary, but consistent with escaping *)
| Transfer lv => (say "\nCollect escaping on transfer"; called lv)
(* census of the internal part *)
fun inside f thunk =
case get f
of Fun{inside=inside as ref false,...} =>
(inside := true; thunk() before inside := false)
| Fun _ => (say "\nalready inside "; PP.printSval(F.VAR f); thunk())
| _ => bugval("trying to get inside a non-function", F.VAR f)
(* Ideally, we should check that usenb = 1, but we may have been a bit
 * conservative when keeping the counts up to date *)
fun kill lv = (ASSERT(usenb lv >= 1, concat ["usenb lv >= 1 ", !PP.LVarString lv]); M.rmv m lv)
fun census new use = let
(* val use = if inc then use else unuse *)
fun call lv = use true lv
val use = fn F.VAR lv => use false lv | _ => ()
val newv = new false
val newf = new true
fun id x = x
fun impurePO po = true (* if a PrimOP is pure or not *)
(* here, the use resembles a call, but it's safer to consider it as a use *)
fun cpo (NONE:F.dict option,po,lty,tycs) = ()
| cpo (SOME{default,table},po,lty,tycs) =
(use (F.VAR default); app (use o F.VAR o #2) table)
fun cdcon (s,Access.EXN(Access.LVAR lv),lty) = use (F.VAR lv)
| cdcon _ = ()
(* the actual function:
* `uvs' is an optional list of booleans representing which of
* the return values are actually used *)
fun cexp uvs lexp =
case lexp
of F.RET vs => app use vs
(* (case uvs *)
(* of SOME uvs => (* only count vals that are actually used *) *)
(* app (fn(v,uv)=>if uv then use v else ()) (ListPair.zip(vs,uvs)) *)
(* | NONE => app use vs) *)
| F.LET (lvs,le1,le2) =>
(app newv lvs; cexp uvs le2; cexp (SOME(map used lvs)) le1)
| F.FIX (fs,le) =>
let fun cfun ((_,f,args,body):F.fundec) = (* census of a fundec *)
(app (newv o #1) args; inside f (fn()=> cexp NONE body))
fun cfix fs = let (* census of a list of fundecs *)
val (ufs,nfs) = List.partition (used o #2) fs
in if List.null ufs then ()
else (app cfun ufs; cfix nfs)
end
in app (newf o #2) fs; cexp uvs le; cfix fs
end
| F.APP (F.VAR f,vs) =>
(call f; app use vs)
| F.TFN ((tf,args,body),le) =>
(newf tf; cexp uvs le;
if used tf then inside tf (fn()=> cexp NONE body) else ())
| F.TAPP (F.VAR tf,tycs) => call tf
| F.SWITCH (v,cs,arms,def) =>
(use v; Option.map (cexp uvs) def;
(* here we don't absolutely have to keep track of vars bound within
* each arm since these vars can't be eliminated anyway *)
app (fn (F.DATAcon(dc,_,lv),le) => (cdcon dc; newv lv; cexp uvs le)
| (_,le) => cexp uvs le)
arms)
| F.CON (dc,_,v,lv,le) =>
(cdcon dc; newv lv; cexp uvs le; if used lv then use v else ())
| F.RECORD (_,vs,lv,le) =>
(newv lv; cexp uvs le; if used lv then app use vs else ())
| F.SELECT (v,_,lv,le) =>
(newv lv; cexp uvs le; if used lv then use v else ())
| F.RAISE (v,_) => use v
| F.HANDLE (le,v) => (use v; cexp uvs le)
| F.BRANCH (po,vs,le1,le2) =>
(app use vs; cpo po; cexp uvs le1; cexp uvs le2)
| F.PRIMOP (po,vs,lv,le) =>
(newv lv; cexp uvs le;
if impurePO po orelse used lv then (cpo po; app use vs) else ())
| le => buglexp("unexpected lexp", le)
in
cexp
end
(* The code is almost the same for uncounting, except that calling
* undertaker should not be done for non-free variables. For that we
* artificially increase the usage count of each variable when it's defined
* (accomplished via the "def" calls)
* so that its counter never reaches 0 while processing its scope.
* Once its scope has been processed, we can completely get rid of
* the variable and corresponding info (after verifying that the count
* is indeed exactly 1 (accomplished by the "kill" calls) *)
fun unuselexp undertaker = let
(* val use = if inc then use else unuse *)
fun uncall lv = unuse undertaker true lv
val unuse = fn F.VAR lv => unuse undertaker false lv | _ => ()
val def = use false
fun id x = x
fun impurePO po = true (* if a PrimOP is pure or not *)
fun cpo (NONE:F.dict option,po,lty,tycs) = ()
| cpo (SOME{default,table},po,lty,tycs) =
(unuse(F.VAR default); app (unuse o F.VAR o #2) table)
fun cdcon (s,Access.EXN(Access.LVAR lv),lty) = unuse(F.VAR lv)
| cdcon _ = ()
fun cfun (f,args,body) = (* census of a fundec *)
(app def args;
inside f (fn()=> cexp body);
app kill args)
and cexp lexp =
case lexp
of F.RET vs => app unuse vs
| F.LET (lvs,le1,le2) =>
(app def lvs; cexp le2; cexp le1; app kill lvs)
| F.FIX (fs,le) =>
let val usedfs = (List.filter (used o #2) fs)
in app (def o #2) fs;
cexp le;
app (fn (_,lv,args,le) => cfun(lv, map #1 args, le)) usedfs;
app (kill o #2) fs
end
| F.APP (F.VAR f,vs) =>
(uncall f; app unuse vs)
| F.TFN ((tf,args,body),le) =>
(if used tf then inside tf (fn()=> cexp body) else ();
def tf; cexp le; kill tf)
| F.TAPP (F.VAR tf,tycs) => uncall tf
| F.SWITCH (v,cs,arms,default) =>
(unuse v; Option.map cexp default;
(* here we don't absolutely have to keep track of vars bound within
* each arm since these vars can't be eliminated anyway *)
app (fn (F.DATAcon(dc,_,lv),le) =>
(cdcon dc; def lv; cexp le; kill lv)
| (_,le) => cexp le)
arms)
| F.CON (dc,_,v,lv,le) =>
(cdcon dc; if used lv then unuse v else ();
def lv; cexp le; kill lv)
| F.RECORD (_,vs,lv,le) =>
(if used lv then app unuse vs else ();
def lv; cexp le; kill lv)
| F.SELECT (v,_,lv,le) =>
(if used lv then unuse v else ();
def lv; cexp le; kill lv)
| F.RAISE (v,_) => unuse v
| F.HANDLE (le,v) => (unuse v; cexp le)
| F.BRANCH (po,vs,le1,le2) =>
(app unuse vs; cpo po; cexp le1; cexp le2)
| F.PRIMOP (po,vs,lv,le) =>
(if impurePO po orelse used lv then (cpo po; app unuse vs) else ();
def lv; cexp le; kill lv)
| le => buglexp("unexpected lexp", le)
in
(cexp, cfun)
end
val uselexp = census new use NONE
fun collect (fdec as (_,f,_,_)) =
(M.clear m; (* start from a fresh state *)
uselexp (F.FIX([fdec], F.RET[F.VAR f])))
end
end
[email protected]
ViewVC Help
Powered by ViewVC 1.0.0
|
__label__pos
| 0.910072 |
5
$\begingroup$
Many elliptic-curve cryptosystems today use GF(p) or GF(2^m). What if, say, we use big floating-point numbers with the classical point addition formulas - is it possible to build a cryptosystem on that?
$\endgroup$
• $\begingroup$ Yes, but you'll lose a property: the security. The point addition formulas are the same: usually the graphical explanation is given on a curve over the real numbers. So you can imagine defining your cryptosystem as usual over the reals. But it won't be secure. $\endgroup$ – ddddavidee Aug 30 '17 at 11:39
• $\begingroup$ Never forget to ask yourself if what you're trying to do actually makes sense, or if it wouldn't be smarter to rely on existing well-vetted solutions. No matter how often I read your question, I fail to see why you would want to modify an existing cryptographic design in a way so that it uses floating point numbers. What are you hoping to gain by doing so? Which cryptographic problem(s) would such a design solve? $\endgroup$ – e-sushi Aug 30 '17 at 11:45
• $\begingroup$ @ddddavidee: I can foresee a feasibility problem, but why the security loss? I do not see that the discrete logarithm problem (or should we say point division) is trivial on the group defined by point addition on an elliptic curve on $\mathbb R^2$, especially if the coordinates of $k\times g$ are truncated to little more than necessary to uniquely define the secret integer $k$. If it is indeed a hard problem, and if it was possible to work around the serious issues of numerical stability and how wide the coordinates should be, I do not rule out that ECDH or ECIES could be made to work. $\endgroup$ – fgrieu Aug 30 '17 at 11:51
• $\begingroup$ @fgrieu: if you're doing ECC, it's important that properties such as $a(bG) = b(aG)$ are preserved. If you truncate the intermediate values $bG, aG$, does this still hold? $\endgroup$ – poncho Aug 30 '17 at 12:52
• $\begingroup$ To build on the comment of @Poncho I guess there is a rather fundamental issue. Modern crypto is performed over bits / bytes. These can be easily mapped to large integer values, for instance by simply interpreting them as an unsigned number. Although we can do the same for floating point numbers, any loss of precision will mean loss of data. Any scheme that uses floating point must be programmed in such a way that loss of data due to loss of precision isn't possible. Even if that is possible it would probably not be all that easy nor efficient. FP usually doesn't make sense in crypto. $\endgroup$ – Maarten Bodewes Aug 30 '17 at 13:02
$\begingroup$
Summary: ECC over real numbers can be made to work, including for toy-sized security parameters and real numbers arithmetic as directly supported by typical CPUs and spreadsheets. But much more precision would be needed for private keys large enough to hope for security. That would be inefficient if secure; and perhaps just insecure.
We can mathematically define the infinite group of points on an elliptic curve on the continuous plane, e.g. with Cartesian representation in $\mathbb R^2$ under point addition (with an additional neutral element). We can define scalar multiplication on that. For any integers $k_A$ and $k_B$ and any point $g$ on the curve, it mathematically holds that $k_A\times(k_B\times g)=k_B\times(k_A\times g)$, as used by ECDH and ECIES.
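For concreteness, here is a minimal Python sketch of that group law over $\mathbb R^2$; the curve $y^2=x^3-2x+2$ and base point $(0,\sqrt2)$ are arbitrary choices for illustration only:

```python
import math

A, B = -2.0, 2.0   # arbitrary example curve y^2 = x^3 + A*x + B over the reals

def add(P, Q):
    """Classical chord-and-tangent addition; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                          # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + A) / (2 * y1)   # tangent slope (doubling)
    else:
        lam = (y2 - y1) / (x2 - x1)          # chord slope
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

def mul(k, P):
    """Scalar multiplication by repeated addition (fine for tiny k)."""
    R = None
    for _ in range(k):
        R = add(R, P)
    return R

G = (0.0, math.sqrt(B))                      # a point on the curve (x = 0, y = sqrt(B))
P6a = mul(2, mul(3, G))
P6b = mul(3, mul(2, G))
# mathematically equal; with floats, equal only up to round-off
assert all(abs(p - q) < 1e-6 for p, q in zip(P6a, P6b))
```

For such tiny scalars the two results agree to within round-off; for scalars of cryptographic size the round-off discrepancy grows until the two sides no longer agree at all.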
However there is at least one huge problem facing a cryptosystem based on that line of thought: when using Floating Point to represent reals in $\mathbb R$ that are coordinates on the curve, $k_A\times(k_B\times g)=k_B\times(k_A\times g)$ no longer exactly holds due to round-off error, and that worsens with larger $k_A$ and $k_B$ for reasons of numerical stability.
The underlying problems are that
1. When using FP arithmetic, each point addition on the curve involves (among other operations) addition, and the FP version of that is not associative; otherwise said, there are exceptions, occasionally quite notable, to $(x+y)+z=x+(y+z)$.
2. Independent of FP arithmetic, the effect of a small variation of $g$ on $k\times g$ grows with $k$ (somewhat like the effect of a small variation of $x\in\mathbb R$ on $x^k$ grows about linearly with $k$).
With the $k_j$ in the hundreds of bits, there are many hundreds of chained point additions in the computations of $k_A\times(k_B\times g)$ and $k_B\times(k_A\times g)$, so that discrepancies between the two values at each step (due to 1) will have many chances to appear and grow (due to 2) to the point where the results are extremely different (much like for evaluation of the logistic map in the chaotic region for a large number of iterations).
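Point 1 is easy to exhibit with ordinary IEEE 754 binary64 arithmetic, e.g. in Python:

```python
a = (0.1 + 0.2) + 0.3   # = 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # = 0.6
assert a != b           # floating-point addition is not associative
```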
Increasing FP precision will help, but it is hard to tell exactly how precise a FP representation of coordinates on the curve we shall use for $g$, and intermediary values including $k_A\times g$ and $k_B\times g$, so that $k_A\times(k_B\times g)$ and $k_B\times(k_A\times g)$ coincide well-enough to derive a shared secret.
Note: the practical issue of defining that well-enough and turning it into a consistent shared secret is non-trivial, but it can be solved by rounding to agreed-upon number of bits, combined with appropriate error correction like: detecting disagreement and, should that occur, retrying or trying nearby values; sending a small public additive correction, or other form of Forward Error Correction.
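As a toy illustration of that rounding step (the 16 fractional bits and the coordinate values below are arbitrary):

```python
def to_key(coord, bits=16):
    # round a shared real coordinate to an agreed number of fractional bits
    return round(coord * (1 << bits))

a = 0.1234567            # one party's computed coordinate
b = a + 1e-9             # the other's, off by a little accumulated round-off
c = a + 1e-4             # a discrepancy too large for this precision
assert to_key(a) == to_key(b)   # small error rounds away: shared key agrees
assert to_key(a) != to_key(c)   # large error survives rounding: retry/correct
```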
Floating Point arithmetic with direct hardware support on modern CPUs (that is typically at most 80-bit with 64-bit mantissa, or 64-bit with 53-bit mantissa) will not be precise enough that results coincide well-enough for realistic $k_j$; but it can be made to work most often for $k_j$ and shared key of a few bits. Making large shared keys from small ones is easy (just make multiple passes and concatenate); but I doubt we could make large private/public key pairs from small ones.
Arbitrary-precision FP arithmetic is possible, but inefficient. Typically it uses arbitrary-precision integer arithmetic, so we would be back to this.
Note: there are other serious numerical issues, including the risk of overflow, and how to extract the shared key from potentially very large coordinates.
I do not know if there is an additional issue with security.
The problem of solving for integer $k$ given $k\times g$ and $g$ (which breaks ECDH) might not be as hard with coordinates of givens in $\mathbb R$ (as approximated by FP arithmetic) as it is for coordinates in $GF(p)$ or $GF(2^m)$. As an illustration that it can't be ruled out summarily, solving for $k$ given $x^k$ and $x$ is believed hard with givens in $GF(p)$ for appropriate choice of $p$, but is easy with givens in $\mathbb R$ (as approximated by FP arithmetic, and assuming no overflow) because $k=\log(x^k)/\log(x)$ (computed with rounding to the nearest for FP).
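That last computation is easy to check numerically (base and exponent below are arbitrary):

```python
import math

x, k = 1.7, 37
y = x ** k                         # the "public" value
recovered = math.log(y) / math.log(x)
assert round(recovered) == k       # the real-number DLP falls to a logarithm
```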
Also, there are conflicting interactions between the precision used and security:
• Too low a precision on $g$ or/and $k\times g$ restricts the size of $k$ for a unique solution $k$, and short $k$ makes finding $k$ easy; there's no solution to that in sight beyond increasing precision.
• Overly increasing precision for $k\times g$ (for constant range of $k$) could conceivably make the problem of finding $k$ easier, as we are getting more information about $k$ from low-order bits/digits of $k\times g$.
$\endgroup$
• $\begingroup$ Do the ‘$\mathbb{FP}$-rational points of $E/\mathbb{R}$’ even form a group, where $\mathbb{FP}$ is the set of floating-point numbers? I can't imagine they do. A priori I would expect the coordinates of $[n]P$ to overflow on most curves $E$ for practical values of $n$. $\endgroup$ – Squeamish Ossifrage Aug 30 '17 at 14:06
• $\begingroup$ When I talked about real number CSPRNGs in crypto.stackexchange.com/questions/46910/…, I got backlash suggesting that rounding errors are no longer an issue with modern CPUs. I'm with you but... $\endgroup$ – Paul Uszak Aug 30 '17 at 14:07
• $\begingroup$ ‘Discrepancies in hardware’ are practically nonexistent today, unless you're trying to compute on a VAX or something similarly esoteric. With IEEE 754, for fixed format such as binary64 (double-precision) as long as you prescribe the precise set of basic floating-point operations to perform a computation (add, sub, mul, div—and a few others, but that's all you need for rational functions), and you don't change the rounding mode or other things from their default, you will get bit-for-bit identical results on practically all computers today. $\endgroup$ – Squeamish Ossifrage Aug 30 '17 at 14:10
• $\begingroup$ @PaulUszak: Rounding errors in basic operations are ‘not an issue’ not in the sense that they don't exist, but rather, they are precisely specified by IEEE 754 and implemented identically on practically all ordinary CPUs. (By ‘basic operations’, I mean the ones specified in §5, including add, sub, mul, div, sqrt, etc., but not the ones in §9, such as the transcendental functions. Of course, you can always prescribe a specific rational approximation to a transcendental function to get bit-for-bit identical results.) $\endgroup$ – Squeamish Ossifrage Aug 30 '17 at 14:16
• $\begingroup$ @PaulUszak: I fully accept that discrepancies in hardware can be avoided, perhaps because modern CPUs agree on FP for basic operations (at least for 32-bit, 64-bit and 80-bit FP when available, and addition subtraction; though I would not bet the house on the last digit for division). Nevertheless it remains that rounding errors make addition non-associative, and that will grow into discrepancies between what the two sides compute in ECDH: $k_A\times(k_B\times g)$ on one side, and $k_B\times(k_A\times g)$ on the other. $\endgroup$ – fgrieu Aug 30 '17 at 14:38
$\begingroup$
It is difficult to define cryptography over any set that carries some kind of non-trivial metric.
The reason is simply that with a metric you can easily decide whether you are near a solution to any equation. You can then perform Newton iteration to approximate your solution up to any required accuracy.
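For instance, a Newton iteration on $f(n)=g^n-y$ recovers the exponent; the base, exponent and starting point below are arbitrary (the start is an overestimate so that $g^n$ never overflows on the way down):

```python
import math

g, y = 3.0, 3.0 ** 12                        # solve g**n = y for n
n = 40.0                                     # overestimate of the solution
for _ in range(100):
    n -= (g**n - y) / (g**n * math.log(g))   # Newton step, f'(n) = g**n * ln(g)
assert abs(n - 12.0) < 1e-9
```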
For instance, solving the "discrete logarithm problem" $y=g^n$ over the reals is simple.
This fact makes cryptosystems hard to define over the real numbers, complex numbers, p-adic numbers, quaternions over the real numbers, etc.
$\endgroup$
• $\begingroup$ What is the metric on the subgroup generated by a base point that corresponds to the scalars? $\endgroup$ – Squeamish Ossifrage Aug 31 '17 at 5:46
• $\begingroup$ @SqueamishOssifrage good point. One first has to define a kind of continuous scalar multiplication with scalars from the reals. I am not sure if this works out. $\endgroup$ – user27950 Aug 31 '17 at 6:12
• $\begingroup$ Why does one have to do that? The question was about curves over floating-point numbers (which would presumably mean floating-point approximations to curves over (field extensions of) $\mathbb{Q}$), not scalars that are floating-point numbers. Computing $a$ from $g$ and $g^a$ for $g \in \mathbb{R} \setminus \{0\}$ and $a \in \mathbb{Z}$ is relatively easy (assuming the exponentiation is representable!), but it's not clear that computing $n$ from $P$ and $[n]P$ for $P \in E(k)$ and $n \in \mathbb{Z}$ is easy for arbitrary curves $E/\mathbb{Q}$ and field extensions $k$ of $\mathbb{Q}$. $\endgroup$ – Squeamish Ossifrage Aug 31 '17 at 6:24
• $\begingroup$ Assume, one can extend scalar multiplication of EC over the reals by having the scalar also from the reals and this multiplication is continuous. Assume also that one has a good numerical approximation to calculate the multiplication $P = x G$ by some closed formula. Then one could solve for x by numerical means. $\endgroup$ – user27950 Aug 31 '17 at 6:53
• $\begingroup$ What part of the question suggests extending the concept of scalar multiplication to non-integer scalars? $\endgroup$ – Squeamish Ossifrage Aug 31 '17 at 13:56
/*-------------------------------------------------------------------------
* OpenGL Conformance Test Suite
* -----------------------------
*
* Copyright (c) 2014-2016 The Khronos Group Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/ /*!
* \file
* \brief
*/ /*-------------------------------------------------------------------*/
#include "gl4cComputeShaderTests.hpp"
#include "glwEnums.hpp"
#include "glwFunctions.hpp"
#include "tcuMatrix.hpp"
#include "tcuMatrixUtil.hpp"
#include "tcuRenderTarget.hpp"
#include <cmath>
#include <cstdarg>
#include <sstream>
namespace gl4cts
{
using namespace glw;
using tcu::Vec2;
using tcu::Vec3;
using tcu::Vec4;
using tcu::UVec4;
using tcu::UVec3;
using tcu::Mat4;
namespace
{
typedef Vec2 vec2;
typedef Vec3 vec3;
typedef Vec4 vec4;
typedef UVec3 uvec3;
typedef UVec4 uvec4;
typedef Mat4 mat4;
const char* const kGLSLVer = "#version 430 core\n";
class ComputeShaderBase : public deqp::SubcaseBase
{
public:
virtual ~ComputeShaderBase()
{
}
ComputeShaderBase()
: renderTarget(m_context.getRenderContext().getRenderTarget()), pixelFormat(renderTarget.getPixelFormat())
{
float epsilon_zero = 1.f / (1 << 13);
if (pixelFormat.redBits != 0 && pixelFormat.greenBits != 0 && pixelFormat.blueBits != 0 &&
pixelFormat.alphaBits != 0)
{
g_color_eps = vec4(1.f / ((float)(1 << pixelFormat.redBits) - 1.0f),
1.f / ((float)(1 << pixelFormat.greenBits) - 1.0f),
1.f / ((float)(1 << pixelFormat.blueBits) - 1.0f),
1.f / ((float)(1 << pixelFormat.alphaBits) - 1.0f)) +
vec4(epsilon_zero);
}
else if (pixelFormat.redBits != 0 && pixelFormat.greenBits != 0 && pixelFormat.blueBits != 0)
{
g_color_eps = vec4(1.f / ((float)(1 << pixelFormat.redBits) - 1.0f),
1.f / ((float)(1 << pixelFormat.greenBits) - 1.0f),
1.f / ((float)(1 << pixelFormat.blueBits) - 1.0f), 1.f) +
vec4(epsilon_zero);
}
else
{
g_color_eps = vec4(epsilon_zero);
}
}
const tcu::RenderTarget& renderTarget;
const tcu::PixelFormat& pixelFormat;
vec4 g_color_eps;
uvec3 IndexTo3DCoord(GLuint idx, GLuint max_x, GLuint max_y)
{
const GLuint x = idx % max_x;
idx /= max_x;
const GLuint y = idx % max_y;
idx /= max_y;
const GLuint z = idx;
return uvec3(x, y, z);
}
bool CheckProgram(GLuint program, bool* compile_error = NULL)
{
GLint compile_status = GL_TRUE;
GLint status = GL_TRUE;
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status == GL_FALSE)
{
GLint attached_shaders;
glGetProgramiv(program, GL_ATTACHED_SHADERS, &attached_shaders);
if (attached_shaders > 0)
{
std::vector<GLuint> shaders(attached_shaders);
glGetAttachedShaders(program, attached_shaders, NULL, &shaders[0]);
for (GLint i = 0; i < attached_shaders; ++i)
{
GLenum type;
glGetShaderiv(shaders[i], GL_SHADER_TYPE, reinterpret_cast<GLint*>(&type));
switch (type)
{
case GL_VERTEX_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Vertex Shader ***" << tcu::TestLog::EndMessage;
break;
case GL_TESS_CONTROL_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Tessellation Control Shader ***"
<< tcu::TestLog::EndMessage;
break;
case GL_TESS_EVALUATION_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Tessellation Evaluation Shader ***"
<< tcu::TestLog::EndMessage;
break;
case GL_GEOMETRY_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Geometry Shader ***" << tcu::TestLog::EndMessage;
break;
case GL_FRAGMENT_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Fragment Shader ***" << tcu::TestLog::EndMessage;
break;
case GL_COMPUTE_SHADER:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Compute Shader ***" << tcu::TestLog::EndMessage;
break;
default:
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "*** Unknown Shader ***" << tcu::TestLog::EndMessage;
break;
}
GLint res;
glGetShaderiv(shaders[i], GL_COMPILE_STATUS, &res);
if (res != GL_TRUE)
compile_status = res;
GLint length;
glGetShaderiv(shaders[i], GL_SHADER_SOURCE_LENGTH, &length);
if (length > 0)
{
std::vector<GLchar> source(length);
glGetShaderSource(shaders[i], length, NULL, &source[0]);
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << &source[0] << tcu::TestLog::EndMessage;
}
glGetShaderiv(shaders[i], GL_INFO_LOG_LENGTH, &length);
if (length > 0)
{
std::vector<GLchar> log(length);
glGetShaderInfoLog(shaders[i], length, NULL, &log[0]);
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << &log[0] << tcu::TestLog::EndMessage;
}
}
}
GLint length;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length);
if (length > 0)
{
std::vector<GLchar> log(length);
glGetProgramInfoLog(program, length, NULL, &log[0]);
m_context.getTestContext().getLog() << tcu::TestLog::Message << &log[0] << tcu::TestLog::EndMessage;
}
}
if (compile_error)
*compile_error = (compile_status == GL_TRUE ? false : true);
if (compile_status != GL_TRUE)
return false;
return status == GL_TRUE ? true : false;
}
GLuint CreateComputeProgram(const std::string& cs)
{
const GLuint p = glCreateProgram();
if (!cs.empty())
{
const GLuint sh = glCreateShader(GL_COMPUTE_SHADER);
glAttachShader(p, sh);
glDeleteShader(sh);
const char* const src[2] = { kGLSLVer, cs.c_str() };
glShaderSource(sh, 2, src, NULL);
glCompileShader(sh);
}
return p;
}
GLuint CreateProgram(const std::string& vs, const std::string& fs)
{
const GLuint p = glCreateProgram();
if (!vs.empty())
{
const GLuint sh = glCreateShader(GL_VERTEX_SHADER);
glAttachShader(p, sh);
glDeleteShader(sh);
const char* const src[2] = { kGLSLVer, vs.c_str() };
glShaderSource(sh, 2, src, NULL);
glCompileShader(sh);
}
if (!fs.empty())
{
const GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
glAttachShader(p, sh);
glDeleteShader(sh);
const char* const src[2] = { kGLSLVer, fs.c_str() };
glShaderSource(sh, 2, src, NULL);
glCompileShader(sh);
}
return p;
}
GLuint BuildShaderProgram(GLenum type, const std::string& source)
{
const char* const src[2] = { kGLSLVer, source.c_str() };
return glCreateShaderProgramv(type, 2, src);
}
GLfloat distance(GLfloat p0, GLfloat p1)
{
return de::abs(p0 - p1);
}
inline bool ColorEqual(const vec4& c0, const vec4& c1, const vec4& epsilon)
{
if (distance(c0.x(), c1.x()) > epsilon.x())
return false;
if (distance(c0.y(), c1.y()) > epsilon.y())
return false;
if (distance(c0.z(), c1.z()) > epsilon.z())
return false;
if (distance(c0.w(), c1.w()) > epsilon.w())
return false;
return true;
}
inline bool ColorEqual(const vec3& c0, const vec3& c1, const vec4& epsilon)
{
if (distance(c0.x(), c1.x()) > epsilon.x())
return false;
if (distance(c0.y(), c1.y()) > epsilon.y())
return false;
if (distance(c0.z(), c1.z()) > epsilon.z())
return false;
return true;
}
bool ValidateReadBuffer(int x, int y, int w, int h, const vec4& expected)
{
std::vector<vec4> display(w * h);
glReadPixels(x, y, w, h, GL_RGBA, GL_FLOAT, &display[0]);
for (int j = 0; j < h; ++j)
{
for (int i = 0; i < w; ++i)
{
if (!ColorEqual(display[j * w + i], expected, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Color at (" << (x + i) << ", " << (y + j) << ") is ["
<< display[j * w + i].x() << ", " << display[j * w + i].y() << ", " << display[j * w + i].z()
<< ", " << display[j * w + i].w() << "] should be [" << expected.x() << ", " << expected.y()
<< ", " << expected.z() << ", " << expected.w() << "]." << tcu::TestLog::EndMessage;
return false;
}
}
}
return true;
}
bool ValidateReadBufferCenteredQuad(int width, int height, const vec3& expected)
{
bool result = true;
std::vector<vec3> fb(width * height);
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, &fb[0]);
int startx = int(((float)width * 0.1f) + 1);
int starty = int(((float)height * 0.1f) + 1);
int endx = int((float)width - 2 * (((float)width * 0.1f) + 1) - 1);
int endy = int((float)height - 2 * (((float)height * 0.1f) + 1) - 1);
for (int y = starty; y < endy; ++y)
{
for (int x = startx; x < endx; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], expected, g_color_eps))
{
return false;
}
}
}
if (!ColorEqual(fb[2 * width + 2], vec3(0), g_color_eps))
{
result = false;
}
if (!ColorEqual(fb[2 * width + (width - 3)], vec3(0), g_color_eps))
{
result = false;
}
if (!ColorEqual(fb[(height - 3) * width + (width - 3)], vec3(0), g_color_eps))
{
result = false;
}
if (!ColorEqual(fb[(height - 3) * width + 2], vec3(0), g_color_eps))
{
result = false;
}
return result;
}
int getWindowWidth()
{
return renderTarget.getWidth();
}
int getWindowHeight()
{
return renderTarget.getHeight();
}
bool ValidateWindow4Quads(const vec3& lb, const vec3& rb, const vec3& rt, const vec3& lt)
{
int width = 100;
int height = 100;
std::vector<vec3> fb(width * height);
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, &fb[0]);
bool status = true;
// left-bottom quad
for (int y = 10; y < height / 2 - 10; ++y)
{
for (int x = 10; x < width / 2 - 10; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], lb, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "First bad color (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
// right-bottom quad
for (int y = 10; y < height / 2 - 10; ++y)
{
for (int x = width / 2 + 10; x < width - 10; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], rb, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Bad color at (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
// right-top quad
for (int y = height / 2 + 10; y < height - 10; ++y)
{
for (int x = width / 2 + 10; x < width - 10; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], rt, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Bad color at (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
// left-top quad
for (int y = height / 2 + 10; y < height - 10; ++y)
{
for (int x = 10; x < width / 2 - 10; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], lt, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Bad color at (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
// middle horizontal line should be black
for (int y = height / 2 - 2; y < height / 2 + 2; ++y)
{
for (int x = 0; x < width; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], vec3(0), g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Bad color at (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
// middle vertical line should be black
for (int y = 0; y < height; ++y)
{
for (int x = width / 2 - 2; x < width / 2 + 2; ++x)
{
const int idx = y * width + x;
if (!ColorEqual(fb[idx], vec3(0), g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Bad color at (" << x << ", " << y << "): " << fb[idx].x() << " "
<< fb[idx].y() << " " << fb[idx].z() << tcu::TestLog::EndMessage;
status = false;
}
}
}
return status;
}
bool IsEqual(vec4 a, vec4 b)
{
return (a.x() == b.x()) && (a.y() == b.y()) && (a.z() == b.z()) && (a.w() == b.w());
}
bool IsEqual(uvec4 a, uvec4 b)
{
return (a.x() == b.x()) && (a.y() == b.y()) && (a.z() == b.z()) && (a.w() == b.w());
}
};
class SimpleCompute : public ComputeShaderBase
{
virtual std::string Title()
{
return "Simplest possible Compute Shader";
}
virtual std::string Purpose()
{
return "1. Verify that CS can be created, compiled and linked.\n"
"2. Verify that local work size can be queried with GetProgramiv command.\n"
"3. Verify that CS can be dispatched with DispatchCompute command.\n"
"4. Verify that CS can write to SSBO.";
}
virtual std::string Method()
{
return "Create and dispatch CS. Verify SSBO content.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_buffer;
virtual long Setup()
{
const char* const glsl_cs =
NL "layout(local_size_x = 1, local_size_y = 1) in;" NL "layout(std430) buffer Output {" NL " vec4 data;" NL
"} g_out;" NL "void main() {" NL " g_out.data = vec4(1.0, 2.0, 3.0, 4.0);" NL "}";
m_program = CreateComputeProgram(glsl_cs);
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return ERROR;
GLint v[3];
glGetProgramiv(m_program, GL_COMPUTE_WORK_GROUP_SIZE, v);
if (v[0] != 1 || v[1] != 1 || v[2] != 1)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Got " << v[0] << ", " << v[1] << ", " << v[2]
<< ", expected: 1, 1, 1 in GL_COMPUTE_WORK_GROUP_SIZE check" << tcu::TestLog::EndMessage;
return ERROR;
}
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4), NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
return NO_ERROR;
}
virtual long Run()
{
glUseProgram(m_program);
glDispatchCompute(1, 1, 1);
vec4* data;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
data = static_cast<vec4*>(glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, sizeof(vec4), GL_MAP_READ_BIT));
long error = NO_ERROR;
if (!IsEqual(data[0], vec4(1.0f, 2.0f, 3.0f, 4.0f)))
{
error = ERROR;
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
return error;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_buffer);
return NO_ERROR;
}
};
class BasicOneWorkGroup : public ComputeShaderBase
{
virtual std::string Title()
{
return "One work group with various local sizes";
}
virtual std::string Purpose()
{
return NL "1. Verify that declared local work size has correct effect." NL
"2. Verify that the number of shader invocations is correct." NL
"3. Verify that the built-in variables: gl_WorkGroupSize, gl_WorkGroupID, gl_GlobalInvocationID," NL
" gl_LocalInvocationID and gl_LocalInvocationIndex have correct values." NL
"4. Verify that DispatchCompute and DispatchComputeIndirect commands work as expected.";
}
virtual std::string Method()
{
return NL "1. Create several CS with various local sizes." NL
"2. Dispatch each CS with DispatchCompute and DispatchComputeIndirect commands." NL
"3. Verify SSBO content.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
GLuint m_dispatch_buffer;
std::string GenSource(int x, int y, int z, GLuint binding)
{
std::stringstream ss;
ss << NL "layout(local_size_x = " << x << ", local_size_y = " << y << ", local_size_z = " << z
<< ") in;" NL "layout(std430, binding = " << binding
<< ") buffer Output {" NL " uvec4 local_id[];" NL "} g_out;" NL "void main() {" NL
" if (gl_WorkGroupSize == uvec3("
<< x << ", " << y << ", " << z
<< ") && gl_WorkGroupID == uvec3(0) &&" NL " gl_GlobalInvocationID == gl_LocalInvocationID) {" NL
" g_out.local_id[gl_LocalInvocationIndex] = uvec4(gl_LocalInvocationID, 0);" NL " } else {" NL
" g_out.local_id[gl_LocalInvocationIndex] = uvec4(0xffff);" NL " }" NL "}";
return ss.str();
}
bool RunIteration(int local_size_x, int local_size_y, int local_size_z, GLuint binding, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size_x, local_size_y, local_size_z, binding));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
GLint v[3];
glGetProgramiv(m_program, GL_COMPUTE_WORK_GROUP_SIZE, v);
if (v[0] != local_size_x || v[1] != local_size_y || v[2] != local_size_z)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "GL_COMPUTE_WORK_GROUP_SIZE is (" << v[0] << " " << v[1] << " " << v[2]
<< ") should be (" << local_size_x << " " << local_size_y << " " << local_size_z << ")"
<< tcu::TestLog::EndMessage;
return false;
}
const int kSize = local_size_x * local_size_y * local_size_z;
if (m_storage_buffer == 0)
glGenBuffers(1, &m_storage_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, binding, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(uvec4) * kSize, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
glUseProgram(m_program);
if (dispatch_indirect)
{
const GLuint num_groups[3] = { 1, 1, 1 };
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), num_groups, GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(1, 1, 1);
}
uvec4* data;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
data =
static_cast<uvec4*>(glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, kSize * sizeof(uvec4), GL_MAP_READ_BIT));
bool ret = true;
for (int z = 0; z < local_size_z; ++z)
{
for (int y = 0; y < local_size_y; ++y)
{
for (int x = 0; x < local_size_x; ++x)
{
const int index = z * local_size_x * local_size_y + y * local_size_x + x;
if (!IsEqual(data[index], uvec4(x, y, z, 0)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Invalid data at offset " << index << tcu::TestLog::EndMessage;
ret = false;
}
}
}
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
return ret;
}
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
m_dispatch_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
if (!RunIteration(16, 1, 1, 0, true))
return ERROR;
if (!RunIteration(8, 8, 1, 1, false))
return ERROR;
if (!RunIteration(4, 4, 4, 2, true))
return ERROR;
if (!RunIteration(1, 2, 3, 3, false))
return ERROR;
if (!RunIteration(1024, 1, 1, 3, true))
return ERROR;
if (!RunIteration(16, 8, 8, 3, false))
return ERROR;
if (!RunIteration(32, 1, 32, 7, true))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_storage_buffer);
glDeleteBuffers(1, &m_dispatch_buffer);
return NO_ERROR;
}
};
class BasicResourceUBO : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - UBOs";
}
virtual std::string Purpose()
{
return "Verify that CS is able to read data from UBOs and write it to SSBO.";
}
virtual std::string Method()
{
return NL "1. Create CS which uses array of UBOs." NL
"2. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL
"3. Read data from each UBO and write it to SSBO." NL "4. Verify SSBO content." NL
"5. Repeat for different buffer and CS work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
GLuint m_uniform_buffer[12];
GLuint m_dispatch_buffer;
std::string GenSource(const uvec3& local_size, const uvec3& num_groups)
{
const uvec3 global_size = local_size * num_groups;
std::stringstream ss;
ss << NL "layout(local_size_x = " << local_size.x() << ", local_size_y = " << local_size.y()
<< ", local_size_z = " << local_size.z() << ") in;" NL "const uvec3 kGlobalSize = uvec3(" << global_size.x()
<< ", " << global_size.y() << ", " << global_size.z()
<< ");" NL "layout(std140) uniform InputBuffer {" NL " vec4 data["
<< global_size.x() * global_size.y() * global_size.z()
<< "];" NL "} g_in_buffer[12];" NL "layout(std430) buffer OutputBuffer {" NL " vec4 data0["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data1["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data2["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data3["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data4["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data5["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data6["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data7["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data8["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data9["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data10["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data11["
<< global_size.x() * global_size.y() * global_size.z()
<< "];" NL "} g_out_buffer;" NL "void main() {" NL " const uint global_index = gl_GlobalInvocationID.x +" NL
" gl_GlobalInvocationID.y * kGlobalSize.x +" NL
" gl_GlobalInvocationID.z * kGlobalSize.x * kGlobalSize.y;" NL
" g_out_buffer.data0[global_index] = g_in_buffer[0].data[global_index];" NL
" g_out_buffer.data1[global_index] = g_in_buffer[1].data[global_index];" NL
" g_out_buffer.data2[global_index] = g_in_buffer[2].data[global_index];" NL
" g_out_buffer.data3[global_index] = g_in_buffer[3].data[global_index];" NL
" g_out_buffer.data4[global_index] = g_in_buffer[4].data[global_index];" NL
" g_out_buffer.data5[global_index] = g_in_buffer[5].data[global_index];" NL
" g_out_buffer.data6[global_index] = g_in_buffer[6].data[global_index];" NL
" g_out_buffer.data7[global_index] = g_in_buffer[7].data[global_index];" NL
" g_out_buffer.data8[global_index] = g_in_buffer[8].data[global_index];" NL
" g_out_buffer.data9[global_index] = g_in_buffer[9].data[global_index];" NL
" g_out_buffer.data10[global_index] = g_in_buffer[10].data[global_index];" NL
" g_out_buffer.data11[global_index] = g_in_buffer[11].data[global_index];" NL "}";
return ss.str();
}
bool RunIteration(const uvec3& local_size, const uvec3& num_groups, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size, num_groups));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
for (GLuint i = 0; i < 12; ++i)
{
char name[32];
sprintf(name, "InputBuffer[%u]", i);
const GLuint index = glGetUniformBlockIndex(m_program, name);
glUniformBlockBinding(m_program, index, i);
GLint p = 0;
glGetActiveUniformBlockiv(m_program, index, GL_UNIFORM_BLOCK_REFERENCED_BY_COMPUTE_SHADER, &p);
if (p == GL_FALSE)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "UNIFORM_BLOCK_REFERENCED_BY_COMPUTE_SHADER should be TRUE."
<< tcu::TestLog::EndMessage;
return false;
}
}
const GLuint kBufferSize =
local_size.x() * num_groups.x() * local_size.y() * num_groups.y() * local_size.z() * num_groups.z();
if (m_storage_buffer == 0)
glGenBuffers(1, &m_storage_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * kBufferSize * 12, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
if (m_uniform_buffer[0] == 0)
glGenBuffers(12, m_uniform_buffer);
for (GLuint i = 0; i < 12; ++i)
{
std::vector<vec4> data(kBufferSize);
for (GLuint j = 0; j < kBufferSize; ++j)
{
data[j] = vec4(static_cast<float>(i * kBufferSize + j));
}
glBindBufferBase(GL_UNIFORM_BUFFER, i, m_uniform_buffer[i]);
glBufferData(GL_UNIFORM_BUFFER, sizeof(vec4) * kBufferSize, &data[0], GL_DYNAMIC_DRAW);
}
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glUseProgram(m_program);
if (dispatch_indirect)
{
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), &num_groups[0], GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(num_groups.x(), num_groups.y(), num_groups.z());
}
std::vector<vec4> data(kBufferSize * 12);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(vec4) * kBufferSize * 12, &data[0]);
for (GLuint z = 0; z < local_size.z() * num_groups.z(); ++z)
{
for (GLuint y = 0; y < local_size.y() * num_groups.y(); ++y)
{
for (GLuint x = 0; x < local_size.x() * num_groups.x(); ++x)
{
const GLuint index = z * local_size.x() * num_groups.x() * local_size.y() * num_groups.y() +
y * local_size.x() * num_groups.x() + x;
// check all 12 output arrays; linear offset index * 12 + i is unique per element and data[k] should equal vec4(k)
for (int i = 0; i < 12; ++i)
{
if (!IsEqual(data[index * 12 + i], vec4(static_cast<float>(index * 12 + i))))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Incorrect data at offset "
<< index * 12 + i << "." << tcu::TestLog::EndMessage;
return false;
}
}
}
}
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
memset(m_uniform_buffer, 0, sizeof(m_uniform_buffer));
m_dispatch_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
if (!RunIteration(uvec3(64, 1, 1), uvec3(8, 1, 1), false))
return ERROR;
if (!RunIteration(uvec3(2, 2, 2), uvec3(2, 2, 2), true))
return ERROR;
if (!RunIteration(uvec3(2, 4, 2), uvec3(2, 4, 1), false))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_storage_buffer);
glDeleteBuffers(12, m_uniform_buffer);
glDeleteBuffers(1, &m_dispatch_buffer);
return NO_ERROR;
}
};
class BasicResourceTexture : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - Textures";
}
virtual std::string Purpose()
{
return "Verify that texture access works correctly in CS.";
}
virtual std::string Method()
{
return NL "1. Create CS which uses all sampler types (sampler1D, sampler2D, sampler3D, sampler2DRect," NL
" sampler1DArray, sampler2DArray, samplerBuffer, sampler2DMS, sampler2DMSArray)." NL
"2. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL
"3. Sample each texture and write sampled value to SSBO." NL "4. Verify SSBO content." NL
"5. Repeat for different texture and CS work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
GLuint m_texture[9];
GLuint m_texture_buffer;
GLuint m_dispatch_buffer;
std::string GenSource(const uvec3& local_size, const uvec3& num_groups)
{
const uvec3 global_size = local_size * num_groups;
std::stringstream ss;
ss << NL "layout(local_size_x = " << local_size.x() << ", local_size_y = " << local_size.y()
<< ", local_size_z = " << local_size.z() << ") in;" NL "const uvec3 kGlobalSize = uvec3(" << global_size.x()
<< ", " << global_size.y() << ", " << global_size.z()
<< ");" NL "uniform sampler1D g_sampler0;" NL "uniform sampler2D g_sampler1;" NL
"uniform sampler3D g_sampler2;" NL "uniform sampler2DRect g_sampler3;" NL
"uniform sampler1DArray g_sampler4;" NL "uniform sampler2DArray g_sampler5;" NL
"uniform samplerBuffer g_sampler6;" NL "uniform sampler2DMS g_sampler7;" NL
"uniform sampler2DMSArray g_sampler8;" NL "layout(std430) buffer OutputBuffer {" NL " vec4 data0["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data1["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data2["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data3["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data4["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data5["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data6["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data7["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " vec4 data8["
<< global_size.x() * global_size.y() * global_size.z()
<< "];" NL "} g_out_buffer;" NL "void main() {" NL " const uint global_index = gl_GlobalInvocationID.x +" NL
" gl_GlobalInvocationID.y * kGlobalSize.x +" NL
" gl_GlobalInvocationID.z * kGlobalSize.x * kGlobalSize.y;" NL
" g_out_buffer.data0[global_index] = texelFetch(g_sampler0, int(gl_GlobalInvocationID), 0);" NL
" g_out_buffer.data1[global_index] = texture(g_sampler1, vec2(gl_GlobalInvocationID) / "
"vec2(kGlobalSize));" NL " g_out_buffer.data2[global_index] = textureProj(g_sampler2, "
"vec4(vec3(gl_GlobalInvocationID) / vec3(kGlobalSize), 1.0));" NL
" g_out_buffer.data3[global_index] = textureProjOffset(g_sampler3, vec3(vec2(gl_GlobalInvocationID), "
"1.0), ivec2(0));" NL " g_out_buffer.data4[global_index] = textureLodOffset(g_sampler4, "
"vec2(gl_GlobalInvocationID.x / kGlobalSize.x, gl_GlobalInvocationID.y), 0.0, "
"0);" NL " g_out_buffer.data5[global_index] = texelFetchOffset(g_sampler5, "
"ivec3(gl_GlobalInvocationID), 0, ivec2(0));" NL
" g_out_buffer.data6[global_index] = texelFetch(g_sampler6, int(global_index));" NL
" g_out_buffer.data7[global_index] = texelFetch(g_sampler7, ivec2(gl_GlobalInvocationID), 1);" NL
" g_out_buffer.data8[global_index] = texelFetch(g_sampler8, ivec3(gl_GlobalInvocationID), 2);" NL "}";
return ss.str();
}
bool RunIteration(const uvec3& local_size, const uvec3& num_groups, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size, num_groups));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
glUseProgram(m_program);
for (int i = 0; i < 9; ++i)
{
char name[32];
sprintf(name, "g_sampler%d", i);
glUniform1i(glGetUniformLocation(m_program, name), i);
}
glUseProgram(0);
const GLuint kBufferSize =
local_size.x() * num_groups.x() * local_size.y() * num_groups.y() * local_size.z() * num_groups.z();
const GLint kWidth = static_cast<GLint>(local_size.x() * num_groups.x());
const GLint kHeight = static_cast<GLint>(local_size.y() * num_groups.y());
const GLint kDepth = static_cast<GLint>(local_size.z() * num_groups.z());
std::vector<vec4> buffer_data(kBufferSize * 9);
if (m_storage_buffer == 0)
glGenBuffers(1, &m_storage_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4) * kBufferSize * 9, &buffer_data[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
std::vector<vec4> texture_data(kBufferSize, vec4(123.0f));
if (m_texture[0] == 0)
glGenTextures(9, m_texture);
if (m_texture_buffer == 0)
glGenBuffers(1, &m_texture_buffer);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_1D, m_texture[0]);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, kWidth, 0, GL_RGBA, GL_FLOAT, &texture_data[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_texture[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, kWidth, kHeight, 0, GL_RGBA, GL_FLOAT, &texture_data[0]);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_3D, m_texture[2]);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA32F, kWidth, kHeight, kDepth, 0, GL_RGBA, GL_FLOAT, &texture_data[0]);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_RECTANGLE, m_texture[3]);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA32F, kWidth, kHeight, 0, GL_RGBA, GL_FLOAT, &texture_data[0]);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_1D_ARRAY, m_texture[4]);
glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_1D_ARRAY, 0, GL_RGBA32F, kWidth, kHeight, 0, GL_RGBA, GL_FLOAT, &texture_data[0]);
glActiveTexture(GL_TEXTURE5);
glBindTexture(GL_TEXTURE_2D_ARRAY, m_texture[5]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA32F, kWidth, kHeight, kDepth, 0, GL_RGBA, GL_FLOAT,
&texture_data[0]);
glActiveTexture(GL_TEXTURE6);
glBindBuffer(GL_TEXTURE_BUFFER, m_texture_buffer);
glBufferData(GL_TEXTURE_BUFFER, kBufferSize * sizeof(vec4), &texture_data[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glBindTexture(GL_TEXTURE_BUFFER, m_texture[6]);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, m_texture_buffer);
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_texture[7]);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA32F, kWidth, kHeight, GL_FALSE);
glActiveTexture(GL_TEXTURE8);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE_ARRAY, m_texture[8]);
glTexImage3DMultisample(GL_TEXTURE_2D_MULTISAMPLE_ARRAY, 4, GL_RGBA32F, kWidth, kHeight, kDepth, GL_FALSE);
// clear MS textures
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_texture[7], 0);
glClearBufferfv(GL_COLOR, 0, &vec4(123.0f)[0]);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, m_texture[8], 0);
glClearBufferfv(GL_COLOR, 0, &vec4(123.0f)[0]);
glDeleteFramebuffers(1, &fbo);
glUseProgram(m_program);
if (dispatch_indirect)
{
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), &num_groups[0], GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(num_groups.x(), num_groups.y(), num_groups.z());
}
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(vec4) * kBufferSize * 9, &buffer_data[0]);
for (GLuint index = 0; index < kBufferSize * 9; ++index)
{
if (!IsEqual(buffer_data[index], vec4(123.0f)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Incorrect data at index " << index << "." << tcu::TestLog::EndMessage;
return false;
}
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
memset(m_texture, 0, sizeof(m_texture));
m_texture_buffer = 0;
m_dispatch_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
if (!RunIteration(uvec3(4, 4, 4), uvec3(8, 1, 1), false))
return ERROR;
if (!RunIteration(uvec3(2, 4, 2), uvec3(2, 4, 1), true))
return ERROR;
if (!RunIteration(uvec3(2, 2, 2), uvec3(2, 2, 2), false))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glActiveTexture(GL_TEXTURE0);
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_storage_buffer);
glDeleteTextures(9, m_texture);
glDeleteBuffers(1, &m_texture_buffer);
glDeleteBuffers(1, &m_dispatch_buffer);
return NO_ERROR;
}
};
class BasicResourceImage : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - Images";
}
virtual std::string Purpose()
{
return "Verify that reading/writing GPU memory via image variables works as expected.";
}
virtual std::string Method()
{
return NL "1. Create CS which uses two image2D variables to read and write underlying GPU memory." NL
"2. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL
"3. Verify memory content." NL "4. Repeat for different texture and CS work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_draw_program;
GLuint m_texture[2];
GLuint m_dispatch_buffer;
GLuint m_vertex_array;
std::string GenSource(const uvec3& local_size, const uvec3& num_groups)
{
const uvec3 global_size = local_size * num_groups;
std::stringstream ss;
ss << NL "layout(local_size_x = " << local_size.x() << ", local_size_y = " << local_size.y()
<< ", local_size_z = " << local_size.z()
<< ") in;" NL "layout(rgba32f) coherent uniform image2D g_image1;" NL
"layout(rgba32f) uniform image2D g_image2;" NL "const uvec3 kGlobalSize = uvec3("
<< global_size.x() << ", " << global_size.y() << ", " << global_size.z()
<< ");" NL "void main() {" NL
" if (gl_GlobalInvocationID.x >= kGlobalSize.x || gl_GlobalInvocationID.y >= kGlobalSize.y) return;" NL
" vec4 color = vec4(gl_GlobalInvocationID.x + gl_GlobalInvocationID.y) / 255.0;" NL
" imageStore(g_image1, ivec2(gl_GlobalInvocationID), color);" NL
" vec4 c = imageLoad(g_image1, ivec2(gl_GlobalInvocationID));" NL
" imageStore(g_image2, ivec2(gl_GlobalInvocationID), c);" NL "}";
return ss.str();
}
bool RunIteration(const uvec3& local_size, const uvec3& num_groups, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size, num_groups));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
glUseProgram(m_program);
glUniform1i(glGetUniformLocation(m_program, "g_image1"), 0);
glUniform1i(glGetUniformLocation(m_program, "g_image2"), 1);
glUseProgram(0);
const GLint kWidth = static_cast<GLint>(local_size.x() * num_groups.x());
const GLint kHeight = static_cast<GLint>(local_size.y() * num_groups.y());
const GLint kDepth = static_cast<GLint>(local_size.z() * num_groups.z());
const GLuint kSize = kWidth * kHeight * kDepth;
std::vector<vec4> data(kSize);
if (m_texture[0] == 0)
glGenTextures(2, m_texture);
for (int i = 0; i < 2; ++i)
{
glBindTexture(GL_TEXTURE_2D, m_texture[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, kWidth, kHeight, 0, GL_RGBA, GL_FLOAT, &data[0]);
}
glBindTexture(GL_TEXTURE_2D, 0);
glBindImageTexture(0, m_texture[0], 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glBindImageTexture(1, m_texture[1], 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(m_program);
if (dispatch_indirect)
{
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), &num_groups[0], GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(num_groups.x(), num_groups.y(), num_groups.z());
}
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_texture[1]);
glUseProgram(m_draw_program);
glBindVertexArray(m_vertex_array);
glViewport(0, 0, kWidth, kHeight);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, 1);
std::vector<vec4> display(kWidth * kHeight);
glReadPixels(0, 0, kWidth, kHeight, GL_RGBA, GL_FLOAT, &display[0]);
for (int y = 0; y < kHeight; ++y)
{
for (int x = 0; x < kWidth; ++x)
{
if (y >= getWindowHeight() || x >= getWindowWidth())
{
continue;
}
const vec4 c = vec4(float(y + x) / 255.0f);
if (!ColorEqual(display[y * kWidth + x], c, g_color_eps))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Got " << display[y * kWidth + x].x() << ", "
<< display[y * kWidth + x].y() << ", " << display[y * kWidth + x].z() << ", "
<< display[y * kWidth + x].w() << ", expected " << c.x() << ", " << c.y() << ", " << c.z()
<< ", " << c.w() << " at " << x << ", " << y << tcu::TestLog::EndMessage;
return false;
}
}
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_draw_program = 0;
memset(m_texture, 0, sizeof(m_texture));
m_dispatch_buffer = 0;
// zero-initialize so Cleanup() is safe even if Run() exits before creating these objects
m_vertex_array = 0;
return NO_ERROR;
}
virtual long Run()
{
const char* const glsl_vs =
NL "out StageData {" NL " vec2 texcoord;" NL "} vs_out;" NL
"const vec2 g_quad[] = vec2[](vec2(-1, -1), vec2(1, -1), vec2(-1, 1), vec2(1, 1));" NL "void main() {" NL
" gl_Position = vec4(g_quad[gl_VertexID], 0, 1);" NL
" vs_out.texcoord = 0.5 + 0.5 * g_quad[gl_VertexID];" NL "}";
const char* const glsl_fs =
NL "in StageData {" NL " vec2 texcoord;" NL "} fs_in;" NL "layout(location = 0) out vec4 o_color;" NL
"uniform sampler2D g_image1;" NL "uniform sampler2D g_image2;" NL "void main() {" NL
" vec4 c1 = texture(g_image1, fs_in.texcoord);" NL " vec4 c2 = texture(g_image2, fs_in.texcoord);" NL
" if (c1 == c2) o_color = c1;" NL " else o_color = vec4(1, 0, 0, 1);" NL "}";
m_draw_program = CreateProgram(glsl_vs, glsl_fs);
glLinkProgram(m_draw_program);
if (!CheckProgram(m_draw_program))
return ERROR;
glUseProgram(m_draw_program);
glUniform1i(glGetUniformLocation(m_draw_program, "g_image1"), 0);
glUniform1i(glGetUniformLocation(m_draw_program, "g_image2"), 1);
glUseProgram(0);
glGenVertexArrays(1, &m_vertex_array);
if (!pixelFormat.alphaBits)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Test requires default framebuffer alpha bits" << tcu::TestLog::EndMessage;
return NO_ERROR;
}
if (!RunIteration(uvec3(8, 16, 1), uvec3(8, 4, 1), true))
return ERROR;
if (!RunIteration(uvec3(4, 32, 1), uvec3(16, 2, 1), false))
return ERROR;
if (!RunIteration(uvec3(16, 4, 1), uvec3(4, 16, 1), false))
return ERROR;
if (!RunIteration(uvec3(8, 8, 1), uvec3(8, 8, 1), true))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteProgram(m_draw_program);
glDeleteVertexArrays(1, &m_vertex_array);
glDeleteTextures(2, m_texture);
glDeleteBuffers(1, &m_dispatch_buffer);
glViewport(0, 0, getWindowWidth(), getWindowHeight());
return NO_ERROR;
}
};
class BasicResourceAtomicCounter : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - Atomic Counters";
}
virtual std::string Purpose()
{
return NL
"1. Verify that Atomic Counters work as expected in CS." NL
"2. Verify that built-in functions: atomicCounterIncrement and atomicCounterDecrement work correctly." NL
"3. Verify that GL_ATOMIC_COUNTER_BUFFER_REFERENCED_BY_COMPUTE_SHADER is accepted by" NL
" GetActiveAtomicCounterBufferiv command.";
}
virtual std::string Method()
{
return NL
"1. Create CS which uses two atomic_uint variables." NL
"2. In CS write values returned by atomicCounterIncrement and atomicCounterDecrement functions to SSBO." NL
"3. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL "4. Verify SSBO content." NL
"5. Repeat for different buffer and CS work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
GLuint m_counter_buffer[2];
GLuint m_dispatch_buffer;
std::string GenSource(const uvec3& local_size, const uvec3& num_groups)
{
const uvec3 global_size = local_size * num_groups;
std::stringstream ss;
ss << NL "layout(local_size_x = " << local_size.x() << ", local_size_y = " << local_size.y()
<< ", local_size_z = " << local_size.z()
<< ") in;" NL "layout(std430, binding = 0) buffer Output {" NL " uint inc_data["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uint dec_data["
<< global_size.x() * global_size.y() * global_size.z()
<< "];" NL "};" NL "layout(binding = 0, offset = 0) uniform atomic_uint g_inc_counter;" NL
"layout(binding = 1, offset = 0) uniform atomic_uint g_dec_counter;" NL "void main() {" NL
" const uint index = atomicCounterIncrement(g_inc_counter);" NL " inc_data[index] = index;" NL
" dec_data[index] = atomicCounterDecrement(g_dec_counter);" NL "}";
return ss.str();
}
bool RunIteration(const uvec3& local_size, const uvec3& num_groups, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size, num_groups));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
GLint p[2] = { 0 };
glGetActiveAtomicCounterBufferiv(m_program, 0, GL_ATOMIC_COUNTER_BUFFER_REFERENCED_BY_COMPUTE_SHADER, &p[0]);
glGetActiveAtomicCounterBufferiv(m_program, 1, GL_ATOMIC_COUNTER_BUFFER_REFERENCED_BY_COMPUTE_SHADER, &p[1]);
if (p[0] == GL_FALSE || p[1] == GL_FALSE)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "ATOMIC_COUNTER_BUFFER_REFERENCED_BY_COMPUTE_SHADER should be TRUE."
<< tcu::TestLog::EndMessage;
return false;
}
const GLint kWidth = static_cast<GLint>(local_size.x() * num_groups.x());
const GLint kHeight = static_cast<GLint>(local_size.y() * num_groups.y());
const GLint kDepth = static_cast<GLint>(local_size.z() * num_groups.z());
const GLuint kSize = kWidth * kHeight * kDepth;
if (m_storage_buffer == 0)
glGenBuffers(1, &m_storage_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLuint) * kSize * 2, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
if (m_counter_buffer[0] == 0)
glGenBuffers(2, m_counter_buffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, m_counter_buffer[0]);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_STREAM_DRAW);
*static_cast<GLuint*>(glMapBuffer(GL_ATOMIC_COUNTER_BUFFER, GL_WRITE_ONLY)) = 0;
glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 1, m_counter_buffer[1]);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_STREAM_DRAW);
*static_cast<GLuint*>(glMapBuffer(GL_ATOMIC_COUNTER_BUFFER, GL_WRITE_ONLY)) = kSize;
glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, 0);
glUseProgram(m_program);
if (dispatch_indirect)
{
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), &num_groups[0], GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(num_groups.x(), num_groups.y(), num_groups.z());
}
std::vector<GLuint> data(kSize);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLuint) * kSize, &data[0]);
for (GLuint i = 0; i < kSize; ++i)
{
if (data[i] != i)
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Value at index " << i << " is "
<< data[i] << " should be " << i << "." << tcu::TestLog::EndMessage;
return false;
}
}
GLuint value;
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, m_counter_buffer[0]);
glGetBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &value);
if (value != kSize)
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Final atomic counter value (buffer 0) is "
<< value << " should be " << kSize << "." << tcu::TestLog::EndMessage;
return false;
}
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, m_counter_buffer[1]);
glGetBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &value);
if (value != 0)
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Final atomic counter value (buffer 1) is "
<< value << " should be 0." << tcu::TestLog::EndMessage;
return false;
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
memset(m_counter_buffer, 0, sizeof(m_counter_buffer));
m_dispatch_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
if (!RunIteration(uvec3(4, 3, 2), uvec3(2, 3, 4), false))
return ERROR;
if (!RunIteration(uvec3(1, 1, 1), uvec3(1, 1, 1), true))
return ERROR;
if (!RunIteration(uvec3(1, 6, 1), uvec3(1, 1, 8), false))
return ERROR;
if (!RunIteration(uvec3(4, 1, 2), uvec3(10, 3, 4), true))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(2, m_counter_buffer);
glDeleteBuffers(1, &m_dispatch_buffer);
glDeleteBuffers(1, &m_storage_buffer);
return NO_ERROR;
}
};
class BasicResourceSubroutine : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - Subroutines";
}
virtual std::string Purpose()
{
return NL "1. Verify that subroutines work as expected in CS." NL
"2. Verify that subroutines array can be indexed with gl_WorkGroupID built-in variable." NL
                  "3. Verify that the atomicCounterIncrement, imageLoad and texelFetch functions" NL
                  " work as expected when called in a CS from a subroutine.";
}
virtual std::string Method()
{
return NL "1. Create CS which uses array of subroutines." NL
"2. In CS index subroutine array with gl_WorkGroupID built-in variable." NL
"3. In each subroutine load data from SSBO0 and write it to SSBO1." NL
                  "4. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL
                  "5. Verify SSBO1 content." NL "6. Repeat for different buffer and CS work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_atomic_counter_buffer;
GLuint m_storage_buffer[2];
GLuint m_buffer[2];
GLuint m_texture_buffer[2];
virtual long Setup()
{
m_program = 0;
m_atomic_counter_buffer = 0;
memset(m_storage_buffer, 0, sizeof(m_storage_buffer));
memset(m_buffer, 0, sizeof(m_buffer));
memset(m_texture_buffer, 0, sizeof(m_texture_buffer));
return NO_ERROR;
}
virtual long Run()
{
const char* const glsl_cs =
NL "layout(local_size_x = 16) in;" NL "layout(binding = 1, std430) buffer Input {" NL " uvec4 data[16];" NL
"} g_input;" NL "layout(std430, binding = 0) buffer Output {" NL " uvec4 g_output[64];" NL "};" NL
"subroutine void ComputeType();" NL "subroutine uniform ComputeType Compute[4];" NL
"layout(binding = 0, offset = 0) uniform atomic_uint g_atomic_counter;" NL
"layout(rgba32ui) readonly uniform uimageBuffer g_image_buffer;" NL
"uniform usamplerBuffer g_sampler_buffer;" NL "subroutine(ComputeType)" NL "void Compute0() {" NL
" const uint index = atomicCounterIncrement(g_atomic_counter);" NL
" g_output[index] = uvec4(index);" NL "}" NL "subroutine(ComputeType)" NL "void Compute1() {" NL
" g_output[gl_GlobalInvocationID.x] = g_input.data[gl_LocalInvocationIndex];" NL "}" NL
"subroutine(ComputeType)" NL "void Compute2() {" NL
" g_output[gl_GlobalInvocationID.x] = imageLoad(g_image_buffer, int(gl_LocalInvocationIndex));" NL
"}" NL "subroutine(ComputeType)" NL "void Compute3() {" NL
" g_output[gl_GlobalInvocationID.x] = texelFetch(g_sampler_buffer, int(gl_LocalInvocationIndex));" NL
"}" NL "void main() {" NL " Compute[gl_WorkGroupID.x]();" NL "}";
m_program = CreateComputeProgram(glsl_cs);
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return ERROR;
glGenBuffers(2, m_storage_buffer);
/* output buffer */
{
std::vector<uvec4> data(64, uvec4(0xffff));
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer[0]);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(uvec4) * 64, &data[0], GL_DYNAMIC_DRAW);
}
/* input buffer */
{
std::vector<uvec4> data(16);
for (GLuint i = 0; i < 16; ++i)
data[i] = uvec4(i + 16);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer[1]);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(uvec4) * 16, &data[0], GL_DYNAMIC_DRAW);
}
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
glGenBuffers(1, &m_atomic_counter_buffer);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, m_atomic_counter_buffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_STREAM_DRAW);
*static_cast<GLuint*>(glMapBuffer(GL_ATOMIC_COUNTER_BUFFER, GL_WRITE_ONLY)) = 0;
glUnmapBuffer(GL_ATOMIC_COUNTER_BUFFER);
glGenBuffers(2, m_buffer);
/* image buffer */
{
std::vector<uvec4> data(16);
for (GLuint i = 0; i < 16; ++i)
data[i] = uvec4(i + 32);
glBindBuffer(GL_TEXTURE_BUFFER, m_buffer[0]);
glBufferData(GL_TEXTURE_BUFFER, sizeof(uvec4) * 16, &data[0], GL_STATIC_DRAW);
}
/* texture buffer */
{
std::vector<uvec4> data(16);
for (GLuint i = 0; i < 16; ++i)
data[i] = uvec4(i + 48);
glBindBuffer(GL_TEXTURE_BUFFER, m_buffer[1]);
glBufferData(GL_TEXTURE_BUFFER, sizeof(uvec4) * 16, &data[0], GL_STATIC_DRAW);
}
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(2, m_texture_buffer);
glBindTexture(GL_TEXTURE_BUFFER, m_texture_buffer[0]);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32UI, m_buffer[0]);
glBindTexture(GL_TEXTURE_BUFFER, m_texture_buffer[1]);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32UI, m_buffer[1]);
glBindTexture(GL_TEXTURE_BUFFER, 0);
const GLuint index_compute0 = glGetSubroutineIndex(m_program, GL_COMPUTE_SHADER, "Compute0");
const GLuint index_compute1 = glGetSubroutineIndex(m_program, GL_COMPUTE_SHADER, "Compute1");
const GLuint index_compute2 = glGetSubroutineIndex(m_program, GL_COMPUTE_SHADER, "Compute2");
const GLuint index_compute3 = glGetSubroutineIndex(m_program, GL_COMPUTE_SHADER, "Compute3");
const GLint loc_compute0 = glGetSubroutineUniformLocation(m_program, GL_COMPUTE_SHADER, "Compute[0]");
const GLint loc_compute1 = glGetSubroutineUniformLocation(m_program, GL_COMPUTE_SHADER, "Compute[1]");
const GLint loc_compute2 = glGetSubroutineUniformLocation(m_program, GL_COMPUTE_SHADER, "Compute[2]");
const GLint loc_compute3 = glGetSubroutineUniformLocation(m_program, GL_COMPUTE_SHADER, "Compute[3]");
// bind resources
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer[0]);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, m_storage_buffer[1]);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, m_atomic_counter_buffer);
glBindImageTexture(0, m_texture_buffer[0], 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32UI);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, m_texture_buffer[1]);
glUseProgram(m_program);
// setup subroutines
GLuint indices[4];
indices[loc_compute0] = index_compute0;
indices[loc_compute1] = index_compute1;
indices[loc_compute2] = index_compute2;
indices[loc_compute3] = index_compute3;
glUniformSubroutinesuiv(GL_COMPUTE_SHADER, 4, indices);
glDispatchCompute(4, 1, 1);
std::vector<uvec4> data(64);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer[0]);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(uvec4) * 64, &data[0]);
for (GLuint i = 0; i < 64; ++i)
{
if (!IsEqual(data[i], uvec4(i)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Invalid value at index " << i << "." << tcu::TestLog::EndMessage;
return ERROR;
}
}
GLuint value;
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, m_atomic_counter_buffer);
glGetBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &value);
if (value != 16)
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Final atomic counter value is " << value
<< " should be 16." << tcu::TestLog::EndMessage;
return ERROR;
}
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_atomic_counter_buffer);
glDeleteBuffers(2, m_storage_buffer);
glDeleteBuffers(2, m_buffer);
glDeleteTextures(2, m_texture_buffer);
return NO_ERROR;
}
};
class BasicResourceUniform : public ComputeShaderBase
{
virtual std::string Title()
{
return "Compute Shader resources - Uniforms";
}
virtual std::string Purpose()
{
return NL "1. Verify that all types of uniform variables work as expected in CS." NL
"2. Verify that uniform variables can be updated with Uniform* and ProgramUniform* commands." NL
"3. Verify that re-linking CS program works as expected.";
}
virtual std::string Method()
{
return NL "1. Create CS which uses all (single precision and integer) types of uniform variables." NL
"2. Update uniform variables with ProgramUniform* commands." NL
"3. Verify that uniform variables were updated correctly." NL "4. Re-link CS program." NL
"5. Update uniform variables with Uniform* commands." NL
"6. Verify that uniform variables were updated correctly.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
const char* const glsl_cs = NL
"layout(local_size_x = 1) in;" NL "buffer Result {" NL " int g_result;" NL "};" NL "uniform float g_0;" NL
"uniform vec2 g_1;" NL "uniform vec3 g_2;" NL "uniform vec4 g_3;" NL "uniform mat2 g_4;" NL
"uniform mat2x3 g_5;" NL "uniform mat2x4 g_6;" NL "uniform mat3x2 g_7;" NL "uniform mat3 g_8;" NL
"uniform mat3x4 g_9;" NL "uniform mat4x2 g_10;" NL "uniform mat4x3 g_11;" NL "uniform mat4 g_12;" NL
"uniform int g_13;" NL "uniform ivec2 g_14;" NL "uniform ivec3 g_15;" NL "uniform ivec4 g_16;" NL
"uniform uint g_17;" NL "uniform uvec2 g_18;" NL "uniform uvec3 g_19;" NL "uniform uvec4 g_20;" NL NL
"void main() {" NL " g_result = 1;" NL NL " if (g_0 != 1.0) g_result = 0;" NL
" if (g_1 != vec2(2.0, 3.0)) g_result = 0;" NL " if (g_2 != vec3(4.0, 5.0, 6.0)) g_result = 0;" NL
" if (g_3 != vec4(7.0, 8.0, 9.0, 10.0)) g_result = 0;" NL NL
" if (g_4 != mat2(11.0, 12.0, 13.0, 14.0)) g_result = 0;" NL
" if (g_5 != mat2x3(15.0, 16.0, 17.0, 18.0, 19.0, 20.0)) g_result = 0;" NL
" if (g_6 != mat2x4(21.0, 22.0, 23.0, 24.0, 25.0, 26.0, 27.0, 28.0)) g_result = 0;" NL NL
" if (g_7 != mat3x2(29.0, 30.0, 31.0, 32.0, 33.0, 34.0)) g_result = 0;" NL
" if (g_8 != mat3(35.0, 36.0, 37.0, 38.0, 39.0, 40.0, 41.0, 42.0, 43.0)) g_result = 0;" NL
" if (g_9 != mat3x4(44.0, 45.0, 46.0, 47.0, 48.0, 49.0, 50.0, 51.0, 52.0, 53.0, 54.0, 55.0)) g_result = "
"0;" NL NL " if (g_10 != mat4x2(56.0, 57.0, 58.0, 59.0, 60.0, 61.0, 62.0, 63.0)) g_result = 0;" NL
            " if (g_11 != mat4x3(63.0, 64.0, 65.0, 66.0, 67.0, 68.0, 69.0, 70.0, 71.0, 27.0, 73.0, 74.0)) g_result = "
            "0;" NL " if (g_12 != mat4(75.0, 76.0, 77.0, 78.0, 79.0, 80.0, 81.0, 82.0, 83.0, 84.0, 85.0, 86.0, 87.0, "
"88.0, 89.0, 90.0)) g_result = 0;" NL NL " if (g_13 != 91) g_result = 0;" NL
" if (g_14 != ivec2(92, 93)) g_result = 0;" NL " if (g_15 != ivec3(94, 95, 96)) g_result = 0;" NL
" if (g_16 != ivec4(97, 98, 99, 100)) g_result = 0;" NL NL " if (g_17 != 101u) g_result = 0;" NL
" if (g_18 != uvec2(102u, 103u)) g_result = 0;" NL
" if (g_19 != uvec3(104u, 105u, 106u)) g_result = 0;" NL
" if (g_20 != uvec4(107u, 108u, 109u, 110u)) g_result = 0;" NL "}";
m_program = CreateComputeProgram(glsl_cs);
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return ERROR;
glGenBuffers(1, &m_storage_buffer);
/* create buffer */
{
const int data = 123;
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), &data, GL_STATIC_DRAW);
}
glProgramUniform1f(m_program, glGetUniformLocation(m_program, "g_0"), 1.0f);
glProgramUniform2f(m_program, glGetUniformLocation(m_program, "g_1"), 2.0f, 3.0f);
glProgramUniform3f(m_program, glGetUniformLocation(m_program, "g_2"), 4.0f, 5.0f, 6.0f);
glProgramUniform4f(m_program, glGetUniformLocation(m_program, "g_3"), 7.0f, 8.0f, 9.0f, 10.0f);
/* mat2 */
{
const GLfloat value[4] = { 11.0f, 12.0f, 13.0f, 14.0f };
glProgramUniformMatrix2fv(m_program, glGetUniformLocation(m_program, "g_4"), 1, GL_FALSE, value);
}
/* mat2x3 */
{
const GLfloat value[6] = { 15.0f, 16.0f, 17.0f, 18.0f, 19.0f, 20.0f };
glProgramUniformMatrix2x3fv(m_program, glGetUniformLocation(m_program, "g_5"), 1, GL_FALSE, value);
}
/* mat2x4 */
{
const GLfloat value[8] = { 21.0f, 22.0f, 23.0f, 24.0f, 25.0f, 26.0f, 27.0f, 28.0f };
glProgramUniformMatrix2x4fv(m_program, glGetUniformLocation(m_program, "g_6"), 1, GL_FALSE, value);
}
/* mat3x2 */
{
const GLfloat value[6] = { 29.0f, 30.0f, 31.0f, 32.0f, 33.0f, 34.0f };
glProgramUniformMatrix3x2fv(m_program, glGetUniformLocation(m_program, "g_7"), 1, GL_FALSE, value);
}
/* mat3 */
{
const GLfloat value[9] = { 35.0f, 36.0f, 37.0f, 38.0f, 39.0f, 40.0f, 41.0f, 42.0f, 43.0f };
glProgramUniformMatrix3fv(m_program, glGetUniformLocation(m_program, "g_8"), 1, GL_FALSE, value);
}
/* mat3x4 */
{
const GLfloat value[12] = { 44.0f, 45.0f, 46.0f, 47.0f, 48.0f, 49.0f,
50.0f, 51.0f, 52.0f, 53.0f, 54.0f, 55.0f };
glProgramUniformMatrix3x4fv(m_program, glGetUniformLocation(m_program, "g_9"), 1, GL_FALSE, value);
}
/* mat4x2 */
{
const GLfloat value[8] = { 56.0f, 57.0f, 58.0f, 59.0f, 60.0f, 61.0f, 62.0f, 63.0f };
glProgramUniformMatrix4x2fv(m_program, glGetUniformLocation(m_program, "g_10"), 1, GL_FALSE, value);
}
/* mat4x3 */
{
            const GLfloat value[12] = {
                63.0f, 64.0f, 65.0f, 66.0f, 67.0f, 68.0f, 69.0f, 70.0f, 71.0f, 27.0f, 73.0f, 74.0f
            };
glProgramUniformMatrix4x3fv(m_program, glGetUniformLocation(m_program, "g_11"), 1, GL_FALSE, value);
}
/* mat4 */
{
const GLfloat value[16] = { 75.0f, 76.0f, 77.0f, 78.0f, 79.0f, 80.0f, 81.0f, 82.0f,
83.0f, 84.0f, 85.0f, 86.0f, 87.0f, 88.0f, 89.0f, 90.0f };
glProgramUniformMatrix4fv(m_program, glGetUniformLocation(m_program, "g_12"), 1, GL_FALSE, value);
}
glProgramUniform1i(m_program, glGetUniformLocation(m_program, "g_13"), 91);
glProgramUniform2i(m_program, glGetUniformLocation(m_program, "g_14"), 92, 93);
glProgramUniform3i(m_program, glGetUniformLocation(m_program, "g_15"), 94, 95, 96);
glProgramUniform4i(m_program, glGetUniformLocation(m_program, "g_16"), 97, 98, 99, 100);
glProgramUniform1ui(m_program, glGetUniformLocation(m_program, "g_17"), 101);
glProgramUniform2ui(m_program, glGetUniformLocation(m_program, "g_18"), 102, 103);
glProgramUniform3ui(m_program, glGetUniformLocation(m_program, "g_19"), 104, 105, 106);
glProgramUniform4ui(m_program, glGetUniformLocation(m_program, "g_20"), 107, 108, 109, 110);
glUseProgram(m_program);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
{
int data;
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(data), &data);
if (data != 1)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Data is " << data << " should be 1." << tcu::TestLog::EndMessage;
return ERROR;
}
}
// re-link program (all uniforms will be set to zero)
glLinkProgram(m_program);
{
const int data = 123;
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(data), &data);
}
glUniform1f(glGetUniformLocation(m_program, "g_0"), 1.0f);
glUniform2f(glGetUniformLocation(m_program, "g_1"), 2.0f, 3.0f);
glUniform3f(glGetUniformLocation(m_program, "g_2"), 4.0f, 5.0f, 6.0f);
glUniform4f(glGetUniformLocation(m_program, "g_3"), 7.0f, 8.0f, 9.0f, 10.0f);
/* mat2 */
{
const GLfloat value[4] = { 11.0f, 12.0f, 13.0f, 14.0f };
glUniformMatrix2fv(glGetUniformLocation(m_program, "g_4"), 1, GL_FALSE, value);
}
/* mat2x3 */
{
const GLfloat value[6] = { 15.0f, 16.0f, 17.0f, 18.0f, 19.0f, 20.0f };
glUniformMatrix2x3fv(glGetUniformLocation(m_program, "g_5"), 1, GL_FALSE, value);
}
/* mat2x4 */
{
const GLfloat value[8] = { 21.0f, 22.0f, 23.0f, 24.0f, 25.0f, 26.0f, 27.0f, 28.0f };
glUniformMatrix2x4fv(glGetUniformLocation(m_program, "g_6"), 1, GL_FALSE, value);
}
/* mat3x2 */
{
const GLfloat value[6] = { 29.0f, 30.0f, 31.0f, 32.0f, 33.0f, 34.0f };
glUniformMatrix3x2fv(glGetUniformLocation(m_program, "g_7"), 1, GL_FALSE, value);
}
/* mat3 */
{
const GLfloat value[9] = { 35.0f, 36.0f, 37.0f, 38.0f, 39.0f, 40.0f, 41.0f, 42.0f, 43.0f };
glUniformMatrix3fv(glGetUniformLocation(m_program, "g_8"), 1, GL_FALSE, value);
}
/* mat3x4 */
{
const GLfloat value[12] = { 44.0f, 45.0f, 46.0f, 47.0f, 48.0f, 49.0f,
50.0f, 51.0f, 52.0f, 53.0f, 54.0f, 55.0f };
glUniformMatrix3x4fv(glGetUniformLocation(m_program, "g_9"), 1, GL_FALSE, value);
}
/* mat4x2 */
{
const GLfloat value[8] = { 56.0f, 57.0f, 58.0f, 59.0f, 60.0f, 61.0f, 62.0f, 63.0f };
glUniformMatrix4x2fv(glGetUniformLocation(m_program, "g_10"), 1, GL_FALSE, value);
}
/* mat4x3 */
{
            const GLfloat value[12] = {
                63.0f, 64.0f, 65.0f, 66.0f, 67.0f, 68.0f, 69.0f, 70.0f, 71.0f, 27.0f, 73.0f, 74.0f
            };
glUniformMatrix4x3fv(glGetUniformLocation(m_program, "g_11"), 1, GL_FALSE, value);
}
/* mat4 */
{
const GLfloat value[16] = { 75.0f, 76.0f, 77.0f, 78.0f, 79.0f, 80.0f, 81.0f, 82.0f,
83.0f, 84.0f, 85.0f, 86.0f, 87.0f, 88.0f, 89.0f, 90.0f };
glUniformMatrix4fv(glGetUniformLocation(m_program, "g_12"), 1, GL_FALSE, value);
}
glUniform1i(glGetUniformLocation(m_program, "g_13"), 91);
glUniform2i(glGetUniformLocation(m_program, "g_14"), 92, 93);
glUniform3i(glGetUniformLocation(m_program, "g_15"), 94, 95, 96);
glUniform4i(glGetUniformLocation(m_program, "g_16"), 97, 98, 99, 100);
glUniform1ui(glGetUniformLocation(m_program, "g_17"), 101);
glUniform2ui(glGetUniformLocation(m_program, "g_18"), 102, 103);
glUniform3ui(glGetUniformLocation(m_program, "g_19"), 104, 105, 106);
glUniform4ui(glGetUniformLocation(m_program, "g_20"), 107, 108, 109, 110);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
/* validate */
{
int data;
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(data), &data);
if (data != 1)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Data is " << data << " should be 1." << tcu::TestLog::EndMessage;
return ERROR;
}
}
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_storage_buffer);
return NO_ERROR;
}
};
class BasicBuiltinVariables : public ComputeShaderBase
{
virtual std::string Title()
{
return "CS built-in variables";
}
virtual std::string Purpose()
{
        return NL "Verify that all (gl_NumWorkGroups, gl_WorkGroupSize, gl_WorkGroupID," NL
                  "gl_LocalInvocationID, gl_GlobalInvocationID, gl_LocalInvocationIndex)" NL
                  "CS built-in variables have correct values.";
}
virtual std::string Method()
{
return NL "1. Create CS which writes all built-in variables to SSBO." NL
"2. Dispatch CS with DispatchCompute and DispatchComputeIndirect commands." NL
"3. Verify SSBO content." NL "4. Repeat for several different local and global work sizes.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
GLuint m_program;
GLuint m_storage_buffer;
GLuint m_dispatch_buffer;
std::string GenSource(const uvec3& local_size, const uvec3& num_groups)
{
const uvec3 global_size = local_size * num_groups;
std::stringstream ss;
ss << NL "layout(local_size_x = " << local_size.x() << ", local_size_y = " << local_size.y()
<< ", local_size_z = " << local_size.z() << ") in;" NL "const uvec3 kGlobalSize = uvec3(" << global_size.x()
<< ", " << global_size.y() << ", " << global_size.z()
<< ");" NL "layout(std430) buffer OutputBuffer {" NL " uvec4 num_work_groups["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uvec4 work_group_size["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uvec4 work_group_id["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uvec4 local_invocation_id["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uvec4 global_invocation_id["
<< global_size.x() * global_size.y() * global_size.z() << "];" NL " uvec4 local_invocation_index["
<< global_size.x() * global_size.y() * global_size.z()
<< "];" NL "} g_out_buffer;" NL "void main() {" NL
" if ((gl_WorkGroupSize * gl_WorkGroupID + gl_LocalInvocationID) != gl_GlobalInvocationID) return;" NL
" const uint global_index = gl_GlobalInvocationID.x +" NL
" gl_GlobalInvocationID.y * kGlobalSize.x +" NL
" gl_GlobalInvocationID.z * kGlobalSize.x * kGlobalSize.y;" NL
" g_out_buffer.num_work_groups[global_index] = uvec4(gl_NumWorkGroups, 0);" NL
" g_out_buffer.work_group_size[global_index] = uvec4(gl_WorkGroupSize, 0);" NL
" g_out_buffer.work_group_id[global_index] = uvec4(gl_WorkGroupID, 0);" NL
" g_out_buffer.local_invocation_id[global_index] = uvec4(gl_LocalInvocationID, 0);" NL
" g_out_buffer.global_invocation_id[global_index] = uvec4(gl_GlobalInvocationID, 0);" NL
" g_out_buffer.local_invocation_index[global_index] = uvec4(gl_LocalInvocationIndex);" NL "}";
return ss.str();
}
bool RunIteration(const uvec3& local_size, const uvec3& num_groups, bool dispatch_indirect)
{
if (m_program != 0)
glDeleteProgram(m_program);
m_program = CreateComputeProgram(GenSource(local_size, num_groups));
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return false;
const GLuint kBufferSize =
local_size.x() * num_groups.x() * local_size.y() * num_groups.y() * local_size.z() * num_groups.z();
std::vector<uvec4> data(kBufferSize * 6);
if (m_storage_buffer == 0)
glGenBuffers(1, &m_storage_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_storage_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(uvec4) * kBufferSize * 6, &data[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
glUseProgram(m_program);
if (dispatch_indirect)
{
if (m_dispatch_buffer == 0)
glGenBuffers(1, &m_dispatch_buffer);
glBindBuffer(GL_DISPATCH_INDIRECT_BUFFER, m_dispatch_buffer);
glBufferData(GL_DISPATCH_INDIRECT_BUFFER, sizeof(num_groups), &num_groups[0], GL_STATIC_DRAW);
glDispatchComputeIndirect(0);
}
else
{
glDispatchCompute(num_groups.x(), num_groups.y(), num_groups.z());
}
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_storage_buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(uvec4) * kBufferSize * 6, &data[0]);
// gl_NumWorkGroups
for (GLuint index = 0; index < kBufferSize; ++index)
{
if (!IsEqual(data[index], uvec4(num_groups.x(), num_groups.y(), num_groups.z(), 0)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "gl_NumWorkGroups: Invalid data at index " << index << "."
<< tcu::TestLog::EndMessage;
return false;
}
}
// gl_WorkGroupSize
for (GLuint index = kBufferSize; index < 2 * kBufferSize; ++index)
{
if (!IsEqual(data[index], uvec4(local_size.x(), local_size.y(), local_size.z(), 0)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "gl_WorkGroupSize: Invalid data at index " << index << "."
<< tcu::TestLog::EndMessage;
return false;
}
}
// gl_WorkGroupID
for (GLuint index = 2 * kBufferSize; index < 3 * kBufferSize; ++index)
{
uvec3 expected = IndexTo3DCoord(index - 2 * kBufferSize, local_size.x() * num_groups.x(),
local_size.y() * num_groups.y());
expected.x() /= local_size.x();
expected.y() /= local_size.y();
expected.z() /= local_size.z();
if (!IsEqual(data[index], uvec4(expected.x(), expected.y(), expected.z(), 0)))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "gl_WorkGroupID: Invalid data at index "
<< index << "." << tcu::TestLog::EndMessage;
return false;
}
}
// gl_LocalInvocationID
for (GLuint index = 3 * kBufferSize; index < 4 * kBufferSize; ++index)
{
uvec3 expected = IndexTo3DCoord(index - 3 * kBufferSize, local_size.x() * num_groups.x(),
local_size.y() * num_groups.y());
expected.x() %= local_size.x();
expected.y() %= local_size.y();
expected.z() %= local_size.z();
if (!IsEqual(data[index], uvec4(expected.x(), expected.y(), expected.z(), 0)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "gl_LocalInvocationID: Invalid data at index " << index << "."
<< tcu::TestLog::EndMessage;
return false;
}
}
// gl_GlobalInvocationID
for (GLuint index = 4 * kBufferSize; index < 5 * kBufferSize; ++index)
{
uvec3 expected = IndexTo3DCoord(index - 4 * kBufferSize, local_size.x() * num_groups.x(),
local_size.y() * num_groups.y());
if (!IsEqual(data[index], uvec4(expected.x(), expected.y(), expected.z(), 0)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "gl_GlobalInvocationID: Invalid data at index " << index << "."
<< tcu::TestLog::EndMessage;
return false;
}
}
// gl_LocalInvocationIndex
for (GLuint index = 5 * kBufferSize; index < 6 * kBufferSize; ++index)
{
uvec3 coord = IndexTo3DCoord(index - 5 * kBufferSize, local_size.x() * num_groups.x(),
local_size.y() * num_groups.y());
const GLuint expected = (coord.x() % local_size.x()) + (coord.y() % local_size.y()) * local_size.x() +
(coord.z() % local_size.z()) * local_size.x() * local_size.y();
if (!IsEqual(data[index], uvec4(expected)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "gl_LocalInvocationIndex: Invalid data at index " << index << "."
<< tcu::TestLog::EndMessage;
return false;
}
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_storage_buffer = 0;
m_dispatch_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
if (!RunIteration(uvec3(64, 1, 1), uvec3(8, 1, 1), false))
return ERROR;
if (!RunIteration(uvec3(1, 1, 64), uvec3(1, 5, 2), true))
return ERROR;
if (!RunIteration(uvec3(1, 1, 4), uvec3(2, 2, 2), false))
return ERROR;
if (!RunIteration(uvec3(3, 2, 1), uvec3(1, 2, 3), true))
return ERROR;
if (!RunIteration(uvec3(2, 4, 2), uvec3(2, 4, 1), false))
return ERROR;
if (!RunIteration(uvec3(2, 4, 7), uvec3(2, 1, 4), true))
return ERROR;
return NO_ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_storage_buffer);
glDeleteBuffers(1, &m_dispatch_buffer);
return NO_ERROR;
}
};
class BasicMax : public ComputeShaderBase
{
virtual std::string Title()
{
        return "CS max values";
}
virtual std::string Purpose()
{
return NL "Verify (on the API and GLSL side) that all GL_MAX_COMPUTE_* values are not less than" NL
"required by the OpenGL specification.";
}
virtual std::string Method()
{
return NL "1. Use all API commands to query all GL_MAX_COMPUTE_* values. Verify that they are correct." NL
"2. Verify all gl_MaxCompute* constants in the GLSL.";
}
virtual std::string PassCriteria()
{
        return "Everything works as expected.";
}
GLuint m_program;
GLuint m_buffer;
bool CheckIndexed(GLenum target, const GLint* min_values)
{
GLint i;
GLint64 i64;
GLfloat f;
GLdouble d;
GLboolean b;
for (GLuint c = 0; c < 3; c++)
{
glGetIntegeri_v(target, c, &i);
if (i < min_values[c])
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << i << " should be at least "
<< min_values[c] << "." << tcu::TestLog::EndMessage;
return false;
}
}
for (GLuint c = 0; c < 3; c++)
{
glGetInteger64i_v(target, c, &i64);
if (i64 < static_cast<GLint64>(min_values[c]))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Is " << i64 << " should be at least "
<< static_cast<GLint64>(min_values[c]) << "." << tcu::TestLog::EndMessage;
return false;
}
}
for (GLuint c = 0; c < 3; c++)
{
glGetFloati_v(target, c, &f);
if (f < static_cast<GLfloat>(min_values[c]))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Is " << f << " should be at least "
<< static_cast<GLfloat>(min_values[c]) << "." << tcu::TestLog::EndMessage;
return false;
}
}
for (GLuint c = 0; c < 3; c++)
{
glGetDoublei_v(target, c, &d);
if (d < static_cast<GLdouble>(min_values[c]))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Is " << d << " should be at least "
<< static_cast<GLdouble>(min_values[c]) << "." << tcu::TestLog::EndMessage;
return false;
}
}
for (GLuint c = 0; c < 3; c++)
{
glGetBooleani_v(target, c, &b);
if (b != (min_values[c] ? GL_TRUE : GL_FALSE))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Is " << b << " should be " << (min_values[c] ? GL_TRUE : GL_FALSE)
<< "." << tcu::TestLog::EndMessage;
return false;
}
}
return true;
}
bool Check(GLenum target, const GLint min_value)
{
GLint i;
GLint64 i64;
GLfloat f;
GLdouble d;
GLboolean b;
glGetIntegerv(target, &i);
if (i < min_value)
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << i << " should be at least "
<< min_value << "." << tcu::TestLog::EndMessage;
return false;
}
glGetInteger64v(target, &i64);
if (i64 < static_cast<GLint64>(min_value))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << i64 << " should be at least "
<< static_cast<GLint64>(min_value) << "." << tcu::TestLog::EndMessage;
return false;
}
glGetFloatv(target, &f);
if (f < static_cast<GLfloat>(min_value))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << f << " should be at least "
<< static_cast<GLfloat>(min_value) << "." << tcu::TestLog::EndMessage;
return false;
}
glGetDoublev(target, &d);
if (d < static_cast<GLdouble>(min_value))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << d << " should be at least "
<< static_cast<GLdouble>(min_value) << "." << tcu::TestLog::EndMessage;
return false;
}
glGetBooleanv(target, &b);
if (b != (min_value ? GL_TRUE : GL_FALSE))
{
m_context.getTestContext().getLog() << tcu::TestLog::Message << "Is " << b << " should be "
<< (min_value ? GL_TRUE : GL_FALSE) << "." << tcu::TestLog::EndMessage;
return false;
}
return true;
}
virtual long Setup()
{
m_program = 0;
m_buffer = 0;
return NO_ERROR;
}
virtual long Run()
{
const GLint work_group_count[3] = { 65535, 65535, 65535 };
if (!CheckIndexed(GL_MAX_COMPUTE_WORK_GROUP_COUNT, work_group_count))
return ERROR;
const GLint work_group_size[3] = { 1024, 1024, 64 };
if (!CheckIndexed(GL_MAX_COMPUTE_WORK_GROUP_SIZE, work_group_size))
return ERROR;
if (!Check(GL_MAX_COMPUTE_UNIFORM_BLOCKS, 12))
return ERROR;
if (!Check(GL_MAX_COMPUTE_TEXTURE_IMAGE_UNITS, 16))
return ERROR;
if (!Check(GL_MAX_COMPUTE_ATOMIC_COUNTER_BUFFERS, 8))
return ERROR;
if (!Check(GL_MAX_COMPUTE_ATOMIC_COUNTERS, 8))
return ERROR;
if (!Check(GL_MAX_COMPUTE_SHARED_MEMORY_SIZE, 32768))
return ERROR;
if (glu::contextSupports(m_context.getRenderContext().getType(), glu::ApiType::core(4, 5)))
{
if (!Check(GL_MAX_COMPUTE_UNIFORM_COMPONENTS, 1024))
return ERROR;
}
else
{
if (!Check(GL_MAX_COMPUTE_UNIFORM_COMPONENTS, 512))
return ERROR;
}
if (!Check(GL_MAX_COMPUTE_IMAGE_UNIFORMS, 8))
return ERROR;
if (!Check(GL_MAX_COMBINED_COMPUTE_UNIFORM_COMPONENTS, 512))
return ERROR;
const char* const glsl_cs =
NL "layout(local_size_x = 1) in;" NL "layout(std430) buffer Output {" NL " int g_output;" NL "};" NL
"uniform ivec3 MaxComputeWorkGroupCount;" NL "uniform ivec3 MaxComputeWorkGroupSize;" NL
"uniform int MaxComputeUniformComponents;" NL "uniform int MaxComputeTextureImageUnits;" NL
"uniform int MaxComputeImageUniforms;" NL "uniform int MaxComputeAtomicCounters;" NL
"uniform int MaxComputeAtomicCounterBuffers;" NL "void main() {" NL " g_output = 1;" NL
" if (MaxComputeWorkGroupCount != gl_MaxComputeWorkGroupCount) g_output = 0;" NL
" if (MaxComputeWorkGroupSize != gl_MaxComputeWorkGroupSize) g_output = 0;" NL
" if (MaxComputeUniformComponents != gl_MaxComputeUniformComponents) g_output = 0;" NL
" if (MaxComputeTextureImageUnits != gl_MaxComputeTextureImageUnits) g_output = 0;" NL
" if (MaxComputeImageUniforms != gl_MaxComputeImageUniforms) g_output = 0;" NL
" if (MaxComputeAtomicCounters != gl_MaxComputeAtomicCounters) g_output = 0;" NL
" if (MaxComputeAtomicCounterBuffers != gl_MaxComputeAtomicCounterBuffers) g_output = 0;" NL "}";
m_program = CreateComputeProgram(glsl_cs);
glLinkProgram(m_program);
if (!CheckProgram(m_program))
return ERROR;
glUseProgram(m_program);
GLint p[3];
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_COUNT, 0, &p[0]);
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_COUNT, 1, &p[1]);
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_COUNT, 2, &p[2]);
glUniform3i(glGetUniformLocation(m_program, "MaxComputeWorkGroupCount"), p[0], p[1], p[2]);
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_SIZE, 0, &p[0]);
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_SIZE, 1, &p[1]);
glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_SIZE, 2, &p[2]);
glUniform3iv(glGetUniformLocation(m_program, "MaxComputeWorkGroupSize"), 1, p);
glGetIntegerv(GL_MAX_COMPUTE_UNIFORM_COMPONENTS, p);
glUniform1i(glGetUniformLocation(m_program, "MaxComputeUniformComponents"), p[0]);
glGetIntegerv(GL_MAX_COMPUTE_TEXTURE_IMAGE_UNITS, p);
glUniform1iv(glGetUniformLocation(m_program, "MaxComputeTextureImageUnits"), 1, p);
glGetIntegerv(GL_MAX_COMPUTE_IMAGE_UNIFORMS, p);
glUniform1i(glGetUniformLocation(m_program, "MaxComputeImageUniforms"), p[0]);
glGetIntegerv(GL_MAX_COMPUTE_ATOMIC_COUNTERS, p);
glUniform1i(glGetUniformLocation(m_program, "MaxComputeAtomicCounters"), p[0]);
glGetIntegerv(GL_MAX_COMPUTE_ATOMIC_COUNTER_BUFFERS, p);
glUniform1i(glGetUniformLocation(m_program, "MaxComputeAtomicCounterBuffers"), p[0]);
GLint data = 0xffff;
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(GLint), &data, GL_DYNAMIC_DRAW);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLint), &data);
return data == 1 ? NO_ERROR : ERROR;
}
virtual long Cleanup()
{
glUseProgram(0);
glDeleteProgram(m_program);
glDeleteBuffers(1, &m_buffer);
return NO_ERROR;
}
};
class BasicBuildMonolithic : public ComputeShaderBase
{
virtual std::string Title()
{
return "Building CS monolithic program";
}
virtual std::string Purpose()
{
return NL "1. Verify that building monolithic CS program works as expected." NL
"2. Verify that program consisting from 3 compilation units links as expected." NL
"3. Verify that CS consisting from 2 strings compiles as expected.";
}
virtual std::string Method()
{
return NL "1. Create, compile and link CS using CreateShader, CompileShader and LinkProgram commands." NL
"2. Dispatch and verify CS program.";
}
virtual std::string PassCriteria()
{
return "Everything works as expected.";
}
virtual long Run()
{
const char* const cs1[2] = { "#version 430 core",
NL "layout(local_size_x = 1) in;" NL "void Run();" NL "void main() {" NL
" Run();" NL "}" };
const char* const cs2 =
"#version 430 core" NL "layout(binding = 0, std430) buffer Output {" NL " vec4 g_output;" NL "};" NL
"vec4 CalculateOutput();" NL "void Run() {" NL " g_output = CalculateOutput();" NL "}";
const char* const cs3 =
"#version 430 core" NL "layout(local_size_x = 1) in;" NL "layout(binding = 0, std430) buffer Output {" NL
" vec4 g_output;" NL "};" NL "vec4 CalculateOutput() {" NL " g_output = vec4(0);" NL
" return vec4(1, 2, 3, 4);" NL "}";
const GLuint sh1 = glCreateShader(GL_COMPUTE_SHADER);
GLint type;
glGetShaderiv(sh1, GL_SHADER_TYPE, &type);
if (static_cast<GLenum>(type) != GL_COMPUTE_SHADER)
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "SHADER_TYPE should be COMPUTE_SHADER." << tcu::TestLog::EndMessage;
glDeleteShader(sh1);
return ERROR;
}
glShaderSource(sh1, 2, cs1, NULL);
glCompileShader(sh1);
const GLuint sh2 = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(sh2, 1, &cs2, NULL);
glCompileShader(sh2);
const GLuint sh3 = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(sh3, 1, &cs3, NULL);
glCompileShader(sh3);
const GLuint p = glCreateProgram();
glAttachShader(p, sh1);
glAttachShader(p, sh2);
glAttachShader(p, sh3);
glLinkProgram(p);
glDeleteShader(sh1);
glDeleteShader(sh2);
glDeleteShader(sh3);
bool res = CheckProgram(p);
GLuint buffer;
glGenBuffers(1, &buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(vec4), &vec4(0.0f)[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
glUseProgram(p);
glDispatchCompute(1, 1, 1);
vec4 data;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer);
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(vec4), &data[0]);
if (!IsEqual(data, vec4(1.0f, 2.0f, 3.0f, 4.0f)))
{
m_context.getTestContext().getLog()
<< tcu::TestLog::Message << "Invalid value!" << tcu::TestLog::EndMessage;
res = false;
}
glDeleteBuffers(1, &buffer);
glUseProgram(0);
glDeleteProgram(p);
return res == true ? NO_ERROR : ERROR;
}
};
class BasicBuildSeparable : public ComputeShaderBase
{
virtual std::string Title()
{
return "Building CS separable program";
}
virtual std::string Purpose()
{
return NL "1. Verify that building separable CS program works as expected." NL
"2. Verify that program consisting from 4 strings works as expected.";
}
virtual std::string Method()
{
return NL "1. Create, compile and link CS using CreateShaderProgramv command." NL
"2. Dispatch and verify CS program.";
Numbered lists in HTML
A numbered list is a list of items, each with its own number. The style and type of a numbered list depend on the attributes of the ol tag, which is used to create it. Markers can have the following values:
• Arabic numerals;
• Uppercase letters;
• Lower-case letters;
• Capital Roman numerals;
• Lowercase Roman numerals;
Table 1 shows various parameters of the ol tag and their results.
Table 1. Various numbered lists.
HTML code
Example
<ol>
<li>text</li>
<li>text</li>
<li>text</li>
</ol>
Numbered list with default settings:
1. text
2. text
3. text
<ol start="5">
Numbered list that starts with five:
5. text
6. text
7. text
<ol type="A">
Numbered list with capital letters of the latin alphabet:
A. text
B. text
C. text
<ol type="a">
Numbered list with lower-case letters of the latin alphabet:
a. text
b. text
c. text
<ol type="I">
Numbered list with roman numerals:
I. text
II. text
III. text
<ol type="i">
Numbered list with lower-case roman numerals:
i. text
ii. text
iii. text
<ol type="1">
Numbered list with Arabic numerals:
1. text
2. text
3. text
<ol type="I" start="7">
List with Roman numerals that start with seven:
VII. text
VIII. text
IX. text
Example 1 shows how to create a list using Roman numerals.
Example 1. Creating numbered list.
<ol type="I" start="8">
<li>King Magnum XLIV</li>
<li>King Siegfried XVI</li>
<li>King Sigismund XXI</li>
<li>King Husbrandt I</li>
</ol>
The result of this example is shown below.
VIII. King Magnum XLIV
IX. King Siegfried XVI
X. King Sigismund XXI
XI. King Husbrandt I
Note that vertical spacing is automatically added before and after the list; this is one of the features of the ol tag.
Read also
Bulleted lists in HTML
Nested lists in HTML
Redis Monitoring Performance Metrics
Monitoring Redis, an in-memory data structure store, is crucial to ensure its performance, availability, and efficient resource utilization.
By tracking metrics such as command latency, memory usage, CPU utilization, and throughput, you can identify areas for optimization and fine-tune your Redis configuration for optimal performance.
Redis monitoring
Redis metrics to monitor
You can use a monitoring solution or agent to collect key performance metrics from Redis, such as CPU usage, memory consumption, throughput, latency, and cache hit rates. These metrics provide insights into the overall health and performance of your Redis instances.
Latency. Redis latency metrics measure the time it takes for Redis to process commands. Low and consistent latency is crucial for responsive applications.
Memory usage. Keep an eye on Redis memory usage to ensure it stays within acceptable limits. Monitor metrics like used memory, memory fragmentation, and eviction rates to identify potential memory leaks or inefficient memory utilization.
CPU utilization. Monitor CPU usage to ensure that your Redis instances are not under excessive load. High CPU usage can impact the responsiveness and performance of Redis. Track CPU metrics and identify any spikes or sustained high usage that may require optimization or scaling.
Throughput. Monitor the throughput of Redis by measuring the number of commands processed per second. Track metrics such as total commands, operations per second, or requests per second to understand the workload on your Redis instances and identify any sudden changes or spikes in traffic.
Keyspace. Monitor the size and distribution of keys within Redis to identify memory-consuming or frequently accessed keys. Analyze metrics like the number of keys, key sizes, and key eviction rates to optimize memory usage and performance.
Network. Monitor network traffic and bandwidth usage of Redis to identify any potential bottlenecks or network-related issues. Track metrics such as incoming and outgoing network traffic to ensure optimal network performance.
Logs. Monitor Redis logs for warnings, errors, or other important messages. Centralized log management solutions can help you analyze log data, detect patterns, and troubleshoot issues.
By monitoring these aspects of Redis, you can ensure optimal performance, troubleshoot issues promptly, and maintain the stability and reliability of your Redis infrastructure.
Redis INFO
When you run the INFO command in Redis, it returns a comprehensive set of metrics and information about the Redis server, allowing you to monitor its performance, resource usage, and operational aspects.
OpenTelemetry Redisopen in new window receiver periodically runs the INFO command to collect telemetry data from Redis and send it to your observability pipeline for analysis and monitoring.
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=406783,expires=406783,avg_ttl=5427303
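For programmatic checks you can parse this text format yourself — it is just `key:value` lines with `#` section headers. The `parse_info` helper below is a hypothetical sketch, not part of any Redis client library:

```python
def parse_info(raw: str) -> dict:
    """Parse the 'key:value' text returned by Redis INFO into a flat dict.

    Section headers such as '# Keyspace' and blank lines are skipped.
    """
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info
```

Applied to the keyspace output above, `parse_info(raw)["db0"]` returns the raw per-database counters string, which can then be split further on `,` and `=`.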
Memory fragmentation
Memory fragmentation occurs when memory is allocated and deallocated, leading to unused gaps between allocated blocks of memory. This fragmentation can lead to inefficient memory usage and increased memory footprint.
Redis manages memory using its own memory allocator. When objects are created, Redis allocates memory blocks to store them. Over time, objects can be deallocated, resulting in gaps or holes in the allocated memory.
Fragmentation can lead to higher overall memory usage in Redis because fragmented memory blocks cannot be reused effectively. Allocating memory for new objects can become slower and more resource intensive because of the need to look for contiguous blocks of memory.
It is important to monitor memory fragmentation levels in Redis using tools like INFO memory command or third-party monitoring solutions. If fragmentation becomes a concern, you can take proactive measures such as periodically restarting Redis, adjusting memory policies, or utilizing external memory allocators.
127.0.0.1:6379> info memory
# Memory
mem_fragmentation_ratio:1.09
mem_fragmentation_bytes:3640288
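The ratio above is essentially resident (RSS) memory divided by logical memory. The helper and thresholds below are a sketch based on commonly cited rules of thumb (roughly: below 1.0 suggests swapping, above 1.5 suggests fragmentation), not hard limits:

```python
def fragmentation_status(used_memory_rss: int, used_memory: int) -> str:
    """Classify memory health from two INFO memory counters."""
    ratio = used_memory_rss / used_memory
    if ratio < 1.0:
        return "swapping"    # part of Redis memory is likely swapped to disk
    if ratio > 1.5:
        return "fragmented"  # significant allocator/fragmentation overhead
    return "healthy"
```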
Active defragmentation
Defragmentation in Redis works by scanning the keyspace and moving Redis objects (such as strings, lists, sets, etc.) to new locations, thereby eliminating fragmentation. It works in the background, ensuring that the process does not interfere with normal Redis operations.
Active defragmentation is an optional feature and needs to be enabled in the Redis configuration. You can enable it by setting the activedefrag option to yes in the Redis configuration file or by using the CONFIG SET command.
CONFIG SET activedefrag yes
Note that enabling active defragmentation may introduce some additional CPU and I/O overhead due to the memory relocation process. Therefore, it's essential to monitor the overall system performance when using this feature.
Redis Latency Monitor
Redis latency monitoringopen in new window is crucial for assessing the performance of your Redis instance and identifying potential bottlenecks or issues.
You can track Redis command latency using the LATENCY command in Redis. The LATENCY DOCTOR command provides a summary of the latency distribution, while the LATENCY HISTORY command provides historical information about latency measurements.
You can enable the latency monitor at runtime with the following command:
CONFIG SET latency-monitor-threshold 100
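With the threshold enabled, collected events can then be inspected from redis-cli; the `command` event name below is one of Redis' built-in latency events:

```
127.0.0.1:6379> LATENCY LATEST
127.0.0.1:6379> LATENCY HISTORY command
127.0.0.1:6379> LATENCY DOCTOR
127.0.0.1:6379> LATENCY RESET
```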
Redis SlowLog
Redis slowlogopen in new window allows you to track and analyze Redis commands that exceed a certain execution time threshold. When enabled, Redis will log these slow commands in a dedicated slowlog, providing valuable insights into the performance of your Redis instance.
To enable the slowlog feature, you can set the `slowlog-log-slower-than` configuration parameter in your Redis configuration file or use the `CONFIG SET` command to configure it dynamically. This parameter specifies the threshold execution time in microseconds, and any command that exceeds this threshold will be logged to the slowlog.
You can retrieve slowlog entries using the `SLOWLOG GET` command. This command allows you to retrieve a specified number of slowlog entries or to retrieve all entries. Each slowlog entry is represented as an array of values containing relevant information about the slow command.
By default, Redis stores the last 128 entries, but you can configure the maximum number of entries using the slowlog-max-len configuration parameter.
Redis slowlog provides a valuable tool for identifying and analyzing slow commands, helping you optimize the performance of your Redis instance and troubleshoot potential performance issues. It is particularly useful for understanding and addressing bottlenecks in your Redis workload.
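A typical session looks like the following (the 10000 µs threshold and 128-entry cap are illustrative values, not recommendations):

```
127.0.0.1:6379> CONFIG SET slowlog-log-slower-than 10000
127.0.0.1:6379> CONFIG SET slowlog-max-len 128
127.0.0.1:6379> SLOWLOG GET 10
127.0.0.1:6379> SLOWLOG LEN
127.0.0.1:6379> SLOWLOG RESET
```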
Monitoring Redis with Uptrace
Uptrace is an open source APMopen in new window that provides comprehensive monitoring capabilities for Redis, allowing you to gain visibility into your Redis infrastructure and make informed decisions based on the collected data.
Uptrace Overview
Uptrace uses OpenTelemetryopen in new window to instrument code and collect traces, metrics, and logs. OpenTelemetry specifies how to collect and export telemetry data. With OpenTelemetry, you can instrument your application once and then add or change vendors without changing the instrumentation.
Conclusion
Redis monitoring plays a vital role in maintaining the stability, performance, and security of your Redis infrastructure. It enables you to optimize resource utilization, troubleshoot issues efficiently, and make data-driven decisions to ensure the reliable and efficient operation of your Redis deployment.
Last Updated:
Uptrace is an open source APM and DataDog alternative that supports OpenTelemetry traces, metrics, and logs. You can use it to monitor apps and set up alerts to receive notifications via email, Slack, Telegram, and more.
Uptrace can process billions of spans on a single server, allowing you to monitor your software at 10x less cost. View on GitHub →
Uptrace Demo
0
votes
2answers
75 views
Getting errors trying to run shell script (Linux)
I'm trying to run the following script. I have run it before without issue, but now I encounter an error. #!/bin/bash # init function pause(){ read -p "$*" } echo & echo "(Website):" ...
1
vote
1answer
82 views
Bash error while running script
I have a CentOS 6.5 64-bit dedicated server. The only thing I done on it is yum install java7, so I have not installed any other stuff. So in the directory /root I made this file (test.sh) ...
0
votes
0answers
13 views
Make a script identify DBUS session address and use it
I'm using cron with notify-send to display some notifications, but it requires DBUS session to run Here is the script: export DBUS_SESSION_BUS_ADDRESS=[session] #!/bin/sh { DISPLAY=:0 ...
-1
votes
2answers
25 views
What is the risk when editing crontab file without the “crontab -e” command?
I developed a script in which I add lines to the crontab file with echo command and I remove lines with sed command. I do not know the risk of that especially that I find in some web site that we ...
-1
votes
2answers
77 views
How can I change the “/” path in linux [closed]
When I type cd /, it changes my directory to a default one. How can I override the current settings so that / points to a different directory.
0
votes
2answers
24 views
Copy shell configuration from one machine to another?
I've got login to another server where shell configurations are not like I would want. For example I don't see my username and CWD on prompt, and when I press arrow key, instead of giving last entered ...
2
votes
1answer
175 views
Redirect stdout/stderr of a background job from console to a log file?
I just run a job (assume foo.sh). ./foo.sh [Press Ctrl-Z to stop] bg # enter background And it generate output to stdout and stderr. Is there any method to redirect to stdout and stderr to other ...
1
vote
0answers
166 views
execl: couldn't exec `/bin/sh'
I accidentally moved the whole bin, boot, dev and etc directory. After moving everything back I get the following error email from various cron jobs: execl: couldn't exec `/bin/sh' execl: No such ...
0
votes
1answer
93 views
How can I encode an url for wget?
I am looking for a way to convert a string to a clean url. For example : wget http://myurl.com/toto/foo bar.jpg This is going to download http://myurl.com/toto/foo and http://bar.jpg. I want to ...
0
votes
1answer
56 views
Issues with this sh script?
Having issues executing this sh script...i don't see anything wrong with it, does someone else see something I screwed up? echo "**************************************" echo "*** ...
0
votes
1answer
28 views
Is there a way to pass a statement via an argument in shell?
What I'm trying to achieve is to set the title of a widget when computer goes to idle: xsidle.sh echo mywidget.title = "idle" | awesome-client when i remove xsidle.sh it works flawlessly. xsidle.sh ...
1
vote
1answer
153 views
Why do I sometimes get 'sh: $'\302\211 … ': command not found' in xterm/sh?
Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is command name I type): sh: ...
1
vote
1answer
51 views
bash says '[[:' not found. what does that mean?
While running a bash script, I get this error: sh: [[: not found How can I find what where is the problem?
1
vote
1answer
1k views
Why does subshell not inherit exported variable (PS1)?
I am using startx to start the graphical environment. I have a very simple .xinitrc which I will add things to as I set up the environment, but for now it is as follows: catwm & # Just a basic ...
0
votes
3answers
170 views
Write pipe result to variable
I need to be able to write whether the test for a grep is either TRUE or FALSE to a variable so I can use it later For the following, if I run defaults read com.apple.Finder | grep ...
0
votes
1answer
474 views
useradd denies home directory creation Ubuntu 13.04
I'm trying to set up a git user, but am getting the error useradd: cannot create directory /srv/data/git when running this command sudo useradd \ --create-home ...
0
votes
1answer
67 views
How to make an alias and work in sh for all users (not in bash)?
My question maybe is silly, but how can I make an alias so as to work in sh for all users? I know that I can put an alias, let say this one: alias ls='ls -l' in /etc/bash.bashrc so as to work in ...
1
vote
1answer
436 views
Linux: How to eval the contents of STDOUT?
Consider you have a file with some sort of terminal command. How might each line be executed? Can you pipe the output of more into an eval? %> more ./foo.txt Edit: After some help/guidance ...
2
votes
2answers
355 views
Allow any user/password entry in ssh
I want to setup ssh so that one can login with any username or password and be accepted. If they chose a username and password of an actual user on the system they will successfully login and use ssh ...
0
votes
1answer
233 views
Bash: Copy logs to other file and remove copied log
I am using linux and doing some bash scripting. I have a log file which is making logs of all the events/changes our in particular directory. I need to copy these logs to some other file so that I can ...
2
votes
4answers
1k views
How do I start in bash when ssh'ing into my server?
I have a hosted Debian server. When I log in via ssh, I'm greeted with a sh environment. How do I change it so I start in a bash environment?
2
votes
3answers
1k views
How can cat/print a file except the last 2 lines?
How can get a file, except the last (for instance) 2 lines, with standard or GNU shell tools? (bash, tail, cat and so on.)
0
votes
1answer
136 views
I changed the symbolic link /bin/sh by accident
I am working on Ubuntu 11.1. The symlink /bin/sh pointed to dash in my system. /bin/bash pointed to sh. I accidentally changed /bin/sh to point to /bin/bash. Now, I can't open the terminal. How can I ...
2
votes
2answers
192 views
Record bash_history to private database for all users?
I want to be able to setup a MySQL database and record all commands issued on my server by username and command. Is this possible, and if so, how would I set it up?
1
vote
3answers
374 views
How to find out environment variables set after logging into a shell session
How can I find out the environment variables set after logging into a shell session? My two dummy solutions have been so far as follows: _VariableName1="VarValue1";export _VariableName1; ...
4
votes
3answers
7k views
How do I change my default shell to bash if I don't have access to chsh nor /etc/passwd?
I'm working on a university remote Linux account, and the default shell is sadly csh without tab completion. How can I change my account's default shell to bash? chsh is not available.
0
votes
2answers
692 views
How to use sudo with rcp command to copy files from linux host to HP-UX host?
I'm having this issue where when I try to use sudo to rcp some files from a Linux host to an HP-UX host (note that the destination directory requires root access to write to), I get the following ...
0
votes
1answer
1k views
sh: time command not found
In llvm 3.0 test-suite, the snippet of code bellow gives the following error on bash: sh: time command not found if [ "x$RHOST" = x ] ; then ( sh -c "$ULIMITCMD $TIMEIT -p sh -c '$COMMAND ...
0
votes
0answers
223 views
bash script rm cannot delete folder created by php mkdir
I cannot delete folder created by php mkdir for I in `echo $*` do find $I -type f -name "sess_*" -exec rm -f {} \; find $I -type f -name "*.bak" -exec rm -f {} \; find $I -type f -name ...
0
votes
1answer
252 views
Loop until user presses 'C' in sh file
What I want is as follows: At first when user runs .sh file it displays following: Review id: You id is:XXX000YYY Do you want to change it?[Press Y to change, C to continue]: Now If user presses Y ...
1
vote
3answers
729 views
Renaming a file from myFile.sh to myFile.bash?
PART 1 Ok so i'm using Mac and I started out with a file called... Example: myFile.sh In Terminal I ran this file in its directory by typing... bash myFile.sh That works perfectly, but then I did ...
1
vote
1answer
931 views
sh alias: command not found
I have written a very simple script like this: function apply_to_dev { echo "Applying scripts to DEV..." alias ISQL="isql -Uuser -Ppwd -SDEV -DDATA -I ~/bin/interfaces" shopt -s nullglob ...
2
votes
1answer
359 views
sh on Lion can't cd into folders with implicit paths (causing make to fail constantly)
I've had a problem with my OSX 10.7 Lion install for some time and I finally took some time to investigate. The issue is that when running make, I always get an error of the form: /bin/sh: line 0: ...
5
votes
3answers
137 views
CD into directory with “-” in the beginning
I have a problem with cd-ing into folder named "--Recovery Files" and can't remember how to escape dashes in the filename. Any ideas?
2
votes
2answers
507 views
Log Files from bash script output
I have a script that runs (this works fine). I'd like to produce logfiles from its output and still show it on screen. I have this command that creates three files from this blog: ((./fk.sh ...
2
votes
2answers
604 views
How run sudo -s after ssh login
My Step By Step: On myserver.com i paste line "sudo -s" to file "~/.bashrc" in home directory for "mylogin" SSH Login to [email protected] After login: [email protected]:~$ But, press cntrl+D ...
0
votes
1answer
1k views
How to check for program exit code in Linux?
I want to check in a shell script whether subversion is installed. For that I chose to check the exit code after the program has been executed. I tried using command svn, but that prints out the ...
7
votes
2answers
2k views
Comments in a multi-line bash command
This single-command BASH script file is difficult to understand, so I want to write a comment for each of the actions: grep -R "%" values* \ | sed -e "s/%/\n%/" \ | grep "%" \ | grep -v " % " \ | ...
9
votes
3answers
2k views
How is install -c different from cp
What is the difference between install -c and cp? Most installations tend to use install -c, but from the man page it doesn't sound like it does anything different than cp (except maybe set ...
8
votes
2answers
705 views
Making bash TAB completion more like cmd.exe [duplicate]
I wondered if theres a way to do rotational style completion in bash similar to the behavoir on cmd.exe, I've found it speeds me up in regard to entering commands
1
vote
3answers
239 views
Read everything that has been echo'd and 'errored' to terminal window?
I've inherited a complex shell script running on OSX that gets run on a crontab. Within the script I would like to periodically read everything in the terminal window and write it to another file... ...
1
vote
2answers
3k views
What happens to the environment when you run “su -c”?
What happens to the environment when you run "su -c"? The reason I ask, is this mysterious behavior: bash$ which firefox /usr/local/bin/firefox bash$ su - user -c "echo $PATH" ...
3
votes
1answer
400 views
Forcibly break a for loop in sh
If I run a for loop on the command line in sh, and I press control-C, it usually cancels the current running process, so I need to hold ^C until the shell itself catches it and breaks the loop. Is ...
4
votes
2answers
217 views
shell dotfiles and *rcs: what's a sane setup?
A bash user will eventually end up with .bashrc, .bash_profile, .profile, and maybe some more. Now, each file gets loaded unders particular situations, and it all leads to confusion and frustration. ...
4
$\begingroup$
I'm trying to figure out the distribution of this statistic:
$$S=\frac{\frac{\overline{X}-\mu_0}{\sigma / \sqrt{n}}}{\sqrt{\hat{\sigma}^2/\sigma^2}}$$
Where:
• $\overline{X}=\frac{1}{n} \sum_{i=1}^n X_i$
• $\hat{\sigma}^2=\frac{1}{n-1} \sum_{i=1}^n (X_i-\overline{X})^2$
And $X_i \sim N(\mu, \sigma^2)$ i.i.d.
The numerator is easy, it is a classical standardization. Thus:
$$\frac{\overline{X}-\mu_0}{\sigma / \sqrt{n}} = Z \sim N(0,1)$$
For what it concerns the denominator: $$\frac{\hat{\sigma}^2}{\sigma^2}=\frac{\frac{1}{n-1} \sum_{i=1}^n (X_i-\overline{X})^2}{\sigma^2}=\frac{\frac{1}{\sigma^2}\sum_{i=1}^n (X_i-\overline{X})^2}{n-1}=\frac{\sum_{i=1}^n \left (\frac{X_i-\mu}{\sigma}\right )^2}{n-1}$$
Now, since $Z_i=\frac{X_i-\mu}{\sigma}$ is standard normal, then $$\sum_{i=1}^n \left (\frac{X_i-\mu}{\sigma}\right )^2=\sum_{i=1}^n Z_i^2 \sim \chi^2_n$$
This means that:
$$\frac{\hat{\sigma}^2}{\sigma^2} = \frac{\chi^2_n}{n-1}$$
Combining the results above leads me to this final result:
$$\frac{\frac{\overline{X}-\mu_0}{\sigma / \sqrt{n}}}{\sqrt{\hat{\sigma}^2/\sigma^2}}=\frac{N(0,1)}{\sqrt{\frac{\chi^2_n}{n-1}}} \sim t_n$$
Where $t_n$ is a Student distribution with $n$ degrees of freedom. This result must be wrong, because the right one should give a Student with $n-1$ degrees of freedom. Something is clearly missing but I cannot understand what it is. Could you please help me?
$\endgroup$
• 1
$\begingroup$ The $\mu$ in the equation following "For what it concerns the denominator:" should be a $\bar X$. $\endgroup$ – JimB Aug 17 '15 at 20:21
• $\begingroup$ You don't need to call in the true mean $\mu$ - the sample variance (using the sample mean) has a finite-sample chi-square distribution, if the sample is i.i.d. normal. $\endgroup$ – Alecos Papadopoulos Aug 17 '15 at 21:58
5
$\begingroup$
What you are missing is that the Student distribution does take on the degrees of freedom of the chi-square in the denominator. I.e. the formula on the left is the definition of $t_{n-1}$, not of $t_{n}$. In other words, you don't miss anything.
Just to make this a bit more rich, see the derivation of the Student's density in this post. Note that in the linked post, the $n$ degrees of freedom of the chi-square are arbitrary -the important thing is to see that the same $n$ ends up being the degrees of freedom of the Student distribution.
In your example the degrees of freedom are $n-1$.
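(Not from the original answer; a quick simulation sketch of the claim. If the statistic follows $t_{n-1}$, its empirical variance should match $(n-1)/(n-3)$, not $n/(n-2)$. With $n=6$ that is $5/3 \approx 1.67$ for $t_5$ versus $1.5$ for $t_6$. Everything here uses only the standard library.)

```python
import random
import statistics

random.seed(0)
n = 6                      # sample size, so the claim is t with n - 1 = 5 df
mu, sigma = 0.0, 1.0       # true parameters of the normal population

t_stats = []
for _ in range(100_000):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(x)
    s = statistics.stdev(x)                # sample sd, n - 1 denominator
    t_stats.append((xbar - mu) / (s / n ** 0.5))

v = statistics.pvariance(t_stats)
# t_5 has variance 5/3 ~ 1.67, while t_6 would have 6/4 = 1.5
print(round(v, 3))
```

The simulated variance lands near $5/3$, consistent with $n-1$ degrees of freedom.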
$\endgroup$
• $\begingroup$ Ok, thank you very much for the explanation and the interesting link. Though, there is an error in what I have done. Following the definition of the Student distribution on the link, the denominator in the last ratio should have been $$\sqrt{\frac{\chi^2_{n-1}}{n-1}}$$ instead of $$\sqrt{\frac{\chi^2_{n}}{n-1}}$$ The mistake is that I arbitrarily and erroneously substituted the sample mean ($\overline{X}$) with the true mean ($\mu$), as you said above; this has caused the $\chi^2$ distribution to have $n$ degrees of freedom instead of $n-1$. $\endgroup$ – ChicagoCubs Aug 18 '15 at 22:27
• $\begingroup$ @ChicagoCubs Yes, you got that right. $\endgroup$ – Alecos Papadopoulos Aug 19 '15 at 3:19
How To Draw A Circle In Python
In this tutorial, we will learn how to draw a circle in Python using different approaches. Python provides a variety of libraries and packages that facilitate the process of drawing shapes, including circles. We will mainly focus on the matplotlib library and the turtle module, which are popular choices for creating simple graphics and shapes.
First, let’s make sure you have the required libraries installed. Open your terminal or command prompt and install matplotlib by running the following command:
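The command itself did not survive extraction; for a standard pip setup it is presumably:

```shell
pip install matplotlib
```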
Now that you have the required library installed, let’s explore the methods to draw a circle.
Method 1: Draw a Circle using matplotlib
Step 1: Import the required library
You will need matplotlib.pyplot and matplotlib.patches.Circle for drawing a circle. Import these using the following lines of code:
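The import lines were lost in extraction; the standard form would be:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
```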
Step 2: Determine the circle’s center and radius
Choose the coordinates of the circle’s center point and set its radius:
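The actual values were also lost in extraction; as an illustration (these particular numbers are assumptions, not from the original tutorial):

```python
center = (0, 0)  # (x, y) coordinates of the circle's center, illustrative value
radius = 2       # circle radius, illustrative value
```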
Step 3: Create the figure and axis
Here, we create a new figure and axis using plt.subplots() function:
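Presumably the standard call:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()  # one figure containing a single axis
```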
Step 4: Create the circle
Use the Circle class to create a circle object using the center and radius defined earlier.
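A sketch of this step; the `fill=False` styling is an assumption so the circle renders as an outline, and the center and radius repeat the illustrative values from Step 2 so the snippet runs on its own:

```python
from matplotlib.patches import Circle

center, radius = (0, 0), 2                   # illustrative values from Step 2
circle = Circle(center, radius, fill=False)  # outline-only circle patch
```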
Step 5: Add the circle to the axis
Now, we need to add the circle object to the axis:
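The attachment is done with `add_patch`; the setup lines are repeated here so the snippet is self-contained:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

fig, ax = plt.subplots()
circle = Circle((0, 0), 2, fill=False)
ax.add_patch(circle)  # attach the circle patch to the axis
```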
Step 6: Set the limits and aspect ratio
We will set the limits for the x and y axes and make sure that the aspect ratio is equal so that the circle does not appear distorted:
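A sketch with illustrative limits chosen a bit wider than the radius from Step 2:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-3, 3)      # illustrative limits, slightly wider than the radius
ax.set_ylim(-3, 3)
ax.set_aspect("equal")  # equal aspect ratio so the circle is not distorted
```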
Step 7: Display the circle
Finally, use the show() method to display the generated circle:
Here is the complete code for drawing a circle using matplotlib:
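The complete listing is missing from the extracted page; reassembling the steps above (the center, radius, and axis limits are illustrative values, not from the original):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Step 2: center and radius (illustrative values)
center = (0, 0)
radius = 2

# Steps 3-5: create the figure, build the circle patch, attach it to the axis
fig, ax = plt.subplots()
circle = Circle(center, radius, fill=False)
ax.add_patch(circle)

# Step 6: limits and equal aspect so the circle is not distorted
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
ax.set_aspect("equal")

# Step 7: display the figure
plt.show()
```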
Method 2: Draw a Circle using turtle
The turtle module is another way of drawing shapes like circles in Python. Turtle graphics is a popular method for introducing programming to kids and is built-in to Python.
Step 1: Import the turtle module
Start by importing the turtle module:
Step 2: Create the turtle object
Step 3: Draw the circle
Using the circle() method, ask the turtle to draw a circle with the specified radius:
Step 4: Close the turtle window
To exit the turtle graphics window, use the following code:
Here is the complete code for drawing a circle using the turtle module:
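This listing did not survive extraction either; a minimal reconstruction of the steps above (the radius of 100 is an assumed value, and note that this opens a graphics window, so it needs a display to run):

```
import turtle

t = turtle.Turtle()  # Step 2: create the turtle object
t.circle(100)        # Step 3: draw a circle with radius 100 (assumed value)
turtle.done()        # Step 4: keep the window open until it is closed
```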
Conclusion
We have learned two methods for drawing circles in Python, using the matplotlib library and the turtle module. Both methods can be used to create simple graphics and shapes. You can try customizing the appearance of the circles, such as line width, color, or transparency, by exploring the library documentation and experimenting with different parameters. Happy coding!
Build an histogram in R Studio - Beginner
Thank you, @MLaure19! This is the first step in building what's called a 'reprex', which is the ideal way to post questions so folks can help you more easily. (See FAQ: How to do a minimal reproducible example ( reprex ) for beginners for helpful details.) I'll add the assignment to my_table to make it easier to copy:
my_table <-
  structure(list(X = structure(c(1L, 5L, 2L, 4L, 3L),
                               .Label = c("Bresson", "Claix ", "Corenc",
                                          "Coteau", "Plaine "),
                               class = "factor"),
                 X8.5 = c(1L, 0L, 0L, 1L, 0L),
                 X8.6 = c(8L, 0L, 0L, 2L, 1L),
                 X8.7 = c(2L, 1L, 0L, 1L, 2L),
                 X8.8 = c(0L, 2L, 0L, 0L, 1L),
                 X8.9 = c(0L, 1L, 1L, 0L, 0L),
                 X9   = c(0L, 0L, 3L, 0L, 0L),
                 X9.1 = c(0L, 0L, 1L, 0L, 0L)),
            row.names = c(NA, 5L), class = "data.frame")
Is this all the data you're working with? It seems quite small to require a histogram -- could you say what you need it for? That might help folks understand your context better.
Homework Help: God how do you prove geometry theorems?
1. Sep 9, 2003 #1
I would really appreciate it if someone can help me... We are doing basic vectors about adding, multiplying by scalars, subtracting etc... I have a problem with the following question and something else in general:
1)I attached an image and I have to determine what vectors AB-BC+CD-DE+EF-FA will equal
IMAGE AT: http://f1.pg.briefcase.yahoo.com/bc/robvdamm/lst?.dir=/Math [Broken]
Click on "vector"
2)How can I prove a theorem?
What I did for number one is that I drew another point in the middle and called it G (it's not in the picture so you have to imagine there is a point in the middle) and I said vector GC is parallel to AB. BUT my teacher says prove that, how the hell can I prove it? I would kill geometry if I understood how to prove the stuff... Any help is much appreciated, thanks!
Last edited by a moderator: May 1, 2017
3. Sep 9, 2003 #2
dduardo
Staff Emeritus
Just by looking at the picture visually, It looks like the answer is going to be zero.
Here is the method I took:
(AB->) = -(DE->)
(CD->) = -(FA->)
(BC->) = -(EF->)
Right off the bat I can eliminate 3 terms
(AB->) - (BC->) + (CD->) - (DE->) + (EF->) - (FA->)
becomes...
2 * [ (EF->) - (FA->) - (DE->) ]
Then by breaking up each vector into its i^ (x) and j^ (y) components I cancel EFj^ and FAj^ which leaves me with EFi^ + FAi^. But since the picture is of a regular hexagon EFi^ = FAi^
so I can say 2* [ 2*EFi^ - (DE->) ]
Again, knowing that it is a regular hexagon, 2*EFi^ - (DE->) = 0
You can prove this by using the interior angles (120 degrees) and plugging in some value for the side length of the hexagon to check my solution.
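(A quick numeric check, not part of the original 2003 thread, using complex numbers for a regular hexagon ABCDEF with unit circumradius centered at the origin:)

```python
import cmath

# Vertices A..F of a regular hexagon, counter-clockwise, circumradius 1
A, B, C, D, E, F = (cmath.exp(1j * k * cmath.pi / 3) for k in range(6))

# AB - BC + CD - DE + EF - FA, writing each side vector as (end - start)
result = (B - A) - (C - B) + (D - C) - (E - D) + (F - E) - (A - F)
print(abs(result))  # ~0 up to floating-point error
```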
4. Sep 9, 2003 #3
HallsofIvy
Science Advisor
I would think (1) would be kind of obvious. Well, almost obvious. Your picture is of a regular hexagon (I assume that the length of all vectors is the same. That's what it looks like. You don't specifically say that. Was it given that the lengths are all equal?) with the vectors all going "counter-clockwise". Did you notice that to every vector there is an "equal and opposite" vector- the side opposite it? If the problem were AB+ BC+ CD+ DE+ EF+ FA, then it really would be obvious! Either because the "equal and opposite" vectors cancel or simply because the "arrows" go around in a circle and end exactly where they started.
Since the problem is AB - BC + CD - DE + EF - FA, we can note that -DE = AB, -BC = EF and -FA = CD, so this is the same as 2(AB + CD + EF). Well, I'll be darned! Those three vectors form an equilateral triangle: that sum is 0 too!
There is no "Royal Road" to proving statements. As far as your "GC" is concerned, I would look at the picture and suspect that the four points A, B, C, G form a parallelogram. If I could prove that was a parallelogram, then, of course, opposite sides are congruent. If I remember my geometry correctly, one way to prove "parallelogram" is to prove one pair of opposite sides is both
parallel and congruent. How would I prove that? Well, since this is a regular hexagon, what do I know about the angles? Hmm, If I were to draw a line from the center (G) to EVERY vertex, that would divide the center into 6 congruent angles: 360/6= 60. That seems awfully familiar. Aha! equilateral triangles! That divides the hexagon into 6 equilateral triangles! That at least proves that side AG is congruent to side BC. Now look at the angles: Angle GAB is, of course, 60 degrees. If I extended AB past B what would that angle be?
The three angles, ABG, CBG and this (unnamed) angle form a straight line so add to 180 degrees. ABG and CBG are both 60 degrees (from above) and so add to 120 degrees; that leaves, for that external angle, 180 - 120 = 60 degrees! OK!! That's the same as angle GAB! Remember "if corresponding angles on a transversal are congruent then the lines are parallel"? Now that I know AG and BC are both congruent and parallel it follows that ABCG is a parallelogram and thus CG is parallel to AB.
Do you see what I did? I looked at the picture and I saw a parallelogram. That started me thinking about everything I knew about parallelograms and shifted the problem from "prove AB and GC are parallel" to "prove ABCG is a parallelogram". Once I was thinking of that, I thought about opposite sides being congruent and parallel and then to angles. You have to think "What would have to be true so that one of the theorems I know will give me the result I want? What do I have to know to prove THAT is true?" and just keep working back until you get to something you are given.
By the way, the way I did (1), I asserted that "Those three vectors form an equilateral triangle". I'm sure your teacher would insist that I prove that too! I would do that just the way I did above: Draw lines from each vertex to its opposite vertex, dividing the hexagon into equilateral triangles. I would then use the fact that the angles are all 60 degrees to show that the 3 vectors AB, CD, and EF, moved together, form an equilateral triangle.
5. Sep 9, 2003 #4
dduardo
Staff Emeritus
A little late hitting submit, HallsofIvy
I like vector notation better
6. Sep 9, 2003 #5
Thanks for the replies guys. Yes you both say that AB=-DE and BC=-EF etc... I told my teacher the same thing and she says to prove that, I told her it's obvious but she says that I can't just do that, I have to somehow make it more obvious to her. Also, dduardo, what do you mean by "i^ (x) and j^ (y)"?
7. Sep 9, 2003 #6
dduardo
Staff Emeritus
i^ or eye hat is the x unit vector
j^ or jay hat is the y unit vector
I put x and y in parentheses just to make it clear I was dealing with the x and y components
I don't know if you're familiar with this type of notation, but each vector can be broken down into its spatial components.
a vector (v->) in two dimensions can be broken down into x and y components
vi^ = |v->| cos( theta )
vj^ = |v->| sin( theta )
where |v->| is the magnitude of the vector
and theta is the angle of the vector
Just imagine a right triangle where you know the hypotenuse (vector magnitude) and you want to figure out the two legs.
8. Sep 10, 2003 #7
HallsofIvy
User Avatar
Science Advisor
First point: ARE you given that this is a regular hexagon???
You don't say that anywhere in your post. If you aren't given that, then opposite sides are NOT necessarily parallel and you can't answer this question.
If you are given that, then you can start looking at the angles formed (the interior angles are all 120 degrees, so the "exterior angles" are all 60 degrees) and use "corresponding angles", etc. to prove parallel lines.
(I like your teacher!)
DDuardo: Yes, using vectors is easier than a pure geometric proof but the original post did refer to proving lines are parallel (and the title was "God How Do You Prove a Theorem"). While I can't speak for how God would do it, I was trying to show how I would.
You should be able to use the angles, a little trigonometry (or just calculations based on the equilateral triangle) to find the x, y components of each vector (what DDuardo meant by i^ and j^). Then just do the arithmetic.