Tonny Batya
Web3 Development Tools and Frameworks: Building Decentralized Applications
Introduction
The advent of blockchain technology and decentralized systems has paved the way for the next evolution of the internet. Web3, also known as the decentralized web, holds the promise of a more open, secure, and user-centric online experience. As Web3 gains momentum, developers are seeking powerful tools and frameworks to create decentralized applications (dApps) that harness the full potential of this transformative technology. In this blog post, we will delve into the world of Web3 development tools and frameworks and explore a comprehensive list of essential components for building the future of decentralized applications.
Solidity
The Language of Smart Contracts

At the heart of many Web3 applications lies the concept of smart contracts, self-executing contracts with predefined rules encoded on the blockchain. Solidity is the most popular language for writing smart contracts on the Ethereum blockchain. With its syntax similar to JavaScript, Solidity enables developers to create complex smart contracts that power decentralized applications.
Truffle
The Development Framework for Ethereum dApps

Truffle is a robust development framework and testing suite specifically designed for Ethereum dApp development. It provides a suite of tools that simplify the development process, including smart contract compilation, migration, and testing. Truffle also offers built-in support for popular development frameworks like React, making it easier for developers to create user interfaces that interact with the Ethereum blockchain.
Ganache
Local Ethereum Network for Testing

Ganache, formerly known as TestRPC, is a personal Ethereum blockchain that allows developers to test their dApps in a local environment. It provides a suite of developer-friendly features, such as instant mining, customizable accounts, and transaction control, making it an indispensable tool for testing and debugging smart contracts before deploying them on the mainnet.
Web3.js
JavaScript Library for Web3 Integration

Web3.js is a JavaScript library that enables developers to interact with the Ethereum blockchain and build decentralized applications. It provides a comprehensive set of APIs for managing accounts, sending transactions, and querying smart contracts. With Web3.js, developers can create user interfaces that seamlessly integrate with the Ethereum network, enabling users to interact with dApps using their web browsers.
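As a rough sketch of what this looks like in practice (the node URL and the zero address below are just placeholders), reading an account's Ether balance with Web3.js might look like this:

// Assumes the web3 npm package and an Ethereum node reachable at localhost:8545.
const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545');

async function showBalance() {
  const address = '0x0000000000000000000000000000000000000000'; // placeholder address
  const wei = await web3.eth.getBalance(address); // balance in wei, as a string
  console.log(web3.utils.fromWei(wei, 'ether'), 'ETH');
}

showBalance();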
IPFS
Distributed Storage for Web3 Applications

As Web3 applications aim to decentralize not only the logic but also the data, the InterPlanetary File System (IPFS) plays a crucial role in decentralized storage. IPFS is a distributed peer-to-peer file system that allows developers to store and retrieve files in a decentralized manner. By leveraging IPFS, dApp developers can ensure data integrity, censorship resistance, and faster content delivery for their applications.
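For a flavor of the developer experience, here is a hedged, minimal sketch using the ipfs-http-client npm package (assuming a local IPFS daemon exposing its HTTP API on port 5001; the content string is arbitrary):

// ipfs-http-client talks to a running IPFS node over HTTP.
const { create } = require('ipfs-http-client');

async function storeFile() {
  const ipfs = create({ url: 'http://127.0.0.1:5001' });
  const { cid } = await ipfs.add('hello, decentralized world');
  console.log('Stored at CID:', cid.toString()); // content-addressed identifier
}

storeFile();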
Remix
Browser-Based IDE for Solidity Development

Remix is a powerful browser-based integrated development environment (IDE) that offers a seamless development experience for writing, testing, and deploying Solidity smart contracts. It provides features such as syntax highlighting, debugging tools, gas estimation, and built-in deployment to popular networks like Ethereum. Remix is an excellent choice for developers looking for a lightweight and accessible environment to start building their Web3 applications.
Embark
Full-Stack Development Framework

Embark is a full-stack development framework that simplifies the process of building decentralized applications. It supports multiple blockchain protocols, including Ethereum, IPFS, and Whisper. Embark provides a development environment, testing framework, and deployment tools, making it easier for developers to create end-to-end Web3 applications.
Drizzle
State Management Library for dApps

Drizzle is a state management library specifically designed for dApps built on the Ethereum blockchain. It integrates seamlessly with Web3.js and provides a predictable state container for managing complex application states. Drizzle simplifies the process of fetching and managing data from the blockchain, synchronizing it with the user interface, and handling updates efficiently. It is a valuable tool for developers looking to build scalable and interactive decentralized applications.
Hardhat
Ethereum Development Environment

Hardhat is a popular Ethereum development environment that offers a wide range of features for building Web3 applications. It provides a testing environment, task runner, and built-in support for smart contract development and deployment. Hardhat also integrates with other tools like Truffle, making it a flexible choice for Ethereum developers.
OpenZeppelin
Smart Contract Library

OpenZeppelin is a widely used open-source library of reusable and secure smart contracts for building decentralized applications. It provides a collection of battle-tested contracts, including token standards like ERC20 and ERC721, which developers can use as building blocks for their dApps. OpenZeppelin ensures best practices in smart contract security and saves developers time by offering pre-audited and well-tested contracts.
The Graph
Indexing and Querying Blockchain Data

The Graph is an indexing and querying protocol for Web3 applications. It allows developers to efficiently index and retrieve data from the blockchain, making it easier to build powerful and interactive dApps. With The Graph, developers can create subgraphs, which define data schemas and queries, enabling efficient data retrieval and real-time updates for their applications.
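Under the hood, querying a subgraph is just an HTTP POST carrying a GraphQL query. A minimal sketch (the endpoint and the tokens entity below are hypothetical; real subgraphs define their own schemas):

// Requires a runtime with a global fetch (e.g. Node 18+ or a browser).
const endpoint = 'https://api.thegraph.com/subgraphs/name/example/my-subgraph'; // hypothetical

async function querySubgraph() {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ tokens(first: 5) { id } }' }), // hypothetical entity
  });
  const { data } = await res.json();
  console.log(data);
}

querySubgraph();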
Ceramic Network
Decentralized Data Network

Ceramic Network is a decentralized data network that provides a framework for building censorship-resistant and interoperable Web3 applications. It offers a tamper-resistant global namespace for data, ensuring data integrity and accessibility across different applications. Ceramic Network enables developers to create data streams, manage decentralized identities, and build data-centric dApps with enhanced privacy and security.
Conclusion
Web3 development tools and frameworks provide a rich ecosystem for building decentralized applications that leverage the power of blockchain technology. From smart contract development and testing to user interface integration, decentralized storage, and data indexing, these tools streamline the development process and enable developers to build robust and user-centric Web3 applications. As the Web3 ecosystem continues to evolve, it's essential for developers to explore and utilize a diverse range of tools and frameworks to harness the full potential of the decentralized web and shape the future of the internet.
Quick Answer: How do I change the loading icon in WordPress?
How do I change the loading image in WordPress?
In order to change the pre-loader icon, you should perform the following steps:
1. Log into the WordPress admin panel with your login credentials.
2. Navigate to the Appearance -> Editor section. …
3. Add your custom CSS rule for the new pre-loader icon to the bottom of the stylesheet.
Apr 10, 2017
How do I change WordPress Preloader?
Installation
1. Upload ‘the-preloader’ folder to the ‘/wp-content/plugins/’ directory.
2. Activate the plugin through the ‘Plugins’ menu in WordPress.
3. Go to Plugins menu > Preloader.
4. Enter your background color code and your Preloader image link.
5. Choose display Preloader, default is “In The Entire Website”.
6. Open header.
How do I change menu icons in WordPress?
Upon activation, you need to visit the Appearance » Menus page. From here, you can click on any menu item in the right column to expand it. You’ll see the ‘Menu image’ and ‘Image on hover’ buttons in the settings for each item. Using these buttons, you can select or upload the menu image icon you want to use.
What is WordPress Preloader?
Do you want to add a preloader to your WordPress site? A preloader is an animation indicating the progress of a page load in the background. Preloaders assure users that the website is working on loading the page. This can help improve user experience and reduce overall bounce rate.
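To make the mechanics concrete, here is a minimal vanilla JavaScript sketch of the idea (the preloader element id is hypothetical; plugins and themes use their own markup):

// Hide the overlay once the page has fully loaded.
window.addEventListener('load', function () {
  var preloader = document.getElementById('preloader'); // hypothetical element id
  if (preloader) {
    preloader.style.display = 'none';
  }
});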
How do I add a preloader to my website?
How to Add CSS Preloader to Your Website
1. Go to the SpinKit website, choose the first spinner, and click on “Source”.
2. You can see the HTML and CSS code of the selected CSS spinner. We have already added the HTML, so just copy the CSS and paste it into your website’s CSS stylesheet.
Feb 6, 2016
What is Page Preloader?
What’s a preloader? Essentially, preloaders (also known as loaders) are what you see on the screen while the rest of the page’s content is still loading. Preloaders are often simple or complex animations that are used to keep visitors entertained while server operations finish processing.
How do I remove the loading effect in WordPress?
1. Go to Theme Settings->General.
2. Copy and paste the following code into the “Custom CSS” field: #site-loading, #site-loading-css { display: none !important; }
3. Click “Save Changes”
Jul 21, 2016
How do I disable preloader in WordPress?
Enabling or Disabling the Preloader
1. From the WordPress left menu, go to Theme Options > Global Settings > Preloader.
2. From the Preloader setting, enable or disable the site preloader.
3. Click on the Save Settings button.
How do I put icons in my navigation bar?
First, navigate to Appearance > Menus and open the menu we’re adding the icons to. Inside your menu, you’ll see your menu items. If you expand the menu item you wish to add the icon to by clicking the down arrow, you’ll see some fields appear. In the Navigation Label field, we can add our icon HTML.
How do I upload custom icons to WordPress?
Method 1: Uploading Custom Icons
This method is fairly straightforward and is the exact same as uploading media to your WordPress site. You’ll first want to click on the ‘Add or Upload Icon’ button in the ‘Image Icon’ section. Next, click on the ‘Upload Files’ tab and click the ‘Select Files’ button.
javascript sleep function
I’d been looking for a JavaScript sleep function for a long time. Unfortunately, although most programming languages such as C/C++, PHP, and Python have a sleep function, JavaScript has neither a sleep function nor a delay function that can pause execution for some time and then continue with the following code. JavaScript’s designers are proud of this. They say that if you ever feel the need for a sleep function, you are on the wrong path: you should redesign your code to eliminate the need for such a function by splitting it into several functions. This situation is analogous to Einstein’s theory of relativity versus Newton’s classical theory. Although relativity is more accurate than Newton’s theory, I would not think about the everyday world using the theory of relativity, because that would drive me crazy. By the way, I do not think JavaScript is as great as the theory of relativity.
A simple alternative to the sleep function would be the following busy-wait code:
function sleep(millis) {
  var date = new Date();
  var curDate = null;
  do { curDate = new Date(); }
  while (curDate - date < millis);
}
But that is really a silly piece of code because it would occupy 100% CPU and freeze the web page. A smarter equivalent of the sleep function is:
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
This function is actually an asynchronous function which returns (a Promise object) immediately. So how does it simulate the sleep behavior? Well, you can use the await operator before the call to the async function.
await sleep(2000);
Now this line will pause for 2000 milliseconds and then execute the following code. await, as the name implies, waits for the returned Promise object to reach the fulfilled status. So now we need to understand what the Promise object is.
When constructing a Promise object, you need to pass a function called the executor as the parameter of the constructor. The executor should have a resolve parameter and an optional reject parameter. The executor is called as soon as you new the Promise object, and when it is called, the system passes it two arguments: the system-defined resolve and reject functions. That means you can use other names for the parameters when you define the executor, like this:
function (myresolve, myreject) {
  // ... do the work ...
  if (success)
    myresolve(0);
  else
    myreject(1);
}
But when the function is called, myresolve will be bound to the system-defined resolve and myreject to the system-defined reject. The system-defined resolve function sets the status of the Promise object from the initial pending status to fulfilled. The system-defined reject function changes the status from the initial pending status to rejected. Once changed, the status stays the same forever.
Now look back at our sleep function. The parameter passed when constructing the Promise object is not an ordinary function, but a lambda (arrow function) expression. An arrow function essentially defines an anonymous function: the part before the arrow is its parameter list. You should use parentheses to enclose multiple parameters, and if the function has no parameters, you should use an empty pair of parentheses. The part after the arrow is the function body. Here we call the setTimeout function to set the Promise object to the fulfilled status after the specified time, in order for await to return.
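In other words, the arrow function is just shorthand for an anonymous executor function. A minimal sketch showing that these two definitions of sleep are equivalent:

// Verbose form: the executor is an ordinary anonymous function.
function sleepVerbose(ms) {
  return new Promise(function (resolve) {
    setTimeout(resolve, ms);
  });
}

// Arrow-function form: same behavior, shorter syntax.
const sleepArrow = (ms) => new Promise((resolve) => setTimeout(resolve, ms));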
That is not the whole story. In order to use await, you should include your code in an async function:
async function myfun() {
  dosomething;
  await sleep(5000); // sleep 5 seconds
  do something else;
}
myfun();
When you call an async function like myfun, it returns a pending Promise object, internally created by the async function, immediately when it meets the await line. (Note that the await expression itself evaluates to the resolved value of the promise following the await operator, at a later time. That promise is not the same as the promise object returned to the caller of the async function.) The current task (myfun) is put into the wait queue, and the code after the call to myfun is executed. After all the subsequent code has been executed, control enters the event loop, which picks up the waiting task (myfun) and continues executing it. In the end, the async function "returns" by setting the promise object created earlier (when executing the await line) to the resolved status. A function cannot have two return values, right?
The Promise object returned from an async function is not the same as the one that comes after the return statement. If you return a resolved Promise object, the return value of the async function is another Promise object in pending status. The returned pending Promise only gets resolved when entering the event loop, after the Promise coming after the return gets resolved. If you do not return a Promise from the async function, the return value of the async function is (or is changed to) a fulfilled Promise with its result set to the value coming after the return statement. Keep in mind that all callback functions passed to .then are called when entering the event loop. Study the following example carefully, and you’ll understand what I’m saying:
<script>
  function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms, 12));
  }

  var p2;
  async function fun1() {
    console.log("dosomething1");
    var p1 = await sleep(3000);
    console.log(p1);
    console.log("dosomething2");
    //p2 = new Promise(resolve => resolve(12));
    p2 = sleep(3000);
    p2.then(result => console.log("p2:" + result));
    console.log(p2);
    console.log("returning");
    //return p2;
    return "returnfromfun1";
  }

  var prom;
  console.log(prom = fun1());
  console.log(p2);
  console.log("hello:" + (p2 == prom));
  prom.then(result => console.log("prom:" + result));
  //p2.then(result => console.log("p2:" + result));
  console.log("hello1");
</script>
To summarize how async functions work:
• An async function will return when it meets the return statement or the first await statement.
• An async function always returns a Promise object created internally by itself (not by you). In most cases, the returned Promise object is in pending status, except when the function returns a non-promise value (or just return;), in which case the returned promise object is in resolved status with its resolved value set to the return value (or undefined if there is no return value). This is true even if you return a resolved promise object created by yourself. If the returned promise is in pending status, when does it get resolved? It gets resolved, in the event loop, when the promise object following the return gets resolved, or when the async function returns a non-promise value. Its resolved value is set to the resolved value of the promise object following the return, or to the returned non-promise value. If the promise object following the return never gets resolved, the returned promise object never gets resolved either.
• If there is an error or exception (such as using an undefined variable) while running the async function, the whole script will not stop. Instead, the code after the call to the async function continues, the promise returned from the async function enters the rejected state, and the .then handlers of the returned promise are executed.
• The function passed to Promise.then gets called only when entering the event loop, as the snippet after this list demonstrates.
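A tiny demonstration of that last point:

console.log('first');
Promise.resolve().then(() => console.log('third'));
console.log('second');
// Output order: first, second, third. The .then callback runs only after
// the current synchronous code has finished and control enters the event loop.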
If you understand how async/await works, you’ll see that if you put the sleep calls in different async functions, you may not get the expected result:
async function fun1() {
  dosomething1;
  await sleep(1000);
  dosomething2;
}

async function fun2() {
  dosomething3;
}

fun1();
fun2();
You want dosomething1, then a 1000 ms pause, then dosomething2, and dosomething3 last. But the actual order is dosomething1, then dosomething3, and dosomething2 last.
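If you do need the strictly sequential order, await fun1() before calling fun2(). A minimal sketch reusing the functions above:

async function main() {
  await fun1(); // dosomething1, pause 1000 ms, dosomething2
  fun2();       // dosomething3 now runs last
}
main();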
81.8 Million in Numbers
81.8 million in numbers, or in numeric form, is written as 81,800,000. It is also expressed as 81.8M.
So, now you know what 81.8 million looks like in numbers.
How to Write 81.8 Million in Numbers?
Explanation on How to Write 81.8 Million in Numbers
First of all, 1 million in numbers = 1,000,000.
Therefore, to find X million in numbers, we just need to multiply X by 1,000,000.
So, X million = X * 1,000,000.
Hence, 81.8 million = 81.8 * 1,000,000 = 81,800,000.
So, 81.8 million in numbers is 81,800,000.
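As a quick sanity check, the same conversion in a few lines of JavaScript (Math.round guards against floating-point noise in the multiplication):

function millionsToNumber(x) {
  return Math.round(x * 1000000);
}
console.log(millionsToNumber(81.8).toLocaleString('en-US')); // "81,800,000"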
Millions in Numbers Conversion Table
The following table contains some close variations of 81.8 million:
Millions in Words | Millions in Numbers
81.4 million | 81,400,000
81.5 million | 81,500,000
81.6 million | 81,600,000
81.7 million | 81,700,000
81.9 million | 81,900,000
82 million | 82,000,000
82.1 million | 82,100,000
82.2 million | 82,200,000
82.3 million | 82,300,000
82.4 million | 82,400,000
Some More Insights on 81.8 Million
* There are exactly 5 zeros in 81.8 million (81,800,000).
* 81.8 million has 8 digits in total.
* In normalized scientific notation, 81.8 million is 8.18 x 10^7 (equivalently, 81.8 x 10^6).
* 81.8 million can also be written in exponential notation as 8.18E+7.
* 81.8 million can also be expressed using e-notation as 81.8e6.
SpECTRE v2023.01.13
Burgers::AnalyticData::Sinusoid Class Reference
Analytic data (with an "exact" solution known) that is periodic over the interval \([0,2\pi]\).
#include <Sinusoid.hpp>
Public Types
using options = tmpl::list<>
Public Member Functions
Sinusoid (const Sinusoid &)=default
Sinusoid & operator= (const Sinusoid &)=default
Sinusoid (Sinusoid &&)=default
Sinusoid & operator= (Sinusoid &&)=default
auto get_clone () const -> std::unique_ptr< evolution::initial_data::InitialData > override
template<typename T >
Scalar< T > u (const tnsr::I< T, 1 > &x) const
tuples::TaggedTuple< Tags::U > variables (const tnsr::I< DataVector, 1 > &x, tmpl::list< Tags::U >) const
void pup (PUP::er &p) override
virtual auto get_clone () const -> std::unique_ptr< InitialData >=0
Static Public Attributes
static constexpr Options::String help
Detailed Description
Analytic data (with an "exact" solution known) that is periodic over the interval \([0,2\pi]\).
The initial data is given by:
\begin{align} u(x, 0) = \sin(x) \end{align}
At future times the analytic solution can be found by solving the transcendental equation [70]
\begin{align} \label{eq:transcendental burgers periodic} \mathcal{F}=\sin\left(x-\mathcal{F}t\right) \end{align}
on the interval \(x\in(0,\pi)\). The solution from \(x\in(\pi,2\pi)\) is given by \(\mathcal{F}(x, t)=-\mathcal{F}(2\pi-x,t)\). The transcendental equation \((\ref{eq:transcendental burgers periodic})\) can be solved with a Newton-Raphson iterative scheme. Since this can be quite sensitive to the initial guess we implement this solution as analytic data. The python code below can be used to compute the analytic solution if desired.
At time \(1\) the solution develops a discontinuity at \(x=\pi\) followed by the amplitude of the solution decaying over time.
Note
We have rescaled \(x\) and \(t\) by \(\pi\) compared to [70].
import numpy as np
from scipy.optimize import newton

# x_grid is a np.array of positions at which to evaluate the solution
def burgers_periodic(x_grid, time):
    def opt_fun(F, x, t):
        return np.sin((x - F * t)) - F

    results = []
    for i in range(len(x_grid)):
        x = x_grid[i]
        greater_than_pi = False
        if x > np.pi:
            x = x - np.pi
            x = -x
            x = x + np.pi
            greater_than_pi = True
        guess = 0.0
        if len(results) > 0:
            if results[-1] < 0.0:
                guess = -results[-1]
            else:
                guess = results[-1]
        res = newton(lambda F: opt_fun(F, x, time), x0=guess)
        if greater_than_pi:
            results.append(-res)
        else:
            results.append(res)
    return np.asarray(results)
Member Function Documentation
◆ get_clone()
auto Burgers::AnalyticData::Sinusoid::get_clone ( ) const -> std::unique_ptr< evolution::initial_data::InitialData >
override, virtual
Member Data Documentation
◆ help
constexpr Options::String Burgers::AnalyticData::Sinusoid::help
static, constexpr
Initial value:
{
"A solution that is periodic over the interval [0,2pi]. The solution "
"starts as a sinusoid: u(x,0) = sin(x) and develops a "
"discontinuity at x=pi and t=1."}
The documentation for this class was generated from the following file:
MongoDB is a document-oriented database.
Is there an easy way to map directory structure to a MongoDB schema?
I'm trying to store a directory structure, including files and their content, in MongoDB. The work is part of a syncing app and is written in Node/Mongoose. Now, I'm new to Mongo, and it's late here ...
jaredpalmer/the-platform
The Platform
Web APIs turned into React Hooks and Suspense-friendly React components. #useThePlatform
Install
Note: React 16.8+ is required for Hooks.
With npm
npm i the-platform --save
Or with yarn
yarn add the-platform
Examples
API
Hooks
useDeviceMotion()
Detect and retrieve current device motion.
Returns
DeviceMotionEvent
Example
import { useDeviceMotion } from 'the-platform';

const Example = () => {
  const { acceleration, rotationRate, interval } = useDeviceMotion();
  // ...
};
useDeviceOrientation()
Detect and retrieve current device orientation.
Returns
DeviceOrientationEvent
Example
import { useDeviceOrientation } from 'the-platform';

const Example = () => {
  const { alpha, beta, gamma, absolute } = useDeviceOrientation();
  // ...
};
useGeoPosition()
Retrieve Geo position from the browser. This will throw a promise (must use with Suspense).
Arguments
PositionOptions
Returns
Position
Example
import { useGeoPosition } from 'the-platform';

const Example = () => {
  const {
    coords: { latitude, longitude },
  } = useGeoPosition();
  // ...
};
useNetworkStatus()
Retrieve network status from the browser.
Returns
Object containing:
• isOnline: boolean: true if the browser has network access. false otherwise.
• offlineAt?: Date: Date when network connection was lost.
Example
import { useNetworkStatus } from 'the-platform';

const Example = () => {
  const { isOnline, offlineAt } = useNetworkStatus();
  // ...
};
useMedia()
Arguments
query: string | object: media query string or object (parsed by json2mq). defaultMatches: boolean: a boolean providing a default value for matches
Returns
match: boolean: true if the media query matches, false otherwise.
Example
import { useMedia } from 'the-platform';

const Example = () => {
  const small = useMedia('(min-width: 400px)');
  const medium = useMedia({ minWidth: 800 });
  // ...
};
useScript()
This will throw a promise (must use with Suspense).
Arguments
Object containing:
• src: string: The script's URI.
import { useScript } from 'the-platform';

const Example = () => {
  const _unused = useScript({ src: 'bundle.js' });
  // ...
};
useStylesheet()
This will throw a promise (must use with Suspense).
Arguments
Object containing:
• href: string: The stylesheet's URI.
• media?: string: Intended destination media for style information.
import { useStylesheet } from 'the-platform';

const Example = () => {
  const _unused = useStylesheet({ href: 'normalize.css' });
  // ...
};
useWindowScrollPosition()
Returns
Object containing:
• x: number: Horizontal scroll in pixels (window.pageXOffset).
• y: number: Vertical scroll in pixels (window.pageYOffset).
Example
import { useWindowScrollPosition } from 'the-platform';

const Example = () => {
  const { x, y } = useWindowScrollPosition();
  // ...
};
useWindowSize()
Returns
Object containing:
• width: Width of browser viewport (window.innerWidth)
• height: Height of browser viewport (window.innerHeight)
Example
import { useWindowSize } from 'the-platform';

const Example = () => {
  const { width, height } = useWindowSize();
  // ...
};
Components
<Img>
Props
• src: string
• anything else you can pass to an <img> tag
import React from 'react';
import { Img } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Hello</h1>
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Img src="https://source.unsplash.com/random/4000x2000" />
      </React.Suspense>
    </div>
  );
}

export default App;
<Script>
Props
• src: string
• children?: () => React.ReactNode - This render prop will only execute after the script has loaded.
• anything else you can pass to a <script> tag
import React from 'react';
import { Script } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Load Stripe.js Async</h1>
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Script src="https://js.stripe.com/v3/" async>
          {() => console.log(window.Stripe) || null}
        </Script>
      </React.Suspense>
    </div>
  );
}

export default App;
<Video>
Props
• src: string
• anything else you can pass to a <video> tag
import React from 'react';
import { Video } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Ken Wheeler on a Scooter</h1>
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Video
          src="https://video.twimg.com/ext_tw_video/1029780437437014016/pu/vid/360x640/QLNTqYaYtkx9AbeH.mp4?tag=5"
          preload="auto"
          autoPlay
        />
      </React.Suspense>
    </div>
  );
}

export default App;
<Audio>
Props
• src: string
• anything else you can pass to a <audio> tag
import React from 'react';
import { Audio } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Meavy Boy - Compassion</h1>
      {/* source: http://freemusicarchive.org/music/Meavy_Boy/EP_71_to_20/Compassion */}
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Audio src="https://file-dnzavydoqu.now.sh/" preload="auto" autoPlay />
      </React.Suspense>
    </div>
  );
}

export default App;
<Preload>
Preload a resource with <link rel="preload">. For more information check out MDN or the Google Developer Blog.
Props
• href: string
• as: string - resource type
import React from 'react';
import { Preload, Script } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Preload</h1>
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Preload href="https://js.stripe.com/v3/" rel="preload" as="script" />
        <Script src="https://js.stripe.com/v3/" async />
      </React.Suspense>
    </div>
  );
}

export default App;
<Stylesheet>
Lazy load a stylesheet.
Props
• href: string
import React from 'react';
import { Stylesheet } from 'the-platform';

function App() {
  return (
    <div>
      <h1>Styles</h1>
      <React.Suspense maxDuration={300} fallback={'loading...'}>
        <Stylesheet href="style.css" />
      </React.Suspense>
    </div>
  );
}

export default App;
Authors
Inspiration
MIT License
In the preface to his very influential books Automata, Languages and Machines (Volumes A, B), Samuel Eilenberg tantalizingly promised Volumes C and D dealing with "a hierarchy (called the rational hierarchy) of the nonrational phenomena... using rational relations as a tool for comparison. Rational sets are at the bottom of this hierarchy. Moving upward one encounters 'algebraic phenomena,'" which lead to "to the context-free grammars and context-free languages of Chomsky, and to several related topics."
But Eilenberg never published volume C. He did leave preliminary handwritten notes for the first few chapters (http://www-igm.univ-mlv.fr/~berstel/EilenbergVolumeC.html) complete with scratchouts, question marks, side notes and gaps. But they do not reveal much beyond the beginnings of the well-known power series approach to grammars.
So, my actual question -- does anyone know of work along the same lines to possibly reconstruct what Eilenberg had in mind? If not, what material is likely closest to his ideas?
The site http://x-machines.net/ is about x-machines, one of Eilenberg's key innovations, but it deals mainly with applications of x-machines rather than further developing the theory as Eilenberg seemed to promise.
Also, does anyone know why Eilenberg stopped before making much progress on Volume C? This was the late 1970s, and he lived until 1998, though he did not appear to publish any mathematics after Volume B. Yet he seemed to have the math for Volumes C and D largely done, at least in his mind.
Contents of /trunk/SConstruct
Revision 2026
Tue Nov 11 01:56:54 2008 UTC (14 years, 2 months ago) by jfenwick
File size: 27352 byte(s)
I have changed the default scons options for gcc to be much more picky about warnings.
By default, warnings will be treated as errors, so a program with warnings won't compile.
If you need to turn this off, usewarnings=no will work.
The previous usepedantic option is no longer turned on by default.
This should have no effect on any of the other compilers.
If anyone wants to jump in and supply sensible values for the fatalwarnings variable for their favourite compiler, go ahead.
########################################################
#
# Copyright (c) 2003-2008 by University of Queensland
# Earth Systems Science Computational Center (ESSCC)
# http://www.uq.edu.au/esscc
#
# Primary Business: Queensland, Australia
# Licensed under the Open Software License version 3.0
# http://www.opensource.org/licenses/osl-3.0.php
#
########################################################


EnsureSConsVersion(0,96,91)
EnsurePythonVersion(2,3)

import sys, os, re, socket

# Add our extensions
if os.path.isdir('scons'): sys.path.append('scons')
import scons_extensions

# Use /usr/lib64 if available, else /usr/lib
usr_lib = '/usr/lib'
if os.path.isfile('/usr/lib64/libc.so'): usr_lib = '/usr/lib64'

# The string python2.4 or python2.5
python_version = 'python%s.%s' % (sys.version_info[0], sys.version_info[1])

# MS Windows support, many thanks to PH
IS_WINDOWS_PLATFORM = (os.name== "nt")

prefix = ARGUMENTS.get('prefix', Dir('#.').abspath)

# Read configuration options from file scons/<hostname>_options.py
hostname = re.sub("[^0-9a-zA-Z]", "_", socket.gethostname().split('.')[0])
tmp = os.path.join("scons",hostname+"_options.py")
options_file = ARGUMENTS.get('options_file', tmp)
if not os.path.isfile(options_file):
  options_file = False
  print "Options file not found (expected '%s')" % tmp
else:
  print "Options file is", options_file

# Load options file and command-line arguments
opts = Options(options_file, ARGUMENTS)

############ Load build options ################################

opts.AddOptions(
  # Where to install esys stuff
  ('prefix', 'where everything will be installed', Dir('#.').abspath),
  ('incinstall', 'where the esys headers will be installed', os.path.join(Dir('#.').abspath,'include')),
  ('bininstall', 'where the esys binaries will be installed', os.path.join(prefix,'bin')),
  ('libinstall', 'where the esys libraries will be installed', os.path.join(prefix,'lib')),
  ('pyinstall', 'where the esys python modules will be installed', os.path.join(prefix,'esys')),
  # Compilation options
  BoolOption('dodebug', 'For backwards compatibility', 'no'),
  BoolOption('usedebug', 'Do you want a debug build?', 'no'),
  BoolOption('usevtk', 'Do you want to use VTK?', 'yes'),
  ('options_file', 'File of paths/options. Default: scons/<hostname>_options.py', options_file),
  ('win_cc_name', 'windows C compiler name if needed', 'msvc'),
  # The strings -DDEFAULT_ get replaced by scons/<hostname>_options.py or by defaults below
  ('cc_flags', 'C compiler flags to use', '-DEFAULT_1'),
  ('cc_optim', 'C compiler optimization flags to use', '-DEFAULT_2'),
  ('cc_debug', 'C compiler debug flags to use', '-DEFAULT_3'),
  ('omp_optim', 'OpenMP compiler flags to use (Release build)', '-DEFAULT_4'),
  ('omp_debug', 'OpenMP compiler flags to use (Debug build)', '-DEFAULT_5'),
  ('omp_libs', 'OpenMP compiler libraries to link with', '-DEFAULT_6'),
  ('cc_extra', 'Extra C/C++ flags', ''),
  ('ld_extra', 'Extra linker flags', ''),
  ('sys_libs', 'System libraries to link with', []),
  ('ar_flags', 'Static library archiver flags to use', ''),
  BoolOption('useopenmp', 'Compile parallel version using OpenMP', 'yes'),
  BoolOption('usepedantic', 'Compile with -pedantic if using gcc', 'no'),
  BoolOption('usewarnings','Compile with warnings as errors if using gcc','yes'),
  # Python
  ('python_path', 'Path to Python includes', '/usr/include/'+python_version),
  ('python_lib_path', 'Path to Python libs', usr_lib),
  ('python_libs', 'Python libraries to link with', [python_version]),
  ('python_cmd', 'Python command', 'python'),
  # Boost
  ('boost_path', 'Path to Boost includes', '/usr/include'),
  ('boost_lib_path', 'Path to Boost libs', usr_lib),
  ('boost_libs', 'Boost libraries to link with', ['boost_python']),
  # NetCDF
  BoolOption('usenetcdf', 'switch on/off the usage of netCDF', 'yes'),
  ('netCDF_path', 'Path to netCDF includes', '/usr/include'),
  ('netCDF_lib_path', 'Path to netCDF libs', usr_lib),
  ('netCDF_libs', 'netCDF C++ libraries to link with', ['netcdf_c++', 'netcdf']),
  # MPI
  BoolOption('useMPI', 'For backwards compatibility', 'no'),
  BoolOption('usempi', 'Compile parallel version using MPI', 'no'),
  ('MPICH_IGNORE_CXX_SEEK', 'name of macro to ignore MPI settings of C++ SEEK macro (for MPICH)' , 'MPICH_IGNORE_CXX_SEEK'),
  ('mpi_path', 'Path to MPI includes', '/usr/include'),
  ('mpi_run', 'mpirun name' , 'mpiexec -np 1'),
  ('mpi_lib_path', 'Path to MPI libs (needs to be added to the LD_LIBRARY_PATH)', usr_lib),
  ('mpi_libs', 'MPI libraries to link with (needs to be shared!)', ['mpich' , 'pthread', 'rt']),
  # ParMETIS
  BoolOption('useparmetis', 'Compile parallel version using ParMETIS', 'yes'),
  ('parmetis_path', 'Path to ParMETIS includes', '/usr/include'),
  ('parmetis_lib_path', 'Path to ParMETIS library', usr_lib),
  ('parmetis_libs', 'ParMETIS library to link with', ['parmetis', 'metis']),
  # PAPI
  BoolOption('usepapi', 'switch on/off the usage of PAPI', 'no'),
  ('papi_path', 'Path to PAPI includes', '/usr/include'),
  ('papi_lib_path', 'Path to PAPI libs', usr_lib),
  ('papi_libs', 'PAPI libraries to link with', ['papi']),
  BoolOption('papi_instrument_solver', 'use PAPI in Solver.c to instrument each iteration of the solver', False),
  # MKL
  BoolOption('usemkl', 'switch on/off the usage of MKL', 'no'),
  ('mkl_path', 'Path to MKL includes', '/sw/sdev/cmkl/10.0.2.18/include'),
  ('mkl_lib_path', 'Path to MKL libs', '/sw/sdev/cmkl/10.0.2.18/lib/em64t'),
  ('mkl_libs', 'MKL libraries to link with', ['mkl_solver', 'mkl_em64t', 'guide', 'pthread']),
  # UMFPACK
  BoolOption('useumfpack', 'switch on/off the usage of UMFPACK', 'no'),
  ('ufc_path', 'Path to UFconfig includes', '/usr/include/suitesparse'),
  ('umf_path', 'Path to UMFPACK includes', '/usr/include/suitesparse'),
  ('umf_lib_path', 'Path to UMFPACK libs', usr_lib),
  ('umf_libs', 'UMFPACK libraries to link with', ['umfpack']),
  # AMD (used by UMFPACK)
  ('amd_path', 'Path to AMD includes', '/usr/include/suitesparse'),
  ('amd_lib_path', 'Path to AMD libs', usr_lib),
  ('amd_libs', 'AMD libraries to link with', ['amd']),
  # BLAS (used by UMFPACK)
  ('blas_path', 'Path to BLAS includes', '/usr/include/suitesparse'),
  ('blas_lib_path', 'Path to BLAS libs', usr_lib),
  ('blas_libs', 'BLAS libraries to link with', ['blas']),
  # An option for specifying the compiler tools set (see windows branch).
  ('tools_names', 'allow control over the tools in the env setup', ['intelc'])
)

############ Specify which compilers to use ####################

# intelc uses regular expressions improperly and emits a warning about
# failing to find the compilers. This warning can be safely ignored.

if IS_WINDOWS_PLATFORM:
  env = Environment(options = opts)
  env = Environment(tools = ['default'] + env['tools_names'],
                    options = opts)
else:
  if socket.gethostname().split('.')[0] == 'service0':
    env = Environment(tools = ['default', 'intelc'], options = opts)
  elif os.uname()[4]=='ia64':
    env = Environment(tools = ['default', 'intelc'], options = opts)
    if env['CXX'] == 'icpc':
      env['LINK'] = env['CXX'] # version >=9 of intel c++ compiler requires use of icpc to link in C++ runtimes (icc does not)
  else:
    env = Environment(tools = ['default'], options = opts)
Help(opts.GenerateHelpText(env))

############ Fill in compiler options if not set above #########

# Backwards compatibility: allow dodebug=yes and useMPI=yes
if env['dodebug']: env['usedebug'] = 1
if env['useMPI']: env['usempi'] = 1

# Default compiler options (override allowed in hostname_options.py, but should not be necessary)
# For both C and C++ you get: cc_flags and either the optim flags or debug flags

if env["CC"] == "icc":
  # Intel compilers
  cc_flags = "-fPIC -ansi -wd161 -w1 -vec-report0 -DBLOCKTIMER -DCORE_ID1"
  cc_optim = "-O3 -ftz -IPF_ftlacc- -IPF_fma -fno-alias"
  cc_debug = "-g -O0 -DDOASSERT -DDOPROF -DBOUNDS_CHECK"
  omp_optim = "-openmp -openmp_report0"
  omp_debug = "-openmp -openmp_report0"
  omp_libs = ['guide', 'pthread']
  pedantic = ""
  fatalwarning = "" # Switch to turn warnings into errors
elif env["CC"] == "gcc":
  # GNU C on any system
  cc_flags = "-Wall -fPIC -ansi -ffast-math -Wno-unknown-pragmas -DBLOCKTIMER -isystem /usr/include/boost/ -isystem /usr/include/python2.5/ -Wno-sign-compare"
  cc_optim = "-O3"
  cc_debug = "-g -O0 -DDOASSERT -DDOPROF -DBOUNDS_CHECK"
  omp_optim = ""
  omp_debug = ""
  omp_libs = []
  pedantic = "-pedantic-errors -Wno-long-long"
  fatalwarning = "-Werror"
elif env["CC"] == "cl":
  # Microsoft Visual C on Windows
  cc_flags = "/FD /EHsc /GR /wd4068 -D_USE_MATH_DEFINES -DDLL_NETCDF"
  cc_optim = "/O2 /Op /MT /W3"
  cc_debug = "/Od /RTC1 /MTd /ZI -DBOUNDS_CHECK"
  omp_optim = ""
  omp_debug = ""
  omp_libs = []
  pedantic = ""
  fatalwarning = ""
elif env["CC"] == "icl":
  # intel C on Windows, see windows_msvc71_options.py for a start
  pedantic = ""
  fatalwarning = ""

# If not specified in hostname_options.py then set them here
if env["cc_flags"] == "-DEFAULT_1": env['cc_flags'] = cc_flags
if env["cc_optim"] == "-DEFAULT_2": env['cc_optim'] = cc_optim
if env["cc_debug"] == "-DEFAULT_3": env['cc_debug'] = cc_debug
if env["omp_optim"] == "-DEFAULT_4": env['omp_optim'] = omp_optim
if env["omp_debug"] == "-DEFAULT_5": env['omp_debug'] = omp_debug
if env["omp_libs"] == "-DEFAULT_6": env['omp_libs'] = omp_libs

# OpenMP is disabled if useopenmp=no or both variables omp_optim and omp_debug are empty
if not env["useopenmp"]:
  env['omp_optim'] = ""
  env['omp_debug'] = ""
  env['omp_libs'] = []

if env['omp_optim'] == "" and env['omp_debug'] == "": env["useopenmp"] = 0

############ Copy environment variables into scons env #########

try: env['ENV']['OMP_NUM_THREADS'] = os.environ['OMP_NUM_THREADS']
except KeyError: env['ENV']['OMP_NUM_THREADS'] = 1

try: env['ENV']['PATH'] = os.environ['PATH']
except KeyError: pass

try: env['ENV']['PYTHONPATH'] = os.environ['PYTHONPATH']
except KeyError: pass

try: env['ENV']['C_INCLUDE_PATH'] = os.environ['C_INCLUDE_PATH']
except KeyError: pass

try: env['ENV']['CPLUS_INCLUDE_PATH'] = os.environ['CPLUS_INCLUDE_PATH']
except KeyError: pass

try: env['ENV']['LD_LIBRARY_PATH'] = os.environ['LD_LIBRARY_PATH']
except KeyError: pass

try: env['ENV']['LIBRARY_PATH'] = os.environ['LIBRARY_PATH']
except KeyError: pass

try: env['ENV']['DISPLAY'] = os.environ['DISPLAY']
except KeyError: pass

try: env['ENV']['XAUTHORITY'] = os.environ['XAUTHORITY']
except KeyError: pass

try: env['ENV']['HOME'] = os.environ['HOME']
except KeyError: pass

# Configure for test suite
env.PrependENVPath('PYTHONPATH', prefix)
env.PrependENVPath('LD_LIBRARY_PATH', env['libinstall'])

env['ENV']['ESCRIPT_ROOT'] = prefix

############ Set up paths for Configure() ######################

# Make a copy of an environment
# Use env.Clone if available, but fall back on env.Copy for older version of scons
def clone_env(env):
  if 'Clone' in dir(env): return env.Clone() # scons-0.98
  else: return env.Copy() # scons-0.96

# Add cc option -I<Escript>/trunk/include
env.Append(CPPPATH = [Dir('include')])

# Add cc option -L<Escript>/trunk/lib
env.Append(LIBPATH = [Dir(env['libinstall'])])

env.Append(CPPDEFINES = ['ESCRIPT_EXPORTS', 'FINLEY_EXPORTS'])

if env['cc_extra'] != '': env.Append(CCFLAGS = env['cc_extra'])
if env['ld_extra'] != '': env.Append(LINKFLAGS = env['ld_extra'])

if env['usepedantic']: env.Append(CCFLAGS = pedantic)

# MS Windows
if IS_WINDOWS_PLATFORM:
  env.PrependENVPath('PATH', [env['boost_lib_path']])
  env.PrependENVPath('PATH', [env['libinstall']])
  if env['usenetcdf']:
    env.PrependENVPath('PATH', [env['netCDF_lib_path']])

env.Append(ARFLAGS = env['ar_flags'])

# Get the global Subversion revision number for getVersion() method
try:
  global_revision = os.popen("svnversion -n .").read()
  global_revision = re.sub(":.*", "", global_revision)
  global_revision = re.sub("[^0-9]", "", global_revision)
except:
  global_revision="-1"
if global_revision == "": global_revision="-2"
env.Append(CPPDEFINES = ["SVN_VERSION="+global_revision])

############ numarray (required) ###############################

try:
  from numarray import identity
except ImportError:
  print "Cannot import numarray, you need to set your PYTHONPATH"
  sys.exit(1)
############ C compiler (required) #############################

# Create a Configure() environment for checking existence of required libraries and headers
conf = Configure(clone_env(env))

# Test that the compiler is working
if not conf.CheckFunc('printf'):
  print "Cannot run C compiler '%s' (or libc is missing)" % (env['CC'])
  sys.exit(1)

if conf.CheckFunc('gethostname'):
  conf.env.Append(CPPDEFINES = ['HAVE_GETHOSTNAME'])

############ python libraries (required) #######################

conf.env.AppendUnique(CPPPATH = [env['python_path']])
conf.env.AppendUnique(LIBPATH = [env['python_lib_path']])
conf.env.AppendUnique(LIBS = [env['python_libs']])

conf.env.PrependENVPath('LD_LIBRARY_PATH', env['python_lib_path']) # The wrapper script needs to find these libs

if not conf.CheckCHeader('Python.h'):
  print "Cannot find python include files (tried 'Python.h' in directory %s)" % (env['python_path'])
  sys.exit(1)
if not conf.CheckFunc('Py_Main'):
  print "Cannot find python library method Py_Main (tried lib %s in directory %s)" % (env['python_libs'], env['python_lib_path'])
  sys.exit(1)

############ boost (required) ##################################

conf.env.AppendUnique(CPPPATH = [env['boost_path']])
conf.env.AppendUnique(LIBPATH = [env['boost_lib_path']])
conf.env.AppendUnique(LIBS = [env['boost_libs']])

conf.env.PrependENVPath('LD_LIBRARY_PATH', env['boost_lib_path']) # The wrapper script needs to find these libs

if not conf.CheckCXXHeader('boost/python.hpp'):
  print "Cannot find boost include files (tried boost/python.hpp in directory %s)" % (env['boost_path'])
  sys.exit(1)
if not conf.CheckFunc('PyObject_SetAttr'):
  print "Cannot find boost library method PyObject_SetAttr (tried method PyObject_SetAttr in library %s in directory %s)" % (env['boost_libs'], env['boost_lib_path'])
  sys.exit(1)

# Commit changes to environment
env = conf.Finish()

############ VTK (optional) ####################################

if env['usevtk']:
  try:
    import vtk
    env['usevtk'] = 1
  except ImportError:
    env['usevtk'] = 0

# Add VTK to environment env if it was found
if env['usevtk']:
  env.Append(CPPDEFINES = ['USE_VTK'])

############ NetCDF (optional) #################################

conf = Configure(clone_env(env))

if env['usenetcdf']:
  conf.env.AppendUnique(CPPPATH = [env['netCDF_path']])
  conf.env.AppendUnique(LIBPATH = [env['netCDF_lib_path']])
  conf.env.AppendUnique(LIBS = [env['netCDF_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['netCDF_lib_path']) # The wrapper script needs to find these libs

if env['usenetcdf'] and not conf.CheckCHeader('netcdf.h'): env['usenetcdf'] = 0
if env['usenetcdf'] and not conf.CheckFunc('nc_open'): env['usenetcdf'] = 0

# Add NetCDF to environment env if it was found
if env['usenetcdf']:
  env = conf.Finish()
  env.Append(CPPDEFINES = ['USE_NETCDF'])
else:
  conf.Finish()
############ PAPI (optional) ###################################

# Start a new configure environment that reflects what we've already found
conf = Configure(clone_env(env))

if env['usepapi']:
  conf.env.AppendUnique(CPPPATH = [env['papi_path']])
  conf.env.AppendUnique(LIBPATH = [env['papi_lib_path']])
  conf.env.AppendUnique(LIBS = [env['papi_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['papi_lib_path']) # The wrapper script needs to find these libs

if env['usepapi'] and not conf.CheckCHeader('papi.h'): env['usepapi'] = 0
if env['usepapi'] and not conf.CheckFunc('PAPI_start_counters'): env['usepapi'] = 0

# Add PAPI to environment env if it was found
if env['usepapi']:
  env = conf.Finish()
  env.Append(CPPDEFINES = ['BLOCKPAPI'])
else:
  conf.Finish()

############ MKL (optional) ####################################

# Start a new configure environment that reflects what we've already found
conf = Configure(clone_env(env))

if env['usemkl']:
  conf.env.AppendUnique(CPPPATH = [env['mkl_path']])
  conf.env.AppendUnique(LIBPATH = [env['mkl_lib_path']])
  conf.env.AppendUnique(LIBS = [env['mkl_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['mkl_lib_path']) # The wrapper script needs to find these libs

if env['usemkl'] and not conf.CheckCHeader('mkl_solver.h'): env['usemkl'] = 0
if env['usemkl'] and not conf.CheckFunc('pardiso_'): env['usemkl'] = 0

# Add MKL to environment env if it was found
if env['usemkl']:
  env = conf.Finish()
  env.Append(CPPDEFINES = ['MKL'])
else:
  conf.Finish()

############ UMFPACK (optional) ################################

# Start a new configure environment that reflects what we've already found
conf = Configure(clone_env(env))

if env['useumfpack']:
  conf.env.AppendUnique(CPPPATH = [env['ufc_path']])
  conf.env.AppendUnique(CPPPATH = [env['umf_path']])
  conf.env.AppendUnique(LIBPATH = [env['umf_lib_path']])
  conf.env.AppendUnique(LIBS = [env['umf_libs']])
  conf.env.AppendUnique(CPPPATH = [env['amd_path']])
  conf.env.AppendUnique(LIBPATH = [env['amd_lib_path']])
  conf.env.AppendUnique(LIBS = [env['amd_libs']])
  conf.env.AppendUnique(CPPPATH = [env['blas_path']])
  conf.env.AppendUnique(LIBPATH = [env['blas_lib_path']])
  conf.env.AppendUnique(LIBS = [env['blas_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['umf_lib_path']) # The wrapper script needs to find these libs
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['amd_lib_path']) # The wrapper script needs to find these libs
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['blas_lib_path']) # The wrapper script needs to find these libs

if env['useumfpack'] and not conf.CheckCHeader('umfpack.h'): env['useumfpack'] = 0
if env['useumfpack'] and not conf.CheckFunc('umfpack_di_symbolic'): env['useumfpack'] = 0

# Add UMFPACK to environment env if it was found
if env['useumfpack']:
  env = conf.Finish()
  env.Append(CPPDEFINES = ['UMFPACK'])
else:
  conf.Finish()

############ Add the compiler flags ############################

# Enable debug by choosing either cc_debug or cc_optim
if env['usedebug']:
  env.Append(CCFLAGS = env['cc_debug'])
  env.Append(CCFLAGS = env['omp_debug'])
else:
  env.Append(CCFLAGS = env['cc_optim'])
  env.Append(CCFLAGS = env['omp_optim'])

# Always use cc_flags
env.Append(CCFLAGS = env['cc_flags'])
env.Append(LIBS = [env['omp_libs']])
############ MPI (optional) ####################################

# Create a modified environment for MPI programs (identical to env if usempi=no)
env_mpi = clone_env(env)

# Start a new configure environment that reflects what we've already found
conf = Configure(clone_env(env_mpi))

if env_mpi['usempi']:
  conf.env.AppendUnique(CPPPATH = [env_mpi['mpi_path']])
  conf.env.AppendUnique(LIBPATH = [env_mpi['mpi_lib_path']])
  conf.env.AppendUnique(LIBS = [env_mpi['mpi_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['mpi_lib_path']) # The wrapper script needs to find these libs

if env_mpi['usempi'] and not conf.CheckCHeader('mpi.h'): env_mpi['usempi'] = 0
if env_mpi['usempi'] and not conf.CheckFunc('MPI_Init'): env_mpi['usempi'] = 0

# Add MPI to environment env_mpi if it was found
if env_mpi['usempi']:
  env_mpi = conf.Finish()
  env_mpi.Append(CPPDEFINES = ['PASO_MPI', 'MPI_NO_CPPBIND', env_mpi['MPICH_IGNORE_CXX_SEEK']])
else:
  conf.Finish()

env['usempi'] = env_mpi['usempi']

############ ParMETIS (optional) ###############################

# Start a new configure environment that reflects what we've already found
conf = Configure(clone_env(env_mpi))

if not env_mpi['usempi']: env_mpi['useparmetis'] = 0

if env_mpi['useparmetis']:
  conf.env.AppendUnique(CPPPATH = [env_mpi['parmetis_path']])
  conf.env.AppendUnique(LIBPATH = [env_mpi['parmetis_lib_path']])
  conf.env.AppendUnique(LIBS = [env_mpi['parmetis_libs']])
  conf.env.PrependENVPath('LD_LIBRARY_PATH', env['parmetis_lib_path']) # The wrapper script needs to find these libs

if env_mpi['useparmetis'] and not conf.CheckCHeader('parmetis.h'): env_mpi['useparmetis'] = 0
if env_mpi['useparmetis'] and not conf.CheckFunc('ParMETIS_V3_PartGeomKway'): env_mpi['useparmetis'] = 0

# Add ParMETIS to environment env_mpi if it was found
if env_mpi['useparmetis']:
  env_mpi = conf.Finish()
  env_mpi.Append(CPPDEFINES = ['USE_PARMETIS'])
else:
  conf.Finish()

env['useparmetis'] = env_mpi['useparmetis']
############ Now we switch on Warnings as errors ###############

#this needs to be done after configuration because the scons test files have warnings in them

if ((fatalwarning != "") and (env['usewarnings'])):
  env.Append(CCFLAGS = fatalwarning)
  env_mpi.Append(CCFLAGS = fatalwarning)

############ Summarize our environment #########################

print ""
print "Summary of configuration (see ./config.log for information)"
print " Using python libraries"
print " Using numarray"
print " Using boost"
if env['usenetcdf']: print " Using NetCDF"
else: print " Not using NetCDF"
if env['usevtk']: print " Using VTK"
else: print " Not using VTK"
if env['usemkl']: print " Using MKL"
else: print " Not using MKL"
if env['useumfpack']: print " Using UMFPACK"
else: print " Not using UMFPACK"
if env['useopenmp']: print " Using OpenMP"
else: print " Not using OpenMP"
if env['usempi']: print " Using MPI"
else: print " Not using MPI"
if env['useparmetis']: print " Using ParMETIS"
else: print " Not using ParMETIS (requires MPI)"
if env['usepapi']: print " Using PAPI"
else: print " Not using PAPI"
if env['usedebug']: print " Compiling for debug"
else: print " Not compiling for debug"
print " Installing in", prefix
if ((fatalwarning != "") and (env['usewarnings'])): print " Treating warnings as errors"
else: print " Not treating warnings as errors"
print ""

############ Delete option-dependent files #####################

Execute(Delete(env['libinstall'] + "/Compiled.with.debug"))
Execute(Delete(env['libinstall'] + "/Compiled.with.mpi"))
Execute(Delete(env['libinstall'] + "/Compiled.with.openmp"))
if not env['usempi']: Execute(Delete(env['libinstall'] + "/pythonMPI"))

############ Add some custom builders ##########################

py_builder = Builder(action = scons_extensions.build_py, suffix = '.pyc', src_suffix = '.py', single_source=True)
env.Append(BUILDERS = {'PyCompile' : py_builder});

runUnitTest_builder = Builder(action = scons_extensions.runUnitTest, suffix = '.passed', src_suffix=env['PROGSUFFIX'], single_source=True)
env.Append(BUILDERS = {'RunUnitTest' : runUnitTest_builder});
runPyUnitTest_builder = Builder(action = scons_extensions.runPyUnitTest, suffix = '.passed', src_suffix='.py', single_source=True)
570 env.Append(BUILDERS = {'RunPyUnitTest' : runPyUnitTest_builder});
571
############ Build the subdirectories ##########################

Export(["env", "env_mpi", "clone_env"])

env.SConscript(dirs = ['tools/CppUnitTest/src'], build_dir='build/$PLATFORM/tools/CppUnitTest', duplicate=0)
env.SConscript(dirs = ['paso/src'], build_dir='build/$PLATFORM/paso', duplicate=0)
env.SConscript(dirs = ['escript/src'], build_dir='build/$PLATFORM/escript', duplicate=0)
env.SConscript(dirs = ['esysUtils/src'], build_dir='build/$PLATFORM/esysUtils', duplicate=0)
env.SConscript(dirs = ['finley/src'], build_dir='build/$PLATFORM/finley', duplicate=0)
env.SConscript(dirs = ['modellib/py_src'], build_dir='build/$PLATFORM/modellib', duplicate=0)
env.SConscript(dirs = ['doc'], build_dir='build/$PLATFORM/doc', duplicate=0)
env.SConscript(dirs = ['pyvisi/py_src'], build_dir='build/$PLATFORM/pyvisi', duplicate=0)
env.SConscript(dirs = ['pycad/py_src'], build_dir='build/$PLATFORM/pycad', duplicate=0)
env.SConscript(dirs = ['pythonMPI/src'], build_dir='build/$PLATFORM/pythonMPI', duplicate=0)
env.SConscript(dirs = ['scripts'], build_dir='build/$PLATFORM/scripts', duplicate=0)

############ Remember what optimizations we used ###############

remember_list = []

if env['usedebug']:
  remember_list += env.Command(env['libinstall'] + "/Compiled.with.debug", None, Touch('$TARGET'))

if env['usempi']:
  remember_list += env.Command(env['libinstall'] + "/Compiled.with.mpi", None, Touch('$TARGET'))

if env['omp_optim'] != '':
  remember_list += env.Command(env['libinstall'] + "/Compiled.with.openmp", None, Touch('$TARGET'))

env.Alias('remember_options', remember_list)

############ Targets to build and install libraries ############

target_init = env.Command(env['pyinstall']+'/__init__.py', None, Touch('$TARGET'))
env.Alias('target_init', [target_init])

# The headers have to be installed prior to build in order to satisfy #include <paso/Common.h>
env.Alias('build_esysUtils', ['target_install_esysUtils_headers', 'target_esysUtils_a'])
env.Alias('install_esysUtils', ['build_esysUtils', 'target_install_esysUtils_a'])

env.Alias('build_paso', ['target_install_paso_headers', 'target_paso_a'])
env.Alias('install_paso', ['build_paso', 'target_install_paso_a'])

env.Alias('build_escript', ['target_install_escript_headers', 'target_escript_so', 'target_escriptcpp_so'])
env.Alias('install_escript', ['build_escript', 'target_install_escript_so', 'target_install_escriptcpp_so', 'target_install_escript_py'])

env.Alias('build_finley', ['target_install_finley_headers', 'target_finley_so', 'target_finleycpp_so'])
env.Alias('install_finley', ['build_finley', 'target_install_finley_so', 'target_install_finleycpp_so', 'target_install_finley_py'])

# Now gather all the above into a couple easy targets: build_all and install_all
build_all_list = []
build_all_list += ['build_esysUtils']
build_all_list += ['build_paso']
build_all_list += ['build_escript']
build_all_list += ['build_finley']
if env['usempi']: build_all_list += ['target_pythonMPI_exe']
if not IS_WINDOWS_PLATFORM: build_all_list += ['target_finley_wrapper']
env.Alias('build_all', build_all_list)

install_all_list = []
install_all_list += ['target_init']
install_all_list += ['install_esysUtils']
install_all_list += ['install_paso']
install_all_list += ['install_escript']
install_all_list += ['install_finley']
install_all_list += ['target_install_pyvisi_py']
install_all_list += ['target_install_modellib_py']
install_all_list += ['target_install_pycad_py']
if env['usempi']: install_all_list += ['target_install_pythonMPI_exe']
if not IS_WINDOWS_PLATFORM: install_all_list += ['target_install_finley_wrapper']
install_all_list += ['remember_options']
env.Alias('install_all', install_all_list)

# Default target is install
env.Default('install_all')

############ Targets to build and run the test suite ###########

env.Alias('build_cppunittest', ['target_install_cppunittest_headers', 'target_cppunittest_a'])
env.Alias('install_cppunittest', ['build_cppunittest', 'target_install_cppunittest_a'])
env.Alias('run_tests', ['install_all', 'target_install_cppunittest_a'])
env.Alias('all_tests', ['install_all', 'target_install_cppunittest_a', 'run_tests', 'py_tests'])

############ Targets to build the documentation ################

env.Alias('docs', ['examples_tarfile', 'examples_zipfile', 'api_epydoc', 'api_doxygen', 'guide_pdf', 'guide_html'])
/*
* This is an implementation of wcwidth() and wcswidth() (defined in
* IEEE Std 1003.1-2001) for Unicode.
*
* http://www.opengroup.org/onlinepubs/007904975/functions/wcwidth.html
* http://www.opengroup.org/onlinepubs/007904975/functions/wcswidth.html
*
* In fixed-width output devices, Latin characters all occupy a single
* "cell" position of equal width, whereas ideographic CJK characters
* occupy two such cells. Interoperability between terminal-line
* applications and (teletype-style) character terminals using the
* UTF-8 encoding requires agreement on which character should advance
* the cursor by how many cell positions. No established formal
* standards exist at present on which Unicode character shall occupy
* how many cell positions on character terminals. These routines are
* a first attempt of defining such behavior based on simple rules
* applied to data provided by the Unicode Consortium.
*
* For some graphical characters, the Unicode standard explicitly
* defines a character-cell width via the definition of the East Asian
* FullWidth (F), Wide (W), Half-width (H), and Narrow (Na) classes.
* In all these cases, there is no ambiguity about which width a
* terminal shall use. For characters in the East Asian Ambiguous (A)
* class, the width choice depends purely on a preference of backward
* compatibility with either historic CJK or Western practice.
* Choosing single-width for these characters is easy to justify as
* the appropriate long-term solution, as the CJK practice of
* displaying these characters as double-width comes from historic
* implementation simplicity (8-bit encoded characters were displayed
* single-width and 16-bit ones double-width, even for Greek,
* Cyrillic, etc.) and not any typographic considerations.
*
* Much less clear is the choice of width for the Not East Asian
* (Neutral) class. Existing practice does not dictate a width for any
* of these characters. It would nevertheless make sense
* typographically to allocate two character cells to characters such
* as for instance EM SPACE or VOLUME INTEGRAL, which cannot be
* represented adequately with a single-width glyph. The following
* routines at present merely assign a single-cell width to all
* neutral characters, in the interest of simplicity. This is not
* entirely satisfactory and should be reconsidered before
* establishing a formal standard in this area. At the moment, the
* decision which Not East Asian (Neutral) characters should be
* represented by double-width glyphs cannot yet be answered by
* applying a simple rule from the Unicode database content. Setting
* up a proper standard for the behavior of UTF-8 character terminals
* will require a careful analysis not only of each Unicode character,
* but also of each presentation form, something the author of these
* routines has avoided to do so far.
*
* http://www.unicode.org/unicode/reports/tr11/
*
* Markus Kuhn -- 2007-05-26 (Unicode 5.0)
*
* Permission to use, copy, modify, and distribute this software
* for any purpose and without fee is hereby granted. The author
* disclaims all warranties with regard to this software.
*
* Latest version: http://www.cl.cam.ac.uk/~mgk25/ucs/wcwidth.c
*/
#include <wchar.h>

#include "putty.h" /* for prototypes */

struct interval {
    unsigned int first;
    unsigned int last;
};
/* auxiliary function for binary search in interval table */
static int bisearch(unsigned int ucs, const struct interval *table, int max) {
    int min = 0;
    int mid;

    if (ucs < table[0].first || ucs > table[max].last)
        return 0;
    while (max >= min) {
        mid = (min + max) / 2;
        if (ucs > table[mid].last)
            min = mid + 1;
        else if (ucs < table[mid].first)
            max = mid - 1;
        else
            return 1;
    }

    return 0;
}
/* The following two functions define the column width of an ISO 10646
* character as follows:
*
* - The null character (U+0000) has a column width of 0.
*
* - Other C0/C1 control characters and DEL will lead to a return
* value of -1.
*
* - Non-spacing and enclosing combining characters (general
* category code Mn or Me in the Unicode database) have a
* column width of 0.
*
* - SOFT HYPHEN (U+00AD) has a column width of 1.
*
* - Other format characters (general category code Cf in the Unicode
* database) and ZERO WIDTH SPACE (U+200B) have a column width of 0.
*
* - Hangul Jamo medial vowels and final consonants (U+1160-U+11FF)
* have a column width of 0.
*
* - Spacing characters in the East Asian Wide (W) or East Asian
* Full-width (F) category as defined in Unicode Technical
* Report #11 have a column width of 2.
*
* - All remaining characters (including all printable
* ISO 8859-1 and WGL4 characters, Unicode control characters,
* etc.) have a column width of 1.
*
* This implementation assumes that wchar_t characters are encoded
* in ISO 10646.
*/
int mk_wcwidth(unsigned int ucs)
{
    /* sorted list of non-overlapping intervals of non-spacing characters */
    /* generated by "uniset +cat=Me +cat=Mn +cat=Cf -00AD +1160-11FF +200B c" */
    static const struct interval combining[] = {
{ 0x0300, 0x036F }, { 0x0483, 0x0486 }, { 0x0488, 0x0489 },
{ 0x0591, 0x05BD }, { 0x05BF, 0x05BF }, { 0x05C1, 0x05C2 },
{ 0x05C4, 0x05C5 }, { 0x05C7, 0x05C7 }, { 0x0600, 0x0603 },
{ 0x0610, 0x0615 }, { 0x064B, 0x065E }, { 0x0670, 0x0670 },
{ 0x06D6, 0x06E4 }, { 0x06E7, 0x06E8 }, { 0x06EA, 0x06ED },
{ 0x070F, 0x070F }, { 0x0711, 0x0711 }, { 0x0730, 0x074A },
{ 0x07A6, 0x07B0 }, { 0x07EB, 0x07F3 }, { 0x0901, 0x0902 },
{ 0x093C, 0x093C }, { 0x0941, 0x0948 }, { 0x094D, 0x094D },
{ 0x0951, 0x0954 }, { 0x0962, 0x0963 }, { 0x0981, 0x0981 },
{ 0x09BC, 0x09BC }, { 0x09C1, 0x09C4 }, { 0x09CD, 0x09CD },
{ 0x09E2, 0x09E3 }, { 0x0A01, 0x0A02 }, { 0x0A3C, 0x0A3C },
{ 0x0A41, 0x0A42 }, { 0x0A47, 0x0A48 }, { 0x0A4B, 0x0A4D },
{ 0x0A70, 0x0A71 }, { 0x0A81, 0x0A82 }, { 0x0ABC, 0x0ABC },
{ 0x0AC1, 0x0AC5 }, { 0x0AC7, 0x0AC8 }, { 0x0ACD, 0x0ACD },
{ 0x0AE2, 0x0AE3 }, { 0x0B01, 0x0B01 }, { 0x0B3C, 0x0B3C },
{ 0x0B3F, 0x0B3F }, { 0x0B41, 0x0B43 }, { 0x0B4D, 0x0B4D },
{ 0x0B56, 0x0B56 }, { 0x0B82, 0x0B82 }, { 0x0BC0, 0x0BC0 },
{ 0x0BCD, 0x0BCD }, { 0x0C3E, 0x0C40 }, { 0x0C46, 0x0C48 },
{ 0x0C4A, 0x0C4D }, { 0x0C55, 0x0C56 }, { 0x0CBC, 0x0CBC },
{ 0x0CBF, 0x0CBF }, { 0x0CC6, 0x0CC6 }, { 0x0CCC, 0x0CCD },
{ 0x0CE2, 0x0CE3 }, { 0x0D41, 0x0D43 }, { 0x0D4D, 0x0D4D },
{ 0x0DCA, 0x0DCA }, { 0x0DD2, 0x0DD4 }, { 0x0DD6, 0x0DD6 },
{ 0x0E31, 0x0E31 }, { 0x0E34, 0x0E3A }, { 0x0E47, 0x0E4E },
{ 0x0EB1, 0x0EB1 }, { 0x0EB4, 0x0EB9 }, { 0x0EBB, 0x0EBC },
{ 0x0EC8, 0x0ECD }, { 0x0F18, 0x0F19 }, { 0x0F35, 0x0F35 },
{ 0x0F37, 0x0F37 }, { 0x0F39, 0x0F39 }, { 0x0F71, 0x0F7E },
{ 0x0F80, 0x0F84 }, { 0x0F86, 0x0F87 }, { 0x0F90, 0x0F97 },
{ 0x0F99, 0x0FBC }, { 0x0FC6, 0x0FC6 }, { 0x102D, 0x1030 },
{ 0x1032, 0x1032 }, { 0x1036, 0x1037 }, { 0x1039, 0x1039 },
{ 0x1058, 0x1059 }, { 0x1160, 0x11FF }, { 0x135F, 0x135F },
{ 0x1712, 0x1714 }, { 0x1732, 0x1734 }, { 0x1752, 0x1753 },
{ 0x1772, 0x1773 }, { 0x17B4, 0x17B5 }, { 0x17B7, 0x17BD },
{ 0x17C6, 0x17C6 }, { 0x17C9, 0x17D3 }, { 0x17DD, 0x17DD },
{ 0x180B, 0x180D }, { 0x18A9, 0x18A9 }, { 0x1920, 0x1922 },
{ 0x1927, 0x1928 }, { 0x1932, 0x1932 }, { 0x1939, 0x193B },
{ 0x1A17, 0x1A18 }, { 0x1B00, 0x1B03 }, { 0x1B34, 0x1B34 },
{ 0x1B36, 0x1B3A }, { 0x1B3C, 0x1B3C }, { 0x1B42, 0x1B42 },
{ 0x1B6B, 0x1B73 }, { 0x1DC0, 0x1DCA }, { 0x1DFE, 0x1DFF },
{ 0x200B, 0x200F }, { 0x202A, 0x202E }, { 0x2060, 0x2063 },
{ 0x206A, 0x206F }, { 0x20D0, 0x20EF }, { 0x302A, 0x302F },
{ 0x3099, 0x309A }, { 0xA806, 0xA806 }, { 0xA80B, 0xA80B },
{ 0xA825, 0xA826 }, { 0xFB1E, 0xFB1E }, { 0xFE00, 0xFE0F },
{ 0xFE20, 0xFE23 }, { 0xFEFF, 0xFEFF }, { 0xFFF9, 0xFFFB },
{ 0x10A01, 0x10A03 }, { 0x10A05, 0x10A06 }, { 0x10A0C, 0x10A0F },
{ 0x10A38, 0x10A3A }, { 0x10A3F, 0x10A3F }, { 0x1D167, 0x1D169 },
{ 0x1D173, 0x1D182 }, { 0x1D185, 0x1D18B }, { 0x1D1AA, 0x1D1AD },
{ 0x1D242, 0x1D244 }, { 0xE0001, 0xE0001 }, { 0xE0020, 0xE007F },
{ 0xE0100, 0xE01EF }
    };

    /* test for 8-bit control characters */
    if (ucs == 0)
        return 0;
    if (ucs < 32 || (ucs >= 0x7f && ucs < 0xa0))
        return -1;

    /* binary search in table of non-spacing characters */
    if (bisearch(ucs, combining,
                 sizeof(combining) / sizeof(struct interval) - 1))
        return 0;

    /* if we arrive here, ucs is not a combining or C0/C1 control character */

    return 1 +
        (ucs >= 0x1100 &&
         (ucs <= 0x115f ||                    /* Hangul Jamo init. consonants */
          ucs == 0x2329 || ucs == 0x232a ||
          (ucs >= 0x2e80 && ucs <= 0xa4cf &&
           ucs != 0x303f) ||                  /* CJK ... Yi */
          (ucs >= 0xac00 && ucs <= 0xd7a3) || /* Hangul Syllables */
          (ucs >= 0xf900 && ucs <= 0xfaff) || /* CJK Compatibility Ideographs */
          (ucs >= 0xfe10 && ucs <= 0xfe19) || /* Vertical forms */
          (ucs >= 0xfe30 && ucs <= 0xfe6f) || /* CJK Compatibility Forms */
          (ucs >= 0xff00 && ucs <= 0xff60) || /* Fullwidth Forms */
          (ucs >= 0xffe0 && ucs <= 0xffe6) ||
          (ucs >= 0x20000 && ucs <= 0x2fffd) ||
          (ucs >= 0x30000 && ucs <= 0x3fffd)));
}
int mk_wcswidth(const unsigned int *pwcs, size_t n)
{
    int w, width = 0;

    for (; *pwcs && n-- > 0; pwcs++)
        if ((w = mk_wcwidth(*pwcs)) < 0)
            return -1;
        else
            width += w;

    return width;
}
/*
 * The following functions are the same as mk_wcwidth() and
 * mk_wcswidth(), except that spacing characters in the East Asian
 * Ambiguous (A) category as defined in Unicode Technical Report #11
 * have a column width of 2. This variant might be useful for users of
 * CJK legacy encodings who want to migrate to UCS without changing
 * the traditional terminal character-width behaviour. It is not
 * otherwise recommended for general use.
 */
int mk_wcwidth_cjk(unsigned int ucs)
{
    /* sorted list of non-overlapping intervals of East Asian Ambiguous
     * characters, generated by "uniset +WIDTH-A -cat=Me -cat=Mn -cat=Cf c" */
    static const struct interval ambiguous[] = {
{ 0x00A1, 0x00A1 }, { 0x00A4, 0x00A4 }, { 0x00A7, 0x00A8 },
{ 0x00AA, 0x00AA }, { 0x00AE, 0x00AE }, { 0x00B0, 0x00B4 },
{ 0x00B6, 0x00BA }, { 0x00BC, 0x00BF }, { 0x00C6, 0x00C6 },
{ 0x00D0, 0x00D0 }, { 0x00D7, 0x00D8 }, { 0x00DE, 0x00E1 },
{ 0x00E6, 0x00E6 }, { 0x00E8, 0x00EA }, { 0x00EC, 0x00ED },
{ 0x00F0, 0x00F0 }, { 0x00F2, 0x00F3 }, { 0x00F7, 0x00FA },
{ 0x00FC, 0x00FC }, { 0x00FE, 0x00FE }, { 0x0101, 0x0101 },
{ 0x0111, 0x0111 }, { 0x0113, 0x0113 }, { 0x011B, 0x011B },
{ 0x0126, 0x0127 }, { 0x012B, 0x012B }, { 0x0131, 0x0133 },
{ 0x0138, 0x0138 }, { 0x013F, 0x0142 }, { 0x0144, 0x0144 },
{ 0x0148, 0x014B }, { 0x014D, 0x014D }, { 0x0152, 0x0153 },
{ 0x0166, 0x0167 }, { 0x016B, 0x016B }, { 0x01CE, 0x01CE },
{ 0x01D0, 0x01D0 }, { 0x01D2, 0x01D2 }, { 0x01D4, 0x01D4 },
{ 0x01D6, 0x01D6 }, { 0x01D8, 0x01D8 }, { 0x01DA, 0x01DA },
{ 0x01DC, 0x01DC }, { 0x0251, 0x0251 }, { 0x0261, 0x0261 },
{ 0x02C4, 0x02C4 }, { 0x02C7, 0x02C7 }, { 0x02C9, 0x02CB },
{ 0x02CD, 0x02CD }, { 0x02D0, 0x02D0 }, { 0x02D8, 0x02DB },
{ 0x02DD, 0x02DD }, { 0x02DF, 0x02DF }, { 0x0391, 0x03A1 },
{ 0x03A3, 0x03A9 }, { 0x03B1, 0x03C1 }, { 0x03C3, 0x03C9 },
{ 0x0401, 0x0401 }, { 0x0410, 0x044F }, { 0x0451, 0x0451 },
{ 0x2010, 0x2010 }, { 0x2013, 0x2016 }, { 0x2018, 0x2019 },
{ 0x201C, 0x201D }, { 0x2020, 0x2022 }, { 0x2024, 0x2027 },
{ 0x2030, 0x2030 }, { 0x2032, 0x2033 }, { 0x2035, 0x2035 },
{ 0x203B, 0x203B }, { 0x203E, 0x203E }, { 0x2074, 0x2074 },
{ 0x207F, 0x207F }, { 0x2081, 0x2084 }, { 0x20AC, 0x20AC },
{ 0x2103, 0x2103 }, { 0x2105, 0x2105 }, { 0x2109, 0x2109 },
{ 0x2113, 0x2113 }, { 0x2116, 0x2116 }, { 0x2121, 0x2122 },
{ 0x2126, 0x2126 }, { 0x212B, 0x212B }, { 0x2153, 0x2154 },
{ 0x215B, 0x215E }, { 0x2160, 0x216B }, { 0x2170, 0x2179 },
{ 0x2190, 0x2199 }, { 0x21B8, 0x21B9 }, { 0x21D2, 0x21D2 },
{ 0x21D4, 0x21D4 }, { 0x21E7, 0x21E7 }, { 0x2200, 0x2200 },
{ 0x2202, 0x2203 }, { 0x2207, 0x2208 }, { 0x220B, 0x220B },
{ 0x220F, 0x220F }, { 0x2211, 0x2211 }, { 0x2215, 0x2215 },
{ 0x221A, 0x221A }, { 0x221D, 0x2220 }, { 0x2223, 0x2223 },
{ 0x2225, 0x2225 }, { 0x2227, 0x222C }, { 0x222E, 0x222E },
{ 0x2234, 0x2237 }, { 0x223C, 0x223D }, { 0x2248, 0x2248 },
{ 0x224C, 0x224C }, { 0x2252, 0x2252 }, { 0x2260, 0x2261 },
{ 0x2264, 0x2267 }, { 0x226A, 0x226B }, { 0x226E, 0x226F },
{ 0x2282, 0x2283 }, { 0x2286, 0x2287 }, { 0x2295, 0x2295 },
{ 0x2299, 0x2299 }, { 0x22A5, 0x22A5 }, { 0x22BF, 0x22BF },
{ 0x2312, 0x2312 }, { 0x2460, 0x24E9 }, { 0x24EB, 0x254B },
{ 0x2550, 0x2573 }, { 0x2580, 0x258F }, { 0x2592, 0x2595 },
{ 0x25A0, 0x25A1 }, { 0x25A3, 0x25A9 }, { 0x25B2, 0x25B3 },
{ 0x25B6, 0x25B7 }, { 0x25BC, 0x25BD }, { 0x25C0, 0x25C1 },
{ 0x25C6, 0x25C8 }, { 0x25CB, 0x25CB }, { 0x25CE, 0x25D1 },
{ 0x25E2, 0x25E5 }, { 0x25EF, 0x25EF }, { 0x2605, 0x2606 },
{ 0x2609, 0x2609 }, { 0x260E, 0x260F }, { 0x2614, 0x2615 },
{ 0x261C, 0x261C }, { 0x261E, 0x261E }, { 0x2640, 0x2640 },
{ 0x2642, 0x2642 }, { 0x2660, 0x2661 }, { 0x2663, 0x2665 },
{ 0x2667, 0x266A }, { 0x266C, 0x266D }, { 0x266F, 0x266F },
{ 0x273D, 0x273D }, { 0x2776, 0x277F }, { 0xE000, 0xF8FF },
{ 0xFFFD, 0xFFFD }, { 0xF0000, 0xFFFFD }, { 0x100000, 0x10FFFD }
    };

    /* binary search in table of non-spacing characters */
    if (bisearch(ucs, ambiguous,
                 sizeof(ambiguous) / sizeof(struct interval) - 1))
        return 2;

    return mk_wcwidth(ucs);
}
int mk_wcswidth_cjk(const unsigned int *pwcs, size_t n)
{
    int w, width = 0;

    for (; *pwcs && n-- > 0; pwcs++)
        if ((w = mk_wcwidth_cjk(*pwcs)) < 0)
            return -1;
        else
            width += w;

    return width;
}
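As a quick sanity check of the rules documented above, the two entry points can be exercised with a small standalone driver. This driver is not part of the original file; it assumes it is compiled together with (or linked against) the listing above:

#include <stddef.h>
#include <stdio.h>

int mk_wcwidth(unsigned int ucs);
int mk_wcswidth(const unsigned int *pwcs, size_t n);

int main(void)
{
    /* 'A' is narrow (width 1), U+4E2D is a wide CJK ideograph (width 2),
       and U+0301 COMBINING ACUTE ACCENT is non-spacing (width 0). */
    unsigned int text[] = { 0x0041, 0x0301, 0x4E2D, 0 };

    printf("%d %d %d\n",
           mk_wcwidth(0x0041), mk_wcwidth(0x4E2D), mk_wcwidth(0x0301));
    printf("total: %d\n", mk_wcswidth(text, 3));   /* 1 + 0 + 2 = 3 */
    return 0;
}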
Elite on the BBC Micro
Text: print_safe [Elite-A, I/O processor]
Name: print_safe
Type: Subroutine
Category: Text
Summary: Print a character using the VDU routine in the MOS, to bypass our custom WRCHV handler
Context: See this subroutine in context in the source code
References: This subroutine is called as follows:

* print_wrch calls print_safe
* printer calls print_safe
.print_safe

 PHA                    \ Store the A, Y and X registers on the stack so we can
 TYA                    \ retrieve them after the call to rawrch
 PHA
 TXA
 PHA

 TSX                    \ Transfer the stack pointer S to X

 LDA &103,X             \ The stack starts at &100, with &100+S pointing to the
                        \ top of the stack, so this fetches the third value from
                        \ the stack into A, which is the value of A that we just
                        \ stored on the stack - i.e. the character that we want
                        \ to print

 JSR rawrch             \ Print the character by calling the VDU character
                        \ output routine in the MOS

 PLA                    \ Retrieve the A, Y and X registers from the stack
 TAX
 PLA
 TAY
 PLA

 RTS                    \ Return from the subroutine
A while ago I signed up to participate in the Azure Advent Calendar, an awesome initiative by Gregor Suttie and Richard Hooper.
My goal for my contribution was to summarize the past few months of my Azure Blueprint experiences and provide a clear introduction to managing Blueprints as Code.
During the video we'll briefly touch on the Azure Blueprint basics and create a fresh Blueprint using the Azure Portal.
From there we'll move on to working with artifacts, exporting the newly created Blueprint, and managing that Blueprint as code. For simplicity, and to avoid too deep a technical dive into JSON, I focused on a single artifact, but the principle remains the same. If you're going to start with Blueprints as Code, this will hopefully help you kick-start your journey.
Of course, there are multiple ways to go about doing this. We're using the Azure Portal to help you design an initial Blueprint, understand what's happening, and go from there. But once you get the hang of it and understand how artifacts work and how you can fit your resources into them, I recommend looking into the Azure Blueprints Generator for Visual Studio Code by @AmineCharot. The plugin provides you with a pretty straightforward way of creating your Blueprints from scratch.
The Video
Alright, the video is probably what you are here for. I could write out the entire script, but it's easier to just watch the video :)
The Code
The samples for "MyFirstBlueprint" can be found at https://github.com/whaakman/azureadventcalendar-blueprint-files
The Assignment file
One specific item deserves some explanation: the assignment file.
During the video we used the assignment file as pasted below. As mentioned in the video, the file contains the same kind of values you would provide when assigning through the Azure Portal.
{
"identity": {
"type": "systemAssigned"
},
"location": "West Europe",
"properties": {
"blueprintId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Blueprint/blueprints/bpAsCode",
"resourceGroups": {},
"locks": {
"mode": "AllResourcesDoNotDelete"
},
"parameters": {
"webApplicationshouldonlybeaccessibleoverHTTPS_effect": {
"value": "Audit"
}
}
}
}
When deploying a Blueprint, the Blueprint ID must match what will eventually be present within Azure. The Blueprint ID is pretty easy to determine: it depends on whether the Blueprint is stored at the subscription level or at the Management Group level, and is built up as follows:
Blueprint ID on a Management Group Level
/providers/Microsoft.Management/managementGroups/myManagementGroupId/providers/Microsoft.Blueprint/blueprints/BlueprintName
Blueprint ID on a Subscription Level
/subscriptions/SubscriptionID/providers/Microsoft.Blueprint/blueprints/BlueprintName
Each Blueprint.json also requires a region, as every Azure Resource Manager deployment requires a location. Note that this isn't the actual location your resources will be deployed in; you can still specify a location for your resource or Resource Group deployments.
Also note that the "resourceGroups": {} object is empty but still present. Blueprint.json always requires "resourceGroups": {} to be present, even though you don't necessarily need to deploy a Resource Group. Therefore, the parameter needs to be populated as well and also needs to be present within the assignment file.
Another notable section is "parameters". This is where you store the values of your parameters. I used the same example during the video, but to recap using an ARM template example, this is what's happening:
"We started with adding the value for "webAppName" to the assignment file. During the assignment the value is parsed into the "Blueprint.json". During the artifact deployment (artifact-template-webapp-deploy.json), the parameter value is available to the artifact, which then grabs the value and parses it into the actual ARM template just as if you were providing parameters during an ARM deployment." (https://www.wesleyhaakman.org/deploying-and-managing-your-azure-blueprints-as-code/)
All the background information referred to during the video can be found here:
The use case
https://www.wesleyhaakman.org/why-you-need-to-build-landing-zones-and-use-azure-blueprints/
The basics
https://www.wesleyhaakman.org/deploying-and-managing-your-first-landing-zone-with-azure-blueprints/
Blueprints as Code
https://www.wesleyhaakman.org/deploying-and-managing-your-azure-blueprints-as-code/
Blueprint Level Parameters
https://www.wesleyhaakman.org/azure-blueprints-level-parameters/
Oracle Certified Associate, Java SE 7 Programmer Exam (1Z0-803) Complete Video Course
Video description
Oracle Certified Associate, Java SE 7 Programmer Exam Complete Video Course is a comprehensive training course that brings the Java Certified Associate exam topics to life through the use of real-world live instruction, demonstrations, and animations so these foundational Java programming topics are easy and fun to learn. Simon Roberts, the best-selling author, expert video training instructor, and creator of the original Sun Certified Programmer, Developer, and Architect certifications for Sun Microsystems, will walk you through each topic covered in the exam so you have a full understanding of the material. He will start off by introducing you to the Oracle Certification program and discuss preparation and test-taking strategies so you can confidently get started. Simon will then dive into the exam topics, covering all objectives in the Associate exam using a variety of video presentation styles, from live whiteboarding to code demonstrations and dynamic KeyNote presentations.
About the Instructor
Simon Roberts started his computing career as a programmer in the early 1980's and built several of his own microprocessor-based computers. He moved to Sun Microsystems, Inc. in mid-1995, and almost immediately became the lead Java instructor in the U.K. In 1998, Simon moved to Colorado, where he still lives. While at Sun, he created the Sun Certified Programmer, Developer, and Architect certifications, and worked in teams on several other certifications. He wrote three books on Java, including two certification study guides, one covering the Programmer and Developer exams, and one on the Architect exam. He left Sun in 2004 and became an independent instructor, architect, and software engineer.
Skill Level
• Beginning to Intermediate
Who Should Take This Course
The primary audience includes candidates for the Oracle Certified Java SE 7 Associate Exam. However, anyone interested in building a basic competence in the Java programming language will benefit from using this course.
Course Requirements
The prerequisite is a basic knowledge of Java or, failing that, of another object-oriented programming language in the syntactic tradition of C/C++. For example, a candidate with a good knowledge of C# should be able to benefit from this material, even if they do not have prior experience in Java.
About LiveLessons Videos
The LiveLessons Video Training series publishes hundreds of hands-on, expert-led video tutorials covering a wide selection of technology topics designed to teach you the skills you need to succeed. This professional and personal technology video series features world-leading author instructors published by your trusted technology brands: Addison-Wesley, Cisco Press, IBM Press, Pearson IT Certification, Prentice Hall, Sams, and Que. Topics include: IT Certification, Programming, Web Development, Mobile Development, Home & Office Technologies, Business & Management, and more. View All LiveLessons http://www.informit.com/livelessons
Table of contents
1. Introduction
1. Oracle Certified Associate, Java SE 7 Programmer Exam (1Z0-803): Introduction 00:06:22
2. Module 1: Before You Begin
1. Introduction 00:00:27
3. Lesson 1: Why would I take the Oracle Certified Associate Java Programmer Exam
1. Learning Objectives 00:00:15
2. 1.1 Why would I take the Oracle Certified Associate Java Programmer Exam 00:05:10
4. Lesson 2: The path to certification
1. Learning Objectives 00:00:28
2. 2.1 The path to certification 00:02:23
5. Lesson 3: Preparation strategies
1. Learning Objectives 00:00:25
2. 3.1 Preparation strategies 00:08:55
6. Lesson 4: Test Taking Strategies
1. Learning Objectives 00:00:56
2. 4.1 How to take exam questions 00:08:00
3. 4.2 Prepare for exam questions, confidence, and other resources 00:07:31
7. Module 2: Java Basics
1. Java Basics 00:00:43
8. Lesson 1: Define the scope of variables
1. Learning Objectives 00:00:21
2. 1.1 The meaning of scope, blocks, and curly braces 00:06:46
3. 1.2 Special cases of scope 00:06:25
9. Lesson 2: Define the structure of a Java class
1. Learning Objectives 00:00:43
2. 2.1 Java class files: Contents and naming rules 00:13:27
3. 2.2 Java classes: The class, member variables, methods and constructors 00:08:22
10. Lesson 3: Create executable Java applications with a main method
1. Learning Objectives 00:00:32
2. 3.1 Create executable Java applications with a main method 00:13:04
11. Lesson 4: Import other Java packages to make them accessible in your code
1. Learning objectives 00:01:32
2. 4.1 About packages and their purpose 00:07:10
3. 4.2 Statement order, wildcard imports, importing sub-packages, and handling duplicate class names 00:18:32
12. Module 3: Working with Java Data Types
1. Working with Java Data Types 00:01:14
13. Lesson 1: Declare and initialize variables
1. Learning objectives 00:00:53
2. 1.1 Using the general form of simple declarations 00:05:26
3. 1.2 Using the general form of initalized declarations 00:03:24
4. 1.3 Understanding integer primitive types, literal forms 00:07:29
5. 1.4 Understanding floating point primitive types, literal forms 00:06:35
6. 1.5 Understanding logical and character primitive types, literal forms 00:07:22
14. Lesson 2: Differentiate between object reference variables and primitive variables
1. Learning objectives 00:01:29
2. 2.1 Using the == operator with primitives and references 00:11:07
3. 2.2 Understanding method argument passing 00:06:19
15. Lesson 3: Read or write to object fields
1. Learning objectives 00:01:18
2. 3.1 Selecting a field from a reference expression 00:10:58
3. 3.2 Using "this" to access fields 00:10:44
4. 3.3 Code examples 00:07:26
16. Lesson 4: Explain an Object's Lifecycle (creation, "dereference" and garbage collection)
1. Learning objectives 00:01:08
2. 4.1 Understanding allocation and referencing 00:06:01
3. 4.2 Collecting garbage 00:11:38
17. Lesson 5: Call methods on objects
1. Learning objectives 00:00:55
2. 5.1 Invoking a basic method and expressions that have behavior 00:06:45
3. 5.2 Invoking overloaded methods 00:07:22
4. 5.3 Calling overridden methods 00:05:43
5. 5.4 Distinguishing overridden and overloaded methods 00:04:45
18. Lesson 6: Manipulate data using the StringBuilder class and its methods
1. Learning objectives 00:00:55
2. 6.1 Understanding the common StringBuilder constructors 00:06:52
3. 6.2 Using methods that modify StringBuilders 00:04:26
4. 6.3 Using methods that read and search in StringBuilders, and using methods that interact with the internal storage of StringBuilders 00:07:19
19. Lesson 7: Creating and manipulating Strings
1. Learning objectives 00:01:25
2. 7.1 Creating Strings 00:04:19
3. 7.2 Understanding common String methods: Immutability 00:09:01
4. 7.3 Using common String methods 00:07:20
5. 7.4 Using common String methods to perform comparisons 00:07:22
20. Module 4: Using Operators and Decision Constructs
1. Using Operators and Decision Constructs 00:00:59
21. Lesson 1: Use Java operators
1. Learning Objectives 00:01:36
2. 1.1 Using operators, operands, and expressions 00:03:59
3. 1.2 Using arithmetic operators + - * / % 00:06:20
4. 1.3 Using the plus operator with Strings 00:05:02
5. 1.4 Promoting operands 00:05:05
6. 1.5 Using increment and decrement operators 00:10:46
7. 1.6 Using shift operators 00:10:10
8. 1.7 Using comparison operators 00:04:42
9. 1.8 Using logical operators 00:06:33
10. 1.9 Using short-circuit operators 00:05:43
11. 1.10 Using assignment operators 00:09:43
12. 1.11 Using assignment compatibility 00:07:17
13. 1.12 Using the ternary operator 00:07:38
14. 1.13 Understanding other elements of expressions 00:07:38
22. Lesson 2: Use Parenthesis to override operator precedence
1. Learning Objectives 00:01:43
2. 2.1 Using parentheses and operator precedence 00:08:04
23. Lesson 3: Test equality between Strings and other objects using == and equals ()
1. Learning Objectives 00:00:52
2. 3.1 Understanding the meaning of == and the intended meaning of equals () 00:07:33
3. 3.2 Determining if equals() is implemented, and implementing equals() 00:10:04
24. Lesson 4: Create if and if/else constructs
1. Learning Objectives 00:00:49
2. 4.1 Understanding the basic form of if and if/else 00:03:01
3. 4.2 Using braces with if/else. Effect of "else if" 00:08:12
4. 4.3 Understanding the if / else if / else structure 00:05:40
25. Lesson 5: Use a switch statement
1. Learning Objectives 00:01:01
2. 5.1 Using the general form of switch, case, break, and default 00:04:40
3. 5.2 Code examples for the general form of switch 00:04:59
4. 5.3 Understanding break 00:03:48
5. 5.4 Identifying switchable types 00:03:02
26. Module 5: Creating and Using Arrays
1. Creating and Using Arrays
27. Lesson 1: Declare, instantiate, initialize and use a one-dimensional array
1. Learning Objectives
2. 1.1 Understanding simple array declarations, and variables of array type
3. 1.2 Instantiating an array, array length
4. 1.3 Initializing arrays by iteration, array indexes
5. 1.4 Using a combined declaration and initialization of arrays
6. 1.5 Using immediate array creation not in a declaration
7. 1.6 Initializing arrays by copying
28. Lesson 2: Declare, instantiate, initialize and use multi-dimensional arrays
1. Learning Objectives
2. 2.1 Declaring multi-dimensional arrays
3. 2.2 Using immediate initialization of multi-dimensional arrays
4. 2.3 Using iterative initialization of multi-dimensional arrays
5. 2.4 Code examples for multi-dimensional arrays
29. Lesson 3: Declare and use an ArrayList
1. Learning Objectives
2. 3.1 Understanding the purpose and benefits of ArrayList
3. 3.2 Declaring and initializing an ArrayList
4. 3.3 Using common methods of, and uses of, ArrayList
5. 3.4 Investigating documentation and code for ArrayList
6. 3.5 Understanding simple generics with the ArrayList
30. Module 6: Using Loop Constructs
1. Using Loop Constructs
31. Lesson 1: Create and use while loops
1. Learning Objectives
2. 1.1 Creating and using while loops
3. 1.2 Code examples of the while loop
32. Lesson 2: Create and use for loops including the enhanced for loop
1. Learning Objectives
2. 2.1 Understanding the simple use of the for loop
3. 2.2 Understanding the initialization section of the for loop
4. 2.3 Understanding the test section of the for loop
5. 2.4 Understanding the increment section of the for loop
6. 2.5 Omitting sections of a for loop
7. 2.6 Code examples for basic for loops
8. 2.7 Understanding the simple use of the enhanced for loop
9. 2.8 Identifying the valid targets of the enhanced for loop
10. 2.9 Using the enhanced for loop with generic collections
11. 2.10 Code examples for enhanced for loops
33. Lesson 3: Create and use do/while loops
1. Learning Objectives
2. 3.1 Creating and using do/while loops
34. Lesson 4: Compare loop constructs
1. Learning Objectives
2. 4.1 Comparing while and do while loops
3. 4.2 Comparing while and simple for loops
4. 4.3 Comparing while and enhanced for loops working on Iterables
5. 4.4 Comparing while and enhanced for loops working on arrays
35. Lesson 5: Use break and continue
1. Learning Objectives
2. 5.1 Using break from a single loop
3. 5.2 Using continue in a single loop
4. 5.3 Using a labeled break from multiple loops
5. 5.4 Using a labeled continue from multiple loops
36. Module 7: Working with Methods and Encapsulation
1. Working with Methods and Encapsulation
37. Lesson 1: Create methods with arguments and return values
1. Learning Objectives
2. 1.1 Creating Methods
3. 1.2 Code example
38. Lesson 2: Apply the static keyword to methods and fields
1. Learning Objectives
2. 2.1 Comparing class fields and object fields
3. 2.2 Using static on methods
4. 2.3 Code example
39. Lesson 3: Create an overloaded method
1. Learning Objectives
2. 3.1 Understanding basic syntax of overloaded methods
3. 3.2 Understanding rules and guidance for using overloaded methods
4. 3.3 Code example
40. Lesson 4: Differentiate between default and user defined constructors
1. Learning Objectives
2. 4.1 Differentiating between default and user defined constructors
41. Lesson 5: Create and overload constructors
1. Learning Objectives
2. 5.1 Creating and overloading constructors
42. Lesson 6: Apply access modifiers
1. Learning Objectives
2. 6.1 Using the access modifiers public and private
3. 6.2 Using default access, and the protected modifier
43. Lesson 7: Apply encapsulation principles to a class
1. Learning Objectives
2. 7.1 Designing for encapsulation
3. 7.2 Implementing encapsulation
44. Lesson 8: Determine the effect upon object references and primitive values when they are passed into methods that change the values
1. Learning Objectives
2. 8.1 Changing values through method local variables
3. 8.2 Changing the value of method local variables
4. 8.3 Code example
45. Module 8: Working with Inheritance
1. Working with Inheritance
46. Lesson 1: Implement inheritance
1. Learning Objectives
2. 1.1 Understanding interface and implementation inheritance
3. 1.2 Basic coding of implementation inheritance
4. 1.3 Changing inherited behavior
5. 1.4 Code examples
47. Lesson 2: Develop code that demonstrates the use of polymorphism
1. Learning Objectives
2. 2.1 Understanding the concepts of polymorphism
3. 2.2 Code example
4. 2.3 Understanding the core terminology of polymorphism
48. Lesson 3: Differentiate between the type of a reference and the type of an object
1. Learning Objectives
2. 3.1 Understanding variable type and object type
3. 3.2 Determining object type
4. 3.3 Code examples
49. Lesson 4: Determine when casting is necessary
1. Learning Objectives
2. 4.1 Understanding the Liskov substitution principle and the "is a" relationship
3. 4.2 Recognizing impossible assignments
4. 4.3 Understanding casting with interface types in assignments
50. Lesson 5: Use super and this to access objects and constructors
1. Learning Objectives
2. 5.1 Understanding "this" for accessing object features
3. 5.2 Understanding "super" for accessing parent features
4. 5.3 Understanding "this()" for accessing overloaded constructors
5. 5.4 Understanding "super()" for accessing parent constructors
6. 5.5 Understanding the underlying principles of "this" and "super" for invoking other constructors
7. 5.6 Code examples
51. Lesson 6: Use abstract classes and interfaces
1. Learning Objectives
2. 6.1 Preventing instantiation
3. 6.2 Marking behaviors abstract
4. 6.3 Understanding the rules about abstract classes and methods
5. 6.4 Understanding and defining interfaces
6. 6.5 Implementing and using interfaces
7. 6.6 Code example for interfaces
8. 6.7 Understanding the rules about interfaces
52. Module 9: Handling Exceptions
1. Handling Exceptions
53. Lesson 1: Differentiate among checked exceptions, RuntimeExceptions and Errors
1. Learning Objectives
2. 1.1 Understanding exception types
54. Lesson 2: Create a try-catch block and determine how exceptions alter normal program flow
1. Learning Objectives
2. 2.1 Coding try and catch
3. 2.2 Passing an exception to our caller
4. 2.3 Using finally to clean up resources
5. 2.4 Using the try with resources mechanism
6. 2.5 Code example for try / catch / finally
7. 2.6 Code example for try with resources
55. Lesson 3: Describe what Exceptions are used for in Java
1. Learning Objectives
2. 3.1 Investigating the philosophy of the exception mechanism
56. Lesson 4: Invoke a method that throws an exception
1. Learning Objectives
2. 4.1 Handling exceptions thrown by called code
3. 4.2 Code example
57. Lesson 5: Recognize common exception classes and categories
1. Learning Objectives
2. 5.1 Common Exception Classes
58. Summary
1. Oracle Certified Associate, Java SE 7 Programmer Exam (1Z0-803): Summary
Product information
• Title: Oracle Certified Associate, Java SE 7 Programmer Exam (1Z0-803) Complete Video Course
• Author(s):
• Release date: March 2015
• Publisher(s): Pearson IT Certification
• ISBN: 0133926044
Sine.H
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2016-2017 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.

Class
    Foam::Function1Types::Sine

Description
    Templated sine function with support for an offset level.

        \f[
            a sin(2 \pi f (t - t_0)) s + l
        \f]

    where

    \vartable
        symbol | Description       | Data type
        a      | Amplitude         | Function1<scalar>
        f      | Frequency [1/s]   | Function1<scalar>
        s      | Type scale factor | Function1<Type>
        l      | Type offset level | Function1<Type>
        t_0    | Start time [s]    | scalar
        t      | Time [s]          | scalar
    \endvartable

    Example for a scalar:
    \verbatim
        <entryName> sine;
        <entryName>Coeffs
        {
            frequency 10;
            amplitude 0.1;
            scale     2e-6;
            level     2e-6;
        }
    \endverbatim

    Example for a vector:
    \verbatim
        <entryName> sine;
        <entryName>Coeffs
        {
            frequency 10;
            amplitude 1;
            scale     (1 0.1 0);
            level     (10 1 0);
        }
    \endverbatim

SourceFiles
    Sine.C

\*---------------------------------------------------------------------------*/

#ifndef Sine_H
#define Sine_H

#include "Function1.H"

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

namespace Foam
{
namespace Function1Types
{

/*---------------------------------------------------------------------------*\
                            Class Sine Declaration
\*---------------------------------------------------------------------------*/

template<class Type>
class Sine
:
    public Function1<Type>
{
    // Private data

        //- Start-time for the sin function
        scalar t0_;

        //- Scalar amplitude of the sin function
        autoPtr<Function1<scalar>> amplitude_;

        //- Frequency of the sin function
        autoPtr<Function1<scalar>> frequency_;

        //- Scaling factor of the sin function
        autoPtr<Function1<Type>> scale_;

        //- Level to which the sin function is added
        autoPtr<Function1<Type>> level_;


    // Private Member Functions

        //- Read the coefficients from the given dictionary
        void read(const dictionary& coeffs);

        //- Disallow default bitwise assignment
        void operator=(const Sine<Type>&);


public:

    // Runtime type information
    TypeName("sine");


    // Constructors

        //- Construct from entry name and dictionary
        Sine
        (
            const word& entryName,
            const dictionary& dict
        );

        //- Copy constructor
        Sine(const Sine<Type>& se);

        //- Construct and return a clone
        virtual tmp<Function1<Type>> clone() const
        {
            return tmp<Function1<Type>>(new Sine<Type>(*this));
        }


    //- Destructor
    virtual ~Sine();


    // Member Functions

        //- Return value for time t
        Type value(const scalar t) const;

        //- Write in dictionary format
        virtual void writeData(Ostream& os) const;
};


// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

} // End namespace Function1Types
} // End namespace Foam

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

#ifdef NoRepository
    #include "Sine.C"
#endif

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

#endif

// ************************************************************************* //
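The documented formula is easy to sanity-check outside OpenFOAM. The sketch below (plain C, not part of OpenFOAM) evaluates a sin(2 pi f (t - t0)) s + l with the scalar coefficients from the first example above:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    double t0 = 0.0;      /* start time [s]    */
    double a = 0.1;       /* amplitude         */
    double f = 10.0;      /* frequency [1/s]   */
    double scale = 2e-6;  /* type scale factor */
    double level = 2e-6;  /* type offset level */
    double t;

    for (t = 0.0; t <= 0.1; t += 0.025)
    {
        double value = a * sin(2.0 * pi * f * (t - t0)) * scale + level;
        printf("t = %5.3f  value = %g\n", t, value);
    }
    return 0;
}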
Input loop
From Rosetta Code
Input loop is part of Short Circuit's Console Program Basics selection.
Task
Read from a text stream either word-by-word or line-by-line until the stream runs out of data.
The stream will have an unknown amount of data on it.
Ada
This example reads in a text stream from standard input line by line and writes the output to standard output.
with Ada.Text_Io; use Ada.Text_Io;
procedure Read_Stream is
Line : String(1..10);
Length : Natural;
begin
while not End_Of_File loop
Get_Line(Line, Length); -- read up to 10 characters at a time
Put(Line(1..Length));
-- The current line of input data may be longer than the string receiving the data.
-- If so, the current input file column number will be greater than 0
-- and the extra data will be unread until the next iteration.
-- If not, we have read past an end of line marker and col will be 1
if Col(Current_Input) = 1 then
New_Line;
end if;
end loop;
end Read_Stream;
Aime
void
read_stream(file f)
{
text s;
while (f_line(f, s) != -1) {
# the read line available as -s-
}
}
ALGOL 68
For a file consisting of just one page - a typical linux/unix file:
main:(
PROC raise logical file end = (REF FILE f) BOOL: ( except logical file end );
on logical file end(stand in, raise logical file end);
DO
print(read string);
read(new line);
print(new line)
OD;
except logical file end:
SKIP
)
For multi page files, each page is seekable with PROC set = (REF FILE file, INT page, line, char)VOID: ~. This allows rudimentary random access where each new page is effectively a new record.
main:(
PROC raise logical file end = (REF FILE f) BOOL: ( except logical file end );
on logical file end(stand in, raise logical file end);
DO
PROC raise page end = (REF FILE f) BOOL: ( except page end );
on page end(stand in, raise page end);
DO
print(read string);
read(new line);
print(new line)
OD;
except page end:
read(new page);
print(new page)
OD;
except logical file end:
SKIP
)
The boolean functions physical file ended(f), logical file ended(f), page ended(f) and line ended(f) are also available to indicate the end of a file, page and line.
ALGOL W
begin
string(80) line;
% allow the program to continue after reaching end-of-file %
% without this, end-of-file would cause a run-time error %
ENDFILE := EXCEPTION( false, 1, 0, false, "EOF" );
% read lines until end of file %
read( line );
while not XCPNOTED(ENDFILE) do begin
write( line );
read( line )
end
end.
AmigaE
CONST BUFLEN=1024, EOF=-1
PROC consume_input(fh)
DEF buf[BUFLEN] : STRING, r
REPEAT
/* even if the line is longer than BUFLEN,
ReadStr won't overflow; rather the line is
split and the remaining part is read in
the next ReadStr */
r := ReadStr(fh, buf)
IF buf[] OR (r <> EOF)
-> do something
WriteF('\s\n',buf)
ENDIF
UNTIL r=EOF
ENDPROC
PROC main()
DEF fh
fh := Open('basicinputloop.e', OLDFILE)
IF fh
consume_input(fh)
Close(fh)
ENDIF
ENDPROC
AutoHotkey
This example reads the text of a source file line by line and writes the output to a destination file.
Loop, Read, Input.txt, Output.txt
{
FileAppend, %A_LoopReadLine%`n
}
AWK
This just reads lines from stdin and prints them until EOF is read.
{ print $0 }
or, more idiomatic:
1
Batch File
for /f %%i in (file.txt) do if %%i@ neq @ echo %%i
BASIC
Applesoft BASIC
100 INPUT "FILENAME:";F$
110 D$ = CHR$(4)
120 PRINT D$"VERIFY"F$
130 PRINT D$"OPEN"F$
140 PRINT D$"READ"F$
150 ONERR GOTO 190
160 GET C$
170 PRINT CHR$(0)C$;
180 GOTO 160
190 POKE 216,0
200 IF PEEK(222) <> 5 THEN RESUME
210 PRINT D$"CLOSE"F$
BBC BASIC
This specifically relates to console input (stdin).
STD_INPUT_HANDLE = -10
STD_OUTPUT_HANDLE = -11
SYS "GetStdHandle", STD_INPUT_HANDLE TO @hfile%(1)
SYS "GetStdHandle", STD_OUTPUT_HANDLE TO @hfile%(2)
SYS "SetConsoleMode", @hfile%(1), 0
*INPUT 13
*OUTPUT 14
REPEAT
INPUT A$
PRINT A$
UNTIL FALSE
IS-BASIC
100 PROGRAM "Type.bas"
110 TEXT 80
120 INPUT PROMPT "File name: ":F$
130 WHEN EXCEPTION USE IOERROR
140 OPEN #1:F$ ACCESS INPUT
150 DO
160 LINE INPUT #1,IF MISSING EXIT DO:F$
170 PRINT F$
180 LOOP
190 CLOSE #1
200 END WHEN
210 HANDLER IOERROR
220 PRINT EXSTRING$(EXTYPE)
230 END
240 END HANDLER
Alternate solution:
100 PROGRAM "Type.bas"
110 INPUT PROMPT "File name: ":F$
120 WHEN EXCEPTION USE IOERROR
130 OPEN #1:F$
140 COPY FROM #1 TO #0
150 CLOSE #1
160 END WHEN
170 HANDLER IOERROR
180 PRINT EXSTRING$(EXTYPE)
190 CLOSE #1
200 END HANDLER
Bracmat
This example first creates a test file with three lines. It then opens the file in read mode, sets the string of break characters, and reads the file token by token, where tokens are delimited by break characters. Finally, the file position is set to an invalid value, which closes the file.
( put$("This is
a three line
text","test.txt",NEW)
& fil$("test.txt",r)
& fil$(,STR," \t\r\n")
& 0:?linenr
& whl
' ( fil$:(?line.?breakchar)
& put
$ ( str
$ ( "breakchar:"
( !breakchar:" "&SP
| !breakchar:\t&"\\t"
| !breakchar:\r&"\\r"
| !breakchar:\n&"\\n"
| !breakchar:&EOF
)
", word "
(1+!linenr:?linenr)
":"
!line
\n
)
)
)
& (fil$(,SET,-1)|out$"file closed")
);
Output:
breakchar:SP, word 1:This
breakchar:\n, word 2:is
breakchar:SP, word 3:a
breakchar:SP, word 4:three
breakchar:\n, word 5:line
breakchar:EOF, word 6:text
file closed
C
Reads arbitrarily long line each time and return a null-terminated string. Caller is responsible for freeing the string.
#include <stdlib.h>
#include <stdio.h>
char *get_line(FILE* fp)
{
int len = 0, got = 0, c;
char *buf = 0;
while ((c = fgetc(fp)) != EOF) {
if (got + 1 >= len) {
len *= 2;
if (len < 4) len = 4;
buf = realloc(buf, len);
}
buf[got++] = c;
if (c == '\n') break;
}
if (c == EOF && !got) return 0;
buf[got++] = '\0';
return buf;
}
int main()
{
char *s;
while ((s = get_line(stdin))) {
printf("%s",s);
free(s);
}
return 0;
}
C++
The following functions send the words resp. lines to a generic output iterator.
#include <istream>
#include <string>
#include <vector>
#include <algorithm>
#include <iostream>
#include <iterator>
// word by word
template<class OutIt>
void read_words(std::istream& is, OutIt dest)
{
std::string word;
while (is >> word)
{
// send the word to the output iterator
*dest = word;
}
}
// line by line:
template<class OutIt>
void read_lines(std::istream& is, OutIt dest)
{
std::string line;
while (std::getline(is, line))
{
// store the line to the output iterator
*dest = line;
}
}
int main()
{
// 1) sending words from std. in std. out (end with Return)
read_words(std::cin,
std::ostream_iterator<std::string>(std::cout, " "));
// 2) appending lines from std. to vector (end with Ctrl+Z)
std::vector<std::string> v;
read_lines(std::cin, std::back_inserter(v));
return 0;
}
An alternate way to read words or lines is to use istream iterators:
template<class OutIt>
void read_words(std::istream& is, OutIt dest)
{
typedef std::istream_iterator<std::string> InIt;
std::copy(InIt(is), InIt(),
dest);
}
namespace detail
{
struct ReadableLine : public std::string
{
friend std::istream & operator>>(std::istream & is, ReadableLine & line)
{
return std::getline(is, line);
}
};
}
template<class OutIt>
void read_lines(std::istream& is, OutIt dest)
{
typedef std::istream_iterator<detail::ReadableLine> InIt;
std::copy(InIt(is), InIt(),
dest);
}
C#
using System;
using System.IO;
class Program
{
static void Main(string[] args)
{
// For stdin, you could use
// new StreamReader(Console.OpenStandardInput(), Console.InputEncoding)
using (var b = new StreamReader("file.txt"))
{
string line;
while ((line = b.ReadLine()) != null)
Console.WriteLine(line);
}
}
}
Clojure
(defn basic-input [fname]
(line-seq (java.io.BufferedReader. (java.io.FileReader. fname))))
COBOL
Works with: GNU Cobol version 2.0
Works with: Visual COBOL
IDENTIFICATION DIVISION.
PROGRAM-ID. input-loop.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT in-stream ASSIGN TO KEYBOARD *> or any other file/stream
ORGANIZATION LINE SEQUENTIAL
FILE STATUS in-stream-status.
DATA DIVISION.
FILE SECTION.
FD in-stream.
01 stream-line PIC X(80).
WORKING-STORAGE SECTION.
01 in-stream-status PIC 99.
88 end-of-stream VALUE 10.
PROCEDURE DIVISION.
OPEN INPUT in-stream
PERFORM UNTIL EXIT
READ in-stream
AT END
EXIT PERFORM
END-READ
DISPLAY stream-line
END-PERFORM
CLOSE in-stream
.
END PROGRAM input-loop.
Common Lisp
(defun basic-input (filename)
(with-open-file (stream (make-pathname :name filename) :direction :input)
(loop for line = (read-line stream nil nil)
while line
do (format t "~a~%" line))))
D
void main() {
import std.stdio;
immutable fileName = "input_loop1.d";
foreach (const line; fileName.File.byLine) {
pragma(msg, typeof(line)); // Prints: const(char[])
// line is a transient slice, so if you need to
// retain it for later use, you have to .dup or .idup it.
line.writeln; // Do something with each line.
}
// Keeping the line terminators:
foreach (const line; fileName.File.byLine(KeepTerminator.yes)) {
// line is a transient slice.
line.writeln;
}
foreach (const string line; fileName.File.lines) {
// line is a transient slice.
line.writeln;
}
}
Library: Tango
import tango.io.Console;
import tango.text.stream.LineIterator;
void main (char[][] args) {
foreach (line; new LineIterator!(char)(Cin.input)) {
// do something with each line
}
}
Library: Tango
import tango.io.Console;
import tango.text.stream.SimpleIterator;
void main (char[][] args) {
foreach (word; new SimpleIterator!(char)(" ", Cin.input)) {
// do something with each word
}
}
Note that foreach variables 'line' and 'word' are transient slices. If you need to retain them for later use, you should .dup them.
Delphi
program InputLoop;
{$APPTYPE CONSOLE}
uses SysUtils, Classes;
var
lReader: TStreamReader; // Introduced in Delphi XE
begin
lReader := TStreamReader.Create('input.txt', TEncoding.Default);
try
while lReader.Peek >= 0 do
Writeln(lReader.ReadLine);
finally
lReader.Free;
end;
end.
Déjà Vu
while /= :eof dup !read-line!stdin:
!print( "Read a line: " !decode!utf-8 swap )
drop
!print "End of file."
EasyLang
repeat
l$ = input
until error = 1
print l$
.
Eiffel
Works with: Eiffel Studio version 6.6
note
description : "{
There are several examples included, including input from a text file,
simple console input and input from standard input explicitly.
See notes in the code for details.
Examples were compile using Eiffel Studio 6.6 with only the default
class libraries.
}"
class APPLICATION
create
make
feature
make
do
-- These examples show non-console input (a plain text file)
-- with end-of-input handling.
read_lines_from_file
read_words_from_file
-- These examples use simplified input from 'io', that
-- handles the details of whether it's stdin or not
-- They terminate on a line (word) of "q"
read_lines_from_console_with_termination
read_words_from_console_with_termination
-- The next examples show reading stdin explicitly
-- as if it were a text file. It expects and end of file
-- termination and so will loop indefinitely unless reading
-- from a pipe or your console can send an EOF.
read_lines_from_stdin
read_words_from_stdin
-- These examples use simplified input from 'io', that
-- handles the details of whether it's stdin or not,
-- but have no explicit termination
read_lines_from_console_forever
read_words_from_console_forever
end
--|--------------------------------------------------------------
read_lines_from_file
-- Read input from a text file
-- Echo each line of the file to standard output.
--
-- Some language examples omit file open/close operations
-- but they are included here for completeness. Additional error
-- checking would be appropriate in production code.
local
tf: PLAIN_TEXT_FILE
do
print ("Reading lines from a file%N")
create tf.make ("myfile") -- Create a file object
tf.open_read -- Open the file in read mode
-- The actual input loop
from
until tf.end_of_file
loop
tf.read_line
print (tf.last_string + "%N")
end
tf.close -- Close the file
end
--|--------------------------------------------------------------
read_words_from_file
-- Read input from a text file
-- Echo each word of the file to standard output on a
-- separate line.
--
-- Some language examples omit file open/close operations
-- but are included here for completeness. Additional error
-- checking would be appropriate in production code.
local
tf: PLAIN_TEXT_FILE
do
print ("Reading words from a file%N")
create tf.make ("myfile") -- Create a file object
tf.open_read -- Open the file in read mode
-- The actual input loop
from
until tf.end_of_file
loop
-- This instruction is the only difference between this
-- example and the read_lines_from_file example
tf.read_word
print (tf.last_string + "%N")
end
tf.close -- Close the file
end
--|--------------------------------------------------------------
read_lines_from_console_with_termination
-- Read lines from console and echo them back to output
-- until the line contains only the termination key 'q'
--
-- 'io' is acquired through inheritance from class ANY,
-- the top of all inheritance hierarchies.
local
the_cows_come_home: BOOLEAN
do
print ("Reading lines from console%N")
from
until the_cows_come_home
loop
io.read_line
if io.last_string ~ "q" then
the_cows_come_home := True
print ("Mooooo!%N")
else
print (io.last_string)
io.new_line
end
end
end
--|--------------------------------------------------------------
read_words_from_console_with_termination
-- Read words from console and echo them back to output, one
-- word per line, until the line contains only the
-- termination key 'q'
--
-- 'io' is acquired through inheritance from class ANY,
-- the top of all inheritance hierarchies.
local
the_cows_come_home: BOOLEAN
do
print ("Reading words from console%N")
from
until the_cows_come_home
loop
io.read_word
if io.last_string ~ "q" then
the_cows_come_home := True
print ("Mooooo!%N")
else
print (io.last_string)
io.new_line
end
end
end
--|--------------------------------------------------------------
read_lines_from_console_forever
-- Read lines from console and echo them back to output
-- until the program is terminated externally
--
-- 'io' is acquired through inheritance from class ANY,
-- the top of all inheritance hierarchies.
do
print ("Reading lines from console (no termination)%N")
from
until False
loop
io.read_line
print (io.last_string + "%N")
end
end
--|--------------------------------------------------------------
read_words_from_console_forever
-- Read words from console and echo them back to output, one
-- word per line until the program is terminated externally
--
-- 'io' is acquired through inheritance from class ANY,
-- the top of all inheritance hierarchies.
do
print ("Reading words from console (no termination)%N")
from
until False
loop
io.read_word
print (io.last_string + "%N")
end
end
--|--------------------------------------------------------------
read_lines_from_stdin
-- Read input from a stream on standard input
-- Echo each line of the file to standard output.
-- Note that we treat standard input as if it were a plain
-- text file
local
tf: PLAIN_TEXT_FILE
do
print ("Reading lines from stdin (EOF termination)%N")
tf := io.input
from
until tf.end_of_file
loop
tf.read_line
print (tf.last_string + "%N")
end
end
--|--------------------------------------------------------------
read_words_from_stdin
-- Read input from a stream on standard input
-- Echo each word of the file to standard output on a new
-- line
-- Note that we treat standard input as if it were a plain
-- text file
local
tf: PLAIN_TEXT_FILE
do
print ("Reading words from stdin (EOF termination)%N")
tf := io.input
from
until tf.end_of_file
loop
tf.read_line
print (tf.last_string + "%N")
end
end
end
Elena[edit]
ELENA 4.x:
Using ReaderEnumerator
import system'routines;
import system'io;
import extensions'routines;
public program()
{
ReaderEnumerator.new(File.assign:"file.txt").forEach(printingLn)
}
Using loop statement
import system'io;
public program()
{
var reader := File.assign:"file.txt".textreader();
while (reader.Available)
{
console.writeLine(reader.readLine())
}
}
Elixir[edit]
defmodule RC do
def input_loop(stream) do
case IO.read(stream, :line) do
:eof -> :ok
data -> IO.write data
input_loop(stream)
end
end
end
path = hd(System.argv)
File.open!(path, [:read], fn stream -> RC.input_loop(stream) end)
Erlang[edit]
% Implemented by Arjun Sunel
-module(read_files).
-export([main/0]).
main() ->
Read = fun (Filename) -> {ok, Data} = file:read_file(Filename), Data end,
Lines = string:tokens(binary_to_list(Read("read_files.erl")), "\n"),
lists:foreach(fun (Y) -> io:format("~s~n", [Y]) end, lists:zipwith(fun(X,_)->X end, Lines, lists:seq(1, length(Lines)))).
ERRE[edit]
input from stdio
LOOP
INPUT(LINE,A$)
PRINT(A$)
EXIT IF <condition> ! condition to be implemented
! to avoid an endless loop
END LOOP
reading a text file line by line
OPEN("I",#1,FILENAME$)
WHILE NOT EOF(1)
INPUT(LINE,#1,A$)
PRINT(A$)
END WHILE
CLOSE(1)
Note: with GET(#1) you can read character by character.
Euphoria[edit]
Process text stream line-by-line:
procedure process_line_by_line(integer fn)
object line
while 1 do
line = gets(fn)
if atom(line) then
exit
end if
-- process the line
end while
end procedure
F#[edit]
Using a sequence expression:
let lines_of_file file =
seq { use stream = System.IO.File.OpenRead file
use reader = new System.IO.StreamReader(stream)
while not reader.EndOfStream do
yield reader.ReadLine() }
The file is reopened every time the sequence is traversed and lines are read on-demand so this can handle arbitrarily-large files.
Factor[edit]
"file.txt" utf8 [ [ process-line ] each-line ] with-file-reader
Fantom[edit]
An input stream can come from a string or from a file. The method eachLine divides the stream at line breaks. The method readStrToken takes two arguments: a maximum size to read, and a function deciding when to stop reading; by default, it stops when it finds whitespace.
class Main
{
public static Void main ()
{
// example of reading by line
str := "first\nsecond\nthird\nword"
inputStream := str.in
inputStream.eachLine |Str line|
{
echo ("Line is: $line")
}
// example of reading by word
str = "first second third word"
inputStream = str.in
word := inputStream.readStrToken // reads up to but excluding next space
while (word != null)
{
echo ("Word: $word")
inputStream.readChar // skip over the preceding space!
word = inputStream.readStrToken
}
}
}
Forth[edit]
Works with: GNU Forth
4096 constant max-line
: read-lines
begin stdin pad max-line read-line throw
while pad swap \ addr len is the line of data, excluding newline
2drop
repeat ;
Fortran[edit]
Works with: Fortran version 90 and later
The code reads line by line, but the maximum length of a line is limited (by a parameter)
program BasicInputLoop
implicit none
integer, parameter :: in = 50, &
linelen = 1000
integer :: ecode
character(len=linelen) :: l
open(in, file="afile.txt", action="read", status="old", iostat=ecode)
if ( ecode == 0 ) then
do
read(in, fmt="(A)", iostat=ecode) l
if ( ecode /= 0 ) exit
write(*,*) trim(l)
end do
close(in)
end if
end program BasicInputLoop
FreeBASIC[edit]
' FB 1.05.0 Win64
Dim line_ As String ' line is a keyword
Open "input.txt" For Input As #1
While Not Eof(1)
Input #1, line_
Print line_ ' echo to the console
Wend
Close #1
Print
Print "Press any key to quit"
Sleep
Frink[edit]
while (line = readStdin[]) != undef
println[line]
gnuplot[edit]
The following gnuplot script echoes standard input to standard output line-by-line until the end of the stream.
!cat
It makes use of the ability of gnuplot to spawn shell commands. In that sense it might be considered cheating. Nevertheless, this is a valid gnuplot script that does meet the requirements of the task description.
It seems impossible to complete this task with just standard gnuplot commands.
Go[edit]
The following reads a line at a time from stdin.
package main
import (
"bufio"
"io"
"log"
"os"
)
func main() {
in := bufio.NewReader(os.Stdin)
for {
s, err := in.ReadString('\n')
if err != nil {
// io.EOF is expected, anything else
// should be handled/reported
if err != io.EOF {
log.Fatal(err)
}
break
}
// Do something with the line of text
// in string variable s.
_ = s
}
}
Or, using bufio.Scanner you can read line at a time, word at a time, byte or Unicode code point at a time, or by any custom "split function".
package main
import (
"bufio"
"log"
"os"
)
func main() {
s := bufio.NewScanner(os.Stdin)
// Select the split function, other ones are available
// in bufio or you can provide your own.
s.Split(bufio.ScanWords)
for s.Scan() {
// Get and use the next 'token'
asBytes := s.Bytes() // Bytes does no allocation
asString := s.Text() // Text returns a newly allocated string
_, _ = asBytes, asString
}
if err := s.Err(); err != nil {
// Handle/report any error (EOF will not be reported)
log.Fatal(err)
}
}
FutureBasic[edit]
Note: This code goes beyond simply specifying the file to open. It includes a dialog window that allows the user to select a text file to read. The entire contents of the file are read in at once, rather than line by line.
include "ConsoleWindow"
local fn ReadTextFile
dim as CFURLRef fileRef
dim as Handle h
dim as CFStringRef cfStr : cfStr = NULL
dim as long fileLen
if ( files$( _CFURLRefOpen, "TEXT", "Select text file...", @fileRef ) )
open "i", 2, fileRef
fileLen = lof( 2, 1 )
h = fn NewHandleClear( fileLen )
if ( h )
read file 2, [h], fileLen
close #2
cfStr = fn CFStringCreateWithBytes( _kCFAllocatorDefault, #[h], fn GetHandleSize(h), _kCFStringEncodingMacRoman, _false )
fn DisposeH( h )
end if
else
// User canceled
end if
fn HIViewSetText( sConsoleHITextView, cfStr )
CFRelease( cfStr )
end fn
fn ReadTextFile
Groovy[edit]
Solution:
def lineMap = [:]
System.in.eachLine { line, i ->
lineMap[i] = line
}
lineMap.each { println it }
Test:
$ groovy -e 'def lineMap = [:]
> System.in.eachLine { line, i ->
> lineMap[i] = line
> }
> lineMap.each { println it }' <<EOF
>
> Whose woods these are I think I know
> His house is in the village tho'
> He will not see me stopping here
> To watch his woods fill up with snow
> EOF
Output:
1=
2=Whose woods these are I think I know
3=His house is in the village tho'
4=He will not see me stopping here
5=To watch his woods fill up with snow
Haskell[edit]
The whole contents of a file can be read lazily. The standard functions lines and words convert that lazily into lists of lines or words, respectively. Usually one wouldn't write extra routines for this, but would just use readFile and then apply lines or words in the next processing step.
import System.IO
readLines :: Handle -> IO [String]
readLines h = do
s <- hGetContents h
return $ lines s
readWords :: Handle -> IO [String]
readWords h = do
s <- hGetContents h
return $ words s
HicEst[edit]
CHARACTER name='myfile.txt', string*1000
OPEN(FIle=name, OLD, LENgth=bytes, IOStat=errorcode, ERror=9)
DO line = 1, bytes ! loop terminates with end-of-file error at the latest
READ(FIle=name, IOStat=errorcode, ERror=9) string
WRITE(StatusBar) string
ENDDO
9 WRITE(Messagebox, Name) line, errorcode
i[edit]
software {
loop {
read()
errors {
exit
}
}
}
Icon and Unicon[edit]
link str2toks
# call either words or lines depending on what you want to do.
procedure main()
words()
end
procedure lines()
while write(read())
end
procedure words()
local line
while line := read() do line ? every write(str2toks())
end
See str2toks
J[edit]
Script "read-input-until-eof.ijs":
#!/Applications/j602/bin/jconsole
NB. read input until EOF
((1!:1) 3)(1!:2) 4
exit ''
Example:
$ ./read-input-until-eof.ijs <<EOF
> abc
> def
> ghi
> now is the time for all good men ...
> EOF
abc
def
ghi
now is the time for all good men ...
Java[edit]
Some people prefer Scanner or BufferedReader, so a way with each is presented.
import java.io.InputStream;
import java.util.Scanner;
public class InputLoop {
public static void main(String args[]) {
// To read from stdin:
InputStream source = System.in;
/*
Or, to read from a file:
InputStream source = new FileInputStream(filename);
Or, to read from a network stream:
InputStream source = socket.getInputStream();
*/
Scanner in = new Scanner(source);
while(in.hasNext()){
String input = in.next(); // Use in.nextLine() for line-by-line reading
// Process the input here. For example, you could print it out:
System.out.println(input);
}
}
}
Or
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
public class InputLoop {
public static void main(String args[]) {
// To read from stdin:
Reader reader = new InputStreamReader(System.in);
/*
Or, to read from a file:
Reader reader = new FileReader(filename);
Or, to read from a network stream:
Reader reader = new InputStreamReader(socket.getInputStream());
*/
try {
BufferedReader inp = new BufferedReader(reader);
while(inp.ready()) {
int input = inp.read(); // Use in.readLine() for line-by-line
// Process the input here. For example, you can print it out.
System.out.println(input);
}
} catch (IOException e) {
// There was an input error.
}
}
}
JavaScript[edit]
Works with: SpiderMonkey
Works with: OSSP js
These implementations of JavaScript define a readline() function, so:
$ js -e 'while (line = readline()) { do_something_with(line); }' < inputfile
Works with: JScript
As above, this operates on standard input
var text_stream = WScript.StdIn;
var i = 0;
while ( ! text_stream.AtEndOfStream ) {
var line = text_stream.ReadLine();
// do something with line
WScript.echo(++i + ": " + line);
}
jq[edit]
The jq program for reading and writing is simply the one-character program:
.
For example, to echo each line of text in a file, one could invoke jq as follows:
jq -r -R . FILENAME
If the input file consists of well-formed JSON entities (including scalars), then the following invocation could be used to "pretty-print" the input:
jq . FILENAME
Other options, e.g. to emit JSON in compact form, also exist.
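For example, the -c flag emits each JSON entity in compact form, one per line (a minimal illustration):
jq -c . FILENAME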
Jsish[edit]
/* Input loop in Jsish */
var line;
var cs = 0, ls = 0;
while (line = console.input()) { cs += line.length; ls += 1; }
printf("%d lines, %d characters\n", ls, cs);
Output:
prompt$ jsish input-loop.jsi <input-loop.jsi
8 lines, 159 characters
Julia[edit]
We create a text stream and read the lines from the stream one by one, printing them on screen. Note that the lines end with a newline, except possibly the last one. The ending newlines are part of the strings returned by the function readline. Once the end of the stream is reached, readline returns an empty string.
stream = IOBuffer("1\n2\n3\n4\n\n6")
while !eof(stream)
line = readline(stream)
println(line)
end
Output:
1
2
3
4
6
Kotlin[edit]
// version 1.1
import java.util.*
fun main(args: Array<String>) {
println("Keep entering text or the word 'quit' to end the program:")
val sc = Scanner(System.`in`)
val words = mutableListOf<String>()
while (true) {
val input: String = sc.next()
if (input.trim().toLowerCase() == "quit") {
if (words.size > 0) println("\nYou entered the following words:\n${words.joinToString("\n")}")
return
}
words.add(input)
}
}
Sample input/output:
Output:
Keep entering text or the word 'quit' to end the program:
The quick brown fox
jumps over the lazy dog
quit
You entered the following words:
The
quick
brown
fox
jumps
over
the
lazy
dog
Lang5[edit]
: read-lines do read . "\n" . eof if break then loop ;
: ==>contents
'< swap open 'fh set fh fin read-lines fh close ;
'file.txt ==>contents
Lasso[edit]
local(
myfile = file('//path/to/file.txt'),
textresult = array
)
#myfile -> foreachline => {
#textresult -> insert(#1)
}
#textresult -> join('<br />')
Result: This is line one I am the second line Here is line 3
Liberty BASIC[edit]
filedialog "Open","*.txt",file$
if file$="" then end
open file$ for input as #f
while not(eof(#f))
line input #f, t$
print t$
wend
close #f
end
Logo[edit]
There are several words which will return a line of input.
• readline - returns a line as a list of words
• readword - returns a line as a single word, or an empty list if it reached the end of file
• readrawline - returns a line as a single word, with no characters escaped
while [not eof?] [print readline]
LSL[edit]
LSL doesn't have a file system, but it does have Notecards that function as read-only text files and can be used as configuration files or data sources.
To test it yourself: rez a box on the ground, add the following as a new script, create a notecard named "Input_Loop_Data_Source.txt", and put whatever data you want in it (in this case I just put a copy of the source code.)
string sNOTECARD = "Input_Loop_Data_Source.txt";
default {
integer iNotecardLine = 0;
state_entry() {
llOwnerSay("Reading '"+sNOTECARD+"'");
llGetNotecardLine(sNOTECARD, iNotecardLine);
}
dataserver(key kRequestId, string sData) {
if(sData==EOF) {
llOwnerSay("EOF");
} else {
llOwnerSay((string)iNotecardLine+": "+sData);
llGetNotecardLine(sNOTECARD, ++iNotecardLine);
}
}
}
Output:
Reading 'Input_Loop_Data_Source.txt'
0: string sNOTECARD = "Input_Loop_Data_Source.txt";
1: default {
2: integer iNotecardLine = 0;
3: state_entry() {
4: llOwnerSay("Reading '"+sNOTECARD+"'");
5: llGetNotecardLine(sNOTECARD, iNotecardLine);
6: }
7: dataserver(key kRequestId, string sData) {
8: if(sData==EOF) {
9: llOwnerSay("EOF");
10: } else {
11: llOwnerSay((string)iNotecardLine+": "+sData);
12: llGetNotecardLine(sNOTECARD, ++iNotecardLine);
13: }
14: }
15: }
EOF
Lua[edit]
lines = {}
str = io.read()
while str do
table.insert(lines,str)
str = io.read()
end
Via generic for loop[edit]
Reads line-by-line via an iterator (from stdin). Substitute io.lines() with io.open(filename, "r"):lines() to read from a file.
lines = {}
for line in io.lines() do
table.insert(lines, line) -- add the line to the list of lines
end
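A minimal sketch of the file-based variant just described (the filename input.txt is an assumption; note the handle should be closed when done):
local f = io.open("input.txt", "r") -- assumes the file exists and is readable
for line in f:lines() do
  print(line)
end
f:close()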
M2000 Interpreter[edit]
Document A$={1st Line
2nd line
3rd line
}
Save.Doc A$, "test.txt", 0 ' 0 for utf16-le
\\ we use Wide for utf16-le \\ without it we open using ANSI
Open "test.txt" For Wide Input Exclusive as #N
While Not Eof(#N) {
Line Input #N, ThisLine$
Print ThisLine$
}
Close #N
Clear A$
Load.Doc A$, "test.txt"
\\ print proportional text, all lines
Report A$
\\ Print one line, non proportional
\\ using paragraphs
For i=0 to Doc.par(A$)-1
Print Paragraph$(A$, i)
Next i
\\ List of current variables (in any scope, public only)
List
Maple[edit]
readinput:=proc(filename)
local line,file;
file:="";
line:=readline(filename);
while line<>0 do
line:=readline(filename);
file:=cat(file,line);
end do;
end proc;
Mathematica / Wolfram Language[edit]
stream = OpenRead["file.txt"];
While[(a = Read[stream, Word]) =!= EndOfFile];
Close[stream]
MAXScript[edit]
this function will read a file line by line.
fn ReadAFile FileName =
(
local in_file = openfile FileName
while not eof in_file do
(
--Do stuff in here--
print (readline in_file)
)
close in_file
)
Mercury[edit]
:- module input_loop.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.
:- implementation.
main(!IO) :-
io.stdin_stream(Stdin, !IO),
io.stdout_stream(Stdout, !IO),
read_and_print_lines(Stdin, Stdout, !IO).
:- pred read_and_print_lines(io.text_input_stream::in,
io.text_output_stream::in, io::di, io::uo) is det.
read_and_print_lines(InFile, OutFile, !IO) :-
io.read_line_as_string(InFile, Result, !IO),
(
Result = ok(Line),
io.write_string(OutFile, Line, !IO),
read_and_print_lines(InFile, OutFile, !IO)
;
Result = eof
;
Result = error(IOError),
Msg = io.error_message(IOError),
io.stderr_stream(Stderr, !IO),
io.write_string(Stderr, Msg, !IO),
io.set_exit_status(1, !IO)
).
mIRC Scripting Language[edit]
var %n = 1
while (%n <= $lines(input.txt)) {
write output.txt $read(input.txt,%n)
inc %n
}
ML/I[edit]
The very nature of ML/I is that its default behaviour is to copy from input to output until it reaches end of file.
Modula-2[edit]
PROCEDURE ReadName (VAR str : ARRAY OF CHAR);
VAR n : CARDINAL;
ch, endch : CHAR;
BEGIN
REPEAT
InOut.Read (ch);
Exhausted := InOut.EOF ();
IF Exhausted THEN RETURN END
UNTIL ch > ' '; (* Eliminate whitespace *)
IF ch = '[' THEN endch := ']' ELSE endch := ch END;
n := 0;
REPEAT
InOut.Read (ch);
Exhausted := InOut.EOF ();
IF Exhausted THEN RETURN END;
IF n <= HIGH (str) THEN str [n] := ch ELSE ch := endch END;
INC (n)
UNTIL ch = endch;
IF n <= HIGH (str) THEN str [n-1] := 0C END;
lastCh := ch
END ReadName;
Modula-3[edit]
MODULE Output EXPORTS Main;
IMPORT Rd, Wr, Stdio;
VAR buf: TEXT;
<*FATAL ANY*>
BEGIN
WHILE NOT Rd.EOF(Stdio.stdin) DO
buf := Rd.GetLine(Stdio.stdin);
Wr.PutText(Stdio.stdout, buf);
END;
END Output.
NetRexx[edit]
Using NetRexx ASK Special Variable[edit]
/* NetRexx */
options replace format comments java crossref symbols nobinary
-- Read from default input stream (console) until end of data
lines = ''
lines[0] = 0
lineNo = 0
loop label ioloop forever
inLine = ask
if inLine = null then leave ioloop -- stop on EOF (Try Ctrl-D on UNIX-like systems or Ctrl-Z on Windows)
lineNo = lineNo + 1
lines[0] = lineNo
lines[lineNo] = inLine
end ioloop
loop l_ = 1 to lines[0]
say l_.right(4)':' lines[l_]
end l_
return
Using Java Scanner[edit]
/* NetRexx */
options replace format comments java crossref symbols nobinary
-- Read from default input stream (console) until end of data
lines = ''
lines[0] = 0
inScanner = Scanner(System.in)
loop l_ = 1 while inScanner.hasNext()
inLine = inScanner.nextLine()
lines[0] = l_
lines[l_] = inLine
end l_
inScanner.close()
loop l_ = 1 to lines[0]
say l_.right(4)':' lines[l_]
end l_
return
Nim[edit]
var i = open("input.txt")
for line in i.lines:
discard # process line
i.close()
Oberon-2[edit]
Works with oo2c Version 2
MODULE InputLoop;
IMPORT
StdChannels,
Channel;
VAR
reader: Channel.Reader;
writer: Channel.Writer;
c: CHAR;
BEGIN
reader := StdChannels.stdin.NewReader();
writer := StdChannels.stdout.NewWriter();
reader.ReadByte(c);
WHILE reader.res = Channel.done DO
writer.WriteByte(c);
reader.ReadByte(c)
END
END InputLoop.
Execute: InputLoop < Inputloop.Mod
Output:
MODULE InputLoop;
IMPORT
StdChannels,
Channel;
VAR
reader: Channel.Reader;
writer: Channel.Writer;
c: CHAR;
BEGIN
reader := StdChannels.stdin.NewReader();
writer := StdChannels.stdout.NewWriter();
reader.ReadByte(c);
WHILE reader.res = Channel.done DO
writer.WriteByte(c);
reader.ReadByte(c)
END
END InputLoop.
Objeck[edit]
use IO;
bundle Default {
class Test {
function : Main(args : System.String[]) ~ Nil {
f := FileReader->New("in.txt");
if(f->IsOpen()) {
string := f->ReadString();
while(string->Size() > 0) {
string->PrintLine();
string := f->ReadString();
};
f->Close();
};
}
}
}
OCaml[edit]
let rec read_lines ic =
try let line = input_line ic in
line :: read_lines ic
with End_of_file ->
[]
The version above will work for small files, but it is not tail-recursive.
Below will be more scalable:
let read_line ic =
try Some (input_line ic)
with End_of_file -> None
let read_lines ic =
let rec loop acc =
match read_line ic with
| Some line -> loop (line :: acc)
| None -> List.rev acc
in
loop []
;;
Or with a higher order function:
let read_lines f ic =
let rec loop () =
try f(input_line ic); loop()
with End_of_file -> ()
in
loop()
read_lines print_endline (open_in Sys.argv.(1))
This function will apply your_function() to every line of input
let rec input_caller () =
let input = read_line() in
your_function input ;
input_caller() ;
;;
let () = input_caller()
Oforth[edit]
Reads a file line by line and write each line on standard output :
: readFile(filename) File new(filename) apply(#println) ;
Oz[edit]
%% Returns a list of lines.
%% Text: an instance of Open.text (a mixin class)
fun {ReadAll Text}
case {Text getS($)} of false then nil
[] Line then Line|{ReadAll Text}
end
end
Pascal[edit]
{ for stdio }
var
s : string ;
begin
repeat
readln(s);
until s = '';
end.
{ for a file }
var
f : text ;
s : string ;
begin
assignfile(f,'foo');
reset(f);
while not eof(f) do
readln(f,s);
closefile(f);
end;
Perl[edit]
The angle brackets operator ( <...> ) reads one line at a time from a filehandle in scalar context:
open FH, "< $filename" or die "can't open file: $!";
while (my $line = <FH>) {
chomp $line; # removes trailing newline
# process $line
}
close FH or die "can't close file: $!";
Or you can get a list of all lines when you use it in list context:
@lines = <FH>;
Or a simpler program for lines of files entered as command line arguments or standard input:
while (<>) {
# $_ contains a line
}
Invoking perl with the -p or -n option implies the above loop, executing its code once per input line, with the line stored in $_. -p will print $_ automatically at the end of each iteration, -n will not.
$ seq 5 | perl -pe '$_ = "Hello $_"'
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
$ seq 5 | perl -ne 'print "Hello $_"'
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
Perl 6[edit]
In Perl 6, filehandles etc. provide the .lines and .words methods, which return lazy lists that can thus be iterated using a for loop...
Line-by-line (line endings are automatically stripped)
• From a file:
for "filename.txt".IO.lines -> $line {
...
}
• From standard input:
for $*IN.lines -> $line {
...
}
• From a pipe:
for run(«find -iname *.txt», :out).out.lines -> $filename {
...
}
• From a pipe, with custom line separator (in this example to handle filenames containing newlines):
for run(«find -iname *.txt -print0», :nl«\0», :out).out.lines -> $filename {
...
}
Word-by-word
• From a file
for "filename.txt".IO.words -> $word {
...
}
• From standard input or a pipe, accordingly.
Phix[edit]
Translation of: Euphoria
Process text stream line-by-line:
procedure process_line_by_line(integer fn)
object line
while 1 do
line = gets(fn)
if atom(line) then
exit
end if
-- process the line
end while
end procedure
PHP[edit]
$fh = fopen($filename, 'r');
if ($fh) {
while (!feof($fh)) {
$line = rtrim(fgets($fh)); # removes trailing newline
# process $line
}
fclose($fh);
}
Or you can get an array of all the lines in the file:
$lines = file($filename);
Or you can get the entire file as a string:
$contents = file_get_contents($filename);
PicoLisp[edit]
This reads all lines in a file, and returns them as a list of lists
(in "file.txt"
(make
(until (eof)
(link (line)) ) ) )
PL/I[edit]
declare line character (200) varying;
open file (in) title ('/TEXT.DAT,type(text),recsize(200)' );
on endfile (in) stop;
do forever;
get file(in) edit (line) (L);
put skip list (line);
end;
PowerShell[edit]
Get-Content c:\file.txt |
ForEach-Object {
$_
}
or
ForEach-Object -inputobject (get-content c:\file.txt) {$_}
PureBasic[edit]
File objects can be read bytewise, characterwise (ASCII or UNICODE), floatwise, doublewise, integerwise, linewise ...
If OpenConsole()
; file based line wise
If ReadFile(0, "Text.txt")
While Eof(0) = 0
Debug ReadString(0) ; each line until eof
Wend
CloseFile(0)
EndIf
; file based byte wise
If ReadFile(1, "Text.bin")
While Eof(1) = 0
Debug ReadByte(1) ; each byte until eof
Wend
CloseFile(1)
EndIf
EndIf
Python[edit]
To create a Python 3 input loop, use Python's input() function; it raises EOFError at end of input, which the loop below uses to terminate.
while True:
    try:
        x = input("What is your age? ")
    except EOFError:
        break
    print(x)
Python file objects can be iterated like lists:
my_file = open(filename, 'r')
try:
for line in my_file:
pass # process line, includes newline
finally:
my_file.close()
One can open a new stream for read and have it automatically close when done, with a new "with" statement:
from __future__ import with_statement
with open(filename, 'r') as f:
for line in f:
pass # process line, includes newline
You can also get lines manually from a file:
line = my_file.readline() # returns a line from the file
lines = my_file.readlines() # returns a list of the rest of the lines from the file
This does not mix well with the iteration, however.
When you want to read from stdin, or (multiple) filenames are given on the command line:
import fileinput
for line in fileinput.input():
pass # process line, includes newline
The fileinput module can also do in-place file editing, track line counts, report the name of the current file being read, etc.
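A minimal sketch of that bookkeeping (the prefix format is just for illustration):
import fileinput

for line in fileinput.input():
    # Prefix each line with the current file name and its line number.
    print(f"{fileinput.filename()}:{fileinput.filelineno()}: {line}", end="")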
R[edit]
Note that read.csv and read.table provide alternatives for files with 'dataset' style contents.
lines <- readLines("file.txt")
Racket[edit]
The following prints input lines from standard input to standard output:
#lang racket
(copy-port (current-input-port) (current-output-port))
REBOL[edit]
rebol [
Title: "Basic Input Loop"
URL: http://rosettacode.org/wiki/Basic_input_loop
]
; Slurp the whole file in:
x: read %file.txt
; Bring the file in by lines:
x: read/lines %file.txt
; Read in first 10 lines:
x: read/lines/part %file.txt 10
; Read data a line at a time:
f: open/lines %file.txt
while [not tail? f][
print f/1
f: next f ; Advance to next line.
]
close f
REXX[edit]
version 1a[edit]
Works with: oorexx and Regina
Reading line by line from the standard input using linein and lines did not work.
do while stream(stdin, "State") <> "NOTREADY"
call charout ,charin(stdin)
end
version 1b[edit]
Works with: oorexx and Regina
Apparently only lines() does not work
Do Until input=''
input=linein(stdin)
Call lineout ,input
End
version 2[edit]
Works with: ARexx
/* -- AREXX -- */
do until eof(stdin)
l = readln(stdin)
say l
end
version 3[edit]
Note that if REXX is reading from the default (console) input stream, there is no well-defined e-o-f (end of file), so to speak.
Therefore, the following two REXX programs use the presence of a null line to indicate e-o-f.
/*REXX program reads from the (console) default input stream until null*/
do until _==''
parse pull _
end /*until*/ /*stick a fork in it, we're done.*/
version 4[edit]
/*REXX program reads from the (console) default input stream until null*/
do until _==''
_= linein()
end /*until*/ /*stick a fork in it, we're done.*/
Ring[edit]
fp = fopen("C:\Ring\ReadMe.txt","r")
r = fgetc(fp)
while isstring(r)
if r = char(10) see nl
else see r ok
r = fgetc(fp)
end
fclose(fp)
Ruby[edit]
Ruby input streams are IO objects. One can use IO#each or IO#each_line to iterate lines from a stream.
stream = $stdin
stream.each do |line|
# process line
end
IO objects are also Enumerable (like Array or Range), and have methods like Enumerable#map, which call IO#each to loop through lines from a stream.
# Create an array of lengths of every line.
ary = stream.map {|line| line.chomp.length}
To open a new stream for reading, see Read a file line by line#Ruby.
Run BASIC[edit]
open "\testFile.txt" for input as #f
while not(eof(#f))
line input #f, a$
print a$
wend
close #f
Scala[edit]
Library: Scala
Works with: Scala version 2.10.3
scala.io.Source.fromFile("input.txt").getLines().foreach {
line => ... }
Rust[edit]
use std::io::{self, BufReader, Read, BufRead};
use std::fs::File;
fn main() {
print_by_line(io::stdin())
.expect("Could not read from stdin");
File::open("/etc/fstab")
.and_then(print_by_line)
.expect("Could not read from file");
}
fn print_by_line<T: Read>(reader: T) -> io::Result<()> {
let buffer = BufReader::new(reader);
for line in buffer.lines() {
println!("{}", line?)
}
Ok(())
}
Slate[edit]
(File newNamed: 'README') reader sessionDo: [| :input | input lines do: [| :line | inform: line]].
sed[edit]
Sed by default loops over each line and executes its given script on it:
$ seq 5 | sed ''
1
2
3
4
5
The automatic printing can be suppressed with -n, and performed manually with p:
$ seq 5 | sed -n p
1
2
3
4
5
Seed7[edit]
$ include "seed7_05.s7i";
const proc: main is func
local
var string: line is "";
begin
while hasNext(IN) do
readln(line);
writeln("LINE: " <& line);
end while;
end func;
Sidef[edit]
To read from the standard input, you can use STDIN as your fh.
var file = File(__FILE__)
file.open_r(\var fh, \var err) || die "#{file}: #{err}"
fh.each { |line| # iterates the lines of the fh
line.each_word { |word| # iterates the words of the line
say word
}
}
Smalltalk[edit]
|f|
f := FileStream open: 'afile.txt' mode: FileStream read.
[ f atEnd ] whileFalse: [ (f nextLine) displayNl ] .
SNOBOL4[edit]
loop output = input :s(loop)
end
Sparkling[edit]
var line;
while (line = getline()) != nil {
print(line);
}
Tcl[edit]
set fh [open $filename]
while {[gets $fh line] != -1} {
# process $line
}
close $fh
For “small” files, it is often more common to do this:
set fh [open $filename]
set data [read $fh]
close $fh
foreach line [split $data \n] {
# process line
}
TUSCRIPT[edit]
$$ MODE TUSCRIPT
file="a.txt"
ERROR/STOP OPEN (file,READ,-std-)
ACCESS source: READ/RECORDS/UTF8 $file s,text
LOOP
READ/NEXT/EXIT source
PRINT text
ENDLOOP
ENDACCESS source
UnixPipes[edit]
The pipe yes 'A B C D ' produces an endless sequence of the same line.
Read by lines:
yes 'A B C D ' | while read x ; do echo -$x- ; done
Read by words:
yes 'A B C D ' | while read -d\ a ; do echo -$a- ; done
UNIX Shell[edit]
When there is something to do with the input, here is a loop:
while read line ; do
# examine or do something to the text in the "line" variable
echo "$line"
done
The following echoes standard input to standard output line-by-line until the end of the stream.
cat < /dev/stdin > /dev/stdout
Since cat defaults to reading from standard input and writing to standard output, this can be further simplified to the following.
cat
Ursa[edit]
decl file f
f.open "filename.txt"
while (f.hasnext)
out (in string f) endl console
end while
Vala[edit]
int main() {
string? s;
while((s = stdin.read_line()) != null) {
stdout.printf("%s\n", s);
}
return 0;
}
VBA[edit]
Public Sub test()
Dim filesystem As Object, stream As Object, line As String
Set filesystem = CreateObject("Scripting.FileSystemObject")
Set stream = filesystem.OpenTextFile("D:\test.txt")
Do While stream.AtEndOfStream <> True
line = stream.ReadLine
Debug.Print line
Loop
stream.Close
End Sub
VBScript[edit]
filepath = "SPECIFY PATH TO TEXT FILE HERE"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objInFile = objFSO.OpenTextFile(filepath,1,False,0)
Do Until objInFile.AtEndOfStream
line = objInFile.ReadLine
WScript.StdOut.WriteLine line
Loop
objInFile.Close
Set objFSO = Nothing
Visual Basic .NET[edit]
This reads a stream line by line, outputting each line to the screen.
Sub Consume(ByVal stream As IO.StreamReader)
Dim line = stream.ReadLine
Do Until line Is Nothing
Console.WriteLine(line)
line = stream.ReadLine
Loop
End Sub
x86 Assembly[edit]
GAS, 64 bit (Linux): Compiled with gcc -nostdlib. Memory maps the file and outputs one line at a time. Try ./a.out file, ./a.out < file, or ./a.out <<< "Heredoc". It's a little like cat, but less functional.
#define SYS_WRITE $1
#define SYS_OPEN $2
#define SYS_CLOSE $3
#define SYS_FSTAT $5
#define SYS_MMAP $9
#define SYS_MUNMAP $11
#define SYS_EXIT $60
// From experiments:
#define FSIZEOFF 48
#define STATSIZE 144
// From Linux source:
#define RDONLY $00
#define PROT_READ $0x1
#define MAP_PRIVATE $0x02
#define STDIN $0
#define STDOUT $1
.global _start
.text
/* Details: */
/*
Remember: %rax(%rdi, %rsi, %rdx, %r10, %r8, %r9)
- Open a file (get its fd)
- int fd = open("filename", RDONLY)
- Get its filesize:
- fstat(fd, statstruct). 0 if ok. fsize at statstruct+48
- Then memory map it.
- void* vmemptr = mmap(vmemptr, fsize, PROT_READ, MAP_PRIVATE, fd, 0)
- Scan for newlines, print line.
- Keep going until done. Details at 11.
- Unmap memory
- munmap(vmemptr, filesize). 0 if ok.
- Exit
*/
.macro ERRCHECK code
cmpq $\code, %rax
je fs_error
.endm
/* Local stack notes:
0: int fd
4: void* vmemptr
12: void* head
20: void* lookahead
28: void* end
*/
_start:
// Open:
movq RDONLY, %rsi
// Filename ptr is on stack currently as argv[1]:
cmpq $1, (%rsp) // if argc is 1, default to stdin
jnz open_file
subq $36, %rsp // local stack
movl STDIN, (%rsp)
jmp fstat
open_file:
movq 16(%rsp), %rdi // argc(8), argv0(8) => rsp+16. filename
movq SYS_OPEN, %rax
syscall
ERRCHECK -1
subq $36, %rsp // local stack
movl %eax, (%rsp) // int fd = open(argv[1], RDONLY)
// fstat to get filesize
fstat:
movq $statstruct, %rsi
movl (%rsp), %edi // fd
movq SYS_FSTAT, %rax
syscall // fstat(fd, statstruct)
ERRCHECK -1
// mmap - don't forget to munmap.
mmap:
movq $0, %r9 // offset
movq (%rsp), %r8 // fd
movq MAP_PRIVATE, %r10
movq PROT_READ, %rdx
movq filesize, %rsi
movq (%rsp), %rdi // vmemptr
movq SYS_MMAP, %rax
syscall
ERRCHECK -1
movq %rax, 4(%rsp) // void* vmemptr = mmap(vmemptr, fsize, PROT_READ, MAP_PRIVATE, fd, 0)
/* Print lines */
movq %rax, 12(%rsp) // head = vmemptr
addq filesize, %rax
decq %rax
movq %rax, 28(%rsp) // end = vmemptr+filesize-1
scan_outer:
movq 12(%rsp), %rax
cmpq 28(%rsp), %rax
jge cleanup // if head >= end, done
movq %rax, %rbx // Using rbx as lookahead
scan_inner:
cmpq 28(%rsp), %rbx
jge writeline // if lookahead >= end, write the line.
cmpb $'\n, (%rbx)
jz writeline // if '\n'==*lookahead, write the line
incq %rbx
jmp scan_inner
writeline:
// write:
incq %rbx
movq %rbx, %rdx
subq 12(%rsp), %rdx // rdx <- lookahead-head
movq 12(%rsp), %rsi
movq STDOUT, %rdi
movq SYS_WRITE, %rax
syscall // write(stdout, head, lookahead-head)
safety:
movq %rbx, 12(%rsp) // head = lookahead.
jmp scan_outer
cleanup:
// munmap
movq filesize, %rsi
movq 4(%rsp), %rdi
movq SYS_MUNMAP, %rax
syscall // munmap(vmemptr, filesize)
cmpq $-1, %rax
je fs_error
// close
movl (%rsp), %edi
movq SYS_CLOSE, %rax
syscall // close(fd)
ERRCHECK -1
exit:
movq SYS_EXIT, %rax
xorq %rdi, %rdi // The exit code.
syscall
fs_error:
movq SYS_EXIT, %rax
movq $-1, %rdi
syscall // exit(-1)
.data
statstruct: // This struct is 144 bytes. Only want size (+48)
.zero FSIZEOFF
filesize: // 8 bytes.
.quad 0
.zero STATSIZE-FSIZEOFF+8
zkl[edit]
Many objects support "stream of" concepts such as lines, characters, chunks. Some are File, Data (bit bucket), List, Console. Word by word isn't explicitly supported. If an object is stream-able, it supports methods like foreach, pump, apply, reduce, etc.
foreach line in (File("foo.txt")){...}
List(1,2,3).readln() // here, a "line" is a list element
Utils.Helpers.zipWith(False, // enumerate a file
fcn(n,line){"%3d: %s".fmt(n,line).print()},[1..],File("cmp.zkl"))
Convert Cylindrical to Rectangular Coordinates - Calculator
Cylindrical and Rectangular Coordinates
Using trigonometric ratios, it can be shown that the cylindrical coordinates \( (r,\theta,z) \) and rectangular coordinates \( (x,y,z) \) in Fig.1 are related as follows:
\( x = r \cos \theta \) , \( y = r \sin \theta \) , \( z = z \) (I)
\( r = \sqrt {x^2 + y^2} \) , \( \tan \theta = \dfrac{y}{x} \) , \( z = z \) (II)
with \( 0 \le \theta \lt 2\pi \)
Fig.1 - Cylindrical and Rectangular coordinates
The calculator calculates the rectangular coordinates \( x \) , \( y \) and \( z \) given the cylindrical coordinates \( r \) , \( \theta \) and \( z \) using the formulas in I above.
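For example, \( r = 2 \), \( \theta = \pi/3 \) and \( z = 1 \) give \( x = 2 \cos(\pi/3) = 1 \), \( y = 2 \sin(\pi/3) = \sqrt{3} \approx 1.732 \) and \( z = 1 \).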
Use Calculator to Convert Cylindrical to Rectangular Coordinates
1 - Enter \( r \), \( \theta \) and \( z \) and press the button "Convert". You may also change the number of decimal places as needed; it has to be a positive integer.
Angle \( \theta \) may be entered in radians or degrees.
|
__label__pos
| 0.999994 |
Triangle congruence worksheet
Given below are the Class 9 Maths Triangle congruence worksheet
(a) Concepts questions
(b) Calculation problems
(c) Multiple choice questions
(d) Long answer questions
(e) Fill in the blanks
(f)Subjective Questions
Subjective Questions
Question 1
Let ABC and PQR are two triangles.
In which of the following cases will the triangles be congruent? For each congruent case, write down the congruence statement.
1. $AB =PQ \; , \; BC=QR \; and \; AC=PR$
2. $BC=QR \; , \; \angle B= \angle Q \; , \;\angle C=\angle R$
3. $AB=PQ, BC=QR, \angle C= \angle P$
4. $AB=PQ, BC=QR, \angle B= \angle Q$
5. $AB=PQ \; , \; \angle A= \angle P \;,\; \angle C= \angle R$
6. $AB =PQ \; , \; BC=QR$
7. $ \angle B= \angle Q \; , \; \angle C= \angle R$
8. $ \angle A= \angle P, \angle B=\angle Q ,\angle C= \angle R$
9. $AB=PR , \angle A= \angle P, \angle B=\angle R$
True or False statement
Question 2
True or False statement
a. We cannot construct a triangle with sides 9, 5, 4 cm
b. In a right-angled triangle, the hypotenuse is the longest side
c. The centroid is the point of intersection of the medians of a triangle
d. A triangle can have two obtuse angles
e. The orthocenter is the point of intersection of the altitudes
f. If $\Delta ABC \cong \Delta PQR$ then AC=PQ
g. In a triangle ABC, if AB > BC then $\angle C > \angle A$
h. The midpoint of the hypotenuse of a right-angled triangle is its circumcenter
Multiple choice Questions
Question 4
In a triangle ABC, $\angle A=60^0$, $\angle B=40^0$; which side is the longest?
a. AB
b. BC
c. AC
Question 5
In triangle ABC, AB=AC and D is a point inside the triangle such that BD=DC, as shown in the figure.
Which of the following is true
a. $\Delta ABD \cong \Delta ACD$
b. $\angle ABD= \angle ACD$
c. $\angle DAC= \angle DAB$
d. All the above
Question 6
An exterior angle of a triangle is $110^0$, and one of the opposite interior angles is $50^0$.
What are the other two angles of the triangle?
a. $60^0,70^0$
b. $55^0,55^0$
c. $70^0,50^0$
d. None of the above
Question 7
AD is the median of the triangle. Which of the following is true?
a. $AB+BC+AC > 2AD$
b. $AB +BC > AC$
c. $AB+BC+AC> AD$
d. none of these
Question 8
In the figure above, AB || CD and O is the midpoint of BC. Which of the following is true?
a. $ \Delta AOB \cong \Delta DOC$
b. O is the mid point of AD
c. $AB=CD$
d. All the above
Question 9
In an isosceles triangle ΔABC with AB = AC, D is the midpoint of BC.
Which of the following is true?
a. Orthocenter lies on line AD
b. AD is the perpendicular bisector BC
c. Centroid lies on the line AD
d. AD is the bisector of angle A
Question 10
PQR is a right-angled triangle with $\angle P=90^0$ and PQ=PR. What are the values of $\angle Q$ and $\angle R$?
a. $45^0,45^0$
b. $30^0,60^0$
c. $20^0,60^0$
Question 11
In the quadrilateral ACBD above, we have AC=AD and AB bisects ∠A.
Which of the following is true?
a. $ \Delta ABC \cong \Delta ABD$
b. BC=BD
c. $ \angle C= \angle D$
d. None of the above
Match the columns
Column A | Column B
Point of intersection of the altitudes | Circumcenter
Point of intersection of the medians | In-centre
Point of intersection of the angle bisectors | Orthocenter
Point of intersection of the perpendicular bisectors of the sides | Centroid
What is Server Virtualization?
What is Server Virtualization, and what is Server Virtualized Cloud Hosting?
Before setting up virtualization (a virtualization host is known as a VPS node), you would install an operating system directly on top of the hardware, tying it to that server. This means that each server has to have its own separate hardware.
Dedicated servers typically use only around 15% of their resources during normal operation. Running your application on bare metal servers can have advantages over virtualization, depending on your specific needs, but in many cases a dedicated server is a waste of resources. Software or hardware failures often require hands-on repair on each affected server.
Server virtualization was introduced as a solution to some of the issues mentioned above. Virtualization software allows you to "break up" your physical server into multiple virtual ones. When you create virtual servers, you can make full use of your physical resources without investing in more hardware and buying lots of dedicated servers.
How Does Server Virtualization Work?
To create virtual servers, you first need to set up virtualization software. This essential piece of software is called a hypervisor. Its primary role is to create a virtualization layer that separates CPU / Processors, RAM, and other physical resources from the virtual instances.
Once you install the hypervisor on your host machine, you can use that virtualization software to emulate the physical resources and create a new virtual server on top of it.
There are different types of server virtualization. The distinction between them is mainly based on the level of isolation they provide, which is also related to how much hardware resources they emulate.
Types of Server Virtualization
There are three (3) approaches to server virtualization based on the isolation they provide:
Full virtualization or virtual machine model
Paravirtual machine model
Virtualization at the OS level
Types of Hypervisors
Two types of hypervisors are used to create virtual environments:
Type 1 hypervisors (native/bare-metal hypervisors)
Type 2 hypervisors (hosted hypervisors)
Examples of type 2 hypervisors include VMware Workstation, Oracle VM VirtualBox, and Microsoft Virtual PC. KVM and Red Hat Enterprise Virtualization (which is built on KVM) are usually classed as type 1 hypervisors, since they run within the host kernel.
History of Server Virtualization
While server virtualization has exploded in popularity since the global pandemic, virtualization has been in development for over 50 years. During the 1960s, IBM pioneered the first virtualization of system memory; this was the precursor to virtual hardware. In the 1970s, IBM virtualized a proprietary operating system for the first time, called VM/370. This OS-level virtualization had little use outside of mainframe computing, but it eventually evolved into the product that became z/VM, the first commercially sold virtual platform for servers.
Today, server virtualization dominates the IT industry, with many companies moving towards fully virtualized cloud-managed IT ecosystems. The popularity of virtualization increased significantly in the late 1990s with VMware's release of VMware Workstation. The product enabled the virtualization of any x86/x64 architecture and brought virtualization mainstream. It was now possible to run Linux, Windows, and macOS on the same host hardware, managed using control panels such as SolusVM and Virtualizor, both relatively easy to install and set up. This ground-breaking technology has played a crucial role in shaping IT infrastructure services for the last 20 years.
How does server virtualization work?
In all fully virtualized server platforms, there must be a host or vendor hardware available. This hardware, usually a server, requires virtualization software called a hypervisor. The hypervisor presents generic virtualized hardware to every operating system that is installed onto the hypervisor.
This generic hardware includes all components required by the operating systems to start, including hard disks, SCSI drivers, network drivers, CPU, and memory allocations. The virtual machine can only interact with the generic hardware and is independent of any other VM; the hypervisor manages the host resources and allocates them to each VM. The administrator can then set the hypervisor to allocate different resources to each virtual machine depending on its requirements.
Today, virtualization technology can deploy almost any operating system; Linux, Windows, Aix are widely virtualized. More recently, hardware manufacturers have started offering virtual appliances of their hardware devices. An excellent example of this is Network Load Balancers, which typically would have been a physical device in a rack, but today, they are often virtualized. Host hardware has become so powerful that offering a virtualized dedicated appliance is more commonplace.
Server virtualization is a method by which software is used to partition a single physical server into what appear to be multiple virtual machines. To the user, the virtual environment seems as if it it’s own standalone piece of hardware with set allocated spaces with KVM virtualization and with lesser used OpenVZ7 virtualization you can allocate more than you actually have, hence cheap OVZ VPS servers, due to openvz7 technology that shared all resources like shared hosting. Very desirable for personal use would be using OpenVZ7 though for production KVM VPS hosting would be the go to secure and higher end option.
In data centers and IT departments, this technology is utilized to use server resources better. Specific applications may require isolated environments or a different OS to function. Instead of having multiple servers to manage each function, server virtualization will make it easy to create or deploy separate virtual machines, each capable of running in its own unique software environment. Generally, a physical server only uses 20 percent of its capacity. By hosting several virtual machines on one physical server, it’s possible to improve hardware utilization upwards of 80 percent if configured correctly.
For IT managers and data centers, Virtualization also means reducing hardware costs upfront and reduced TCO in the long term. A smaller, more efficient data center means fewer costs in heating and cooling and less equipment to maintain.
With web hosting companies, server virtualization for hosting customers is referred to as virtual private servers (VPS) or virtual dedicated servers (VDS). As more and more enterprises move to “the cloud,” server virtualization solutions are also commonly referred to as “cloud hosting.” Cloud hosting is an excellent place to host content. It comes in many different technologies, forms, colors, and shapes, all depending on your needs. Hosting clients receive many of the same benefits as data centers and IT departments, so more and more are making that move to the cloud.
Shared hosting is the most common form of web hosting, where several different websites are hosted on one single physical server. The end-user has little choice over what software is pre-installed and little control over whatever websites are hosted on the server.
At the other end is dedicated hosting, where a client rents one or more servers for their exclusive use. It allows the most resources and control for a hosting client, but it is a far more expensive option than shared hosting.
A virtual private server can function as a middle ground. It offers the benefits of dedicated hosting without the expense, while carrying some of the limitations of both shared and dedicated hosting.
With a VPS (or a VPSie), the virtual machine appears for all intents and purposes as a physically separate machine. The user can control what OS and software the server runs, regardless of the software environment running on other partitions within the server. Though some essential software runs on the physical server to maintain partitions, firewalls, and additional security, the end-user is essentially unaware of anything outside of the virtual machine they are interacting with.
Though the virtual server shares physical space with other virtual machines, server virtualization is safer than shared hosting because it functions as a physically separate computer. With shared hosting, there’s the chance that your performance will be adversely affected by other websites on the shared server or that other users could gain access to your data through security breaches.
VPS is a more expensive hosting option than shared hosting, but it’s often a fraction of dedicated hosting costs. It allows for the customization of owning or renting a server without the cost of maintenance or ownership.
VPS may be a good option for websites of businesses with small to medium levels of traffic. While shared hosting may be cheaper, for some applications, businesses need a greater level of control. However, for websites with a lot of traffic, going with a dedicated server may be the only option for meeting a website or application’s needs for computing power and bandwidth.
Hardware manufacturers and server virtualization
Server virtualization is not used only by software manufacturers. IBM manufactures most of its own server hardware and plays a significant role in server virtualization. IBM platforms System p, System i, and System z use a para-virtual hypervisor.
All the guest virtual machines are essentially aware of each other and their resource requirements via the host. The host hardware resources are sliced up and allocated to each virtual machine (or logical partition). IBM z/VM pioneered this para-virtualized technique, and nearly all IBM mainframe solutions use this method of virtualization. For example, IBM System p uses a pooled virtualized hardware layer, managed by an HMC, to distribute resources to logical partitions. Each partition is aware of the other partitions' requirements, and resources are shared to ensure each server gets at least the minimum hardware defined for it.
Benefits of server virtualization
Arguably the key benefits of virtualization to an organization are its flexibility and cost savings potential. Server Virtualization is much more efficient on host hardware than individual physical servers. As a result, companies need to procure significantly less hardware for new infrastructure, and older, less economical hardware can be migrated to new efficient hardware. This all benefits the environment, as data centers will require less power and cooling. The consolidation of hardware also reduces the data center footprint, reducing the costs associated with managed service providers.
Functionality is also a key benefit of virtualization. Essential functions such as the ability to roll back changes made to systems using a snapshot eliminated the previous requirement to rebuild a server from scratch. Other key server management features, such as vMotion, Cloning, Fault tolerance, DRS, and High Availability, changed how server administrators could increase infrastructure uptime and offer better service level agreements to customers. New virtual machines can be deployed near-instantly using templates. More recently, it has become possible to build an entirely new virtual infrastructure from scripts, improving server provisioning drastically. Tools like Terraform can be used to create the infrastructure. Other configuration toolsets such as Ansible can be used to configure the newly deployed infrastructure precisely and uniformly to your requirements.
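As a heavily hedged sketch of that declarative idea, here is what a minimal definition might look like with the community libvirt Terraform provider; the resource type and attribute names are assumptions that should be checked against the provider's documentation:
# Declarative definition of a small VM; all values are illustrative.
resource "libvirt_domain" "web" {
  name   = "web-01"
  memory = 2048   # MiB
  vcpu   = 2
}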
Disaster recovery (DR) has significantly benefitted from server virtualization as well. No longer do you need to restore from tape to reprovisioned hardware; instead, the entire virtual infrastructure can be replicated between sites. Using tools such as VMware Site Recovery Manager, the DR process can be automated. Products such as CloudEndure can replicate servers direct to the cloud and copy the entire infrastructure in a staging area, which can be activated when a DR scenario is invoked.
Virtual Private Servers have long been recognized as a way for businesses to reduce IT costs and increase operational efficiency. By isolating applications and programs within one virtual server that’s set aside solely for you, VPS provides high levels of privacy, security, and control. But while VPS delivers cost savings on hardware and offers the flexibility to run multiple operating systems or sets of programs on individual servers simultaneously, it doesn’t scale well.
In contrast, consuming cloud computing is like buying a much larger ecosystem that allows for scaling up and out. A cloud environment allows you to more easily add more resources to your server, such as RAM, processors, or even cloned copies of your private cloud server to back up your data.
VPS and cloud computing are not mutually exclusive options. You can host your VPS in a virtualized environment. This allows you to convert one physical server into multiple virtual machines, each of which acts like a unique physical device for running both IT resources and web applications in a flexible, instantly scalable, and cost-efficient manner.
Large orphans
Prelude
As you probably know, there are two ways to store large objects in PostgreSQL: inline as bytea values, or as large objects referenced by OID.
When describing the pros and cons of these approaches, I forgot to mention two issues of no small importance about OID storage:
• It may save your space if several rows (or tables) contain the same binary object;
• It may waste your space if no table needs some binary object any more, but one recently did.
From PostgreSQL help:
Large objects are treated as objects in their own right; a table entry can reference a large object by OID, but there can be multiple table entries referencing the same large object OID, so the system doesn’t delete the large object just because you change or remove one such entry.
Now this is fine for PostgreSQL-specific applications, but standard code using JDBC or ODBC won’t delete the objects, resulting in orphan objects — objects that are not referenced by anything, and simply occupy disk space.
Interlude
For this purpose there is an additional supplied module – “lo”:
The lo module allows fixing this by attaching a trigger to tables that contain LO reference columns. The trigger essentially just does a lo_unlink whenever you delete or modify a value referencing a large object. When you use this trigger, you are assuming that there is only one database reference to any large object that is referenced in a trigger-controlled column!
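As a minimal sketch of what the module's documentation describes (assuming the files table defined later in this post, with its data column holding the LO reference; on modern versions the module is loaded with CREATE EXTENSION):

-- Load the lo module, then attach its trigger to the column:
CREATE EXTENSION lo;
CREATE TRIGGER t_files
  BEFORE UPDATE OR DELETE ON files
  FOR EACH ROW EXECUTE PROCEDURE lo_manage(data);
-- Now deleting a row, or replacing files.data with a new OID,
-- automatically lo_unlinks the old large object.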
Interlude II
However, if you think the multi-reference ability is an excellent feature, you will find the special vacuumlo utility very useful, just as I do.
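For example, a dry run followed by an actual cleanup might look like this (the database name is a placeholder):

# Show which orphaned large objects would be removed, without touching them:
vacuumlo -n -v mydb
# Actually remove them:
vacuumlo -v mydb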
Solution
I found one more way to avoid giving birth to LO orphans.
Gee, I'm a genius!🙂
Assumed:
CREATE TABLE files
(
codfile integer NOT NULL,
filename character varying(255),
data oid,
CONSTRAINT pkfiles PRIMARY KEY (codfile)
);
To avoid orphans:
CREATE OR REPLACE RULE upd_oid_rule AS ON UPDATE TO files
WHERE OLD.data IS DISTINCT FROM NEW.data AND OLD.data IS NOT NULL
DO ALSO
SELECT
CASE WHEN (count(files.data) <= 1) AND
EXISTS(SELECT 1
FROM pg_catalog.pg_largeobject
WHERE loid = OLD.data)
THEN lo_unlink(OLD.data)
END
FROM files
WHERE files.data = OLD.data
GROUP BY OLD.data ;
For those who are not so familiar with moonspeak, here is my translation:
The great and almighty server of the Postgres, please, hear my prayer. When one updates the files table, and if the new data column value differs from the old one, do not forget to check whether the old data value is still used anywhere; if not, then delete the BLOB referenced by this old value (if this BLOB exists, of course). Thank you.
And one more rule:
CREATE OR REPLACE RULE del_oid_rule AS ON DELETE TO files
WHERE OLD.data IS NOT NULL
DO ALSO
SELECT
CASE WHEN (count(files.data) <= 1) AND
EXISTS(SELECT 1
FROM pg_catalog.pg_largeobject
WHERE loid = OLD.data)
THEN lo_unlink(OLD.data)
END
FROM files
WHERE files.data = OLD.data
GROUP BY OLD.data;
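A quick way to convince yourself the rules work is a session like the following sketch (server-side lo_import requires superuser rights; the paths and IDs are just examples):

INSERT INTO files VALUES (1, 'a.bin', lo_import('/tmp/a.bin'));
-- Replacing the BLOB: the UPDATE rule unlinks the old one if nothing else references it.
UPDATE files SET data = lo_import('/tmp/b.bin') WHERE codfile = 1;
-- Deleting the row: the DELETE rule unlinks the remaining BLOB.
DELETE FROM files WHERE codfile = 1;
-- No orphans should be left behind:
SELECT count(*) FROM pg_catalog.pg_largeobject;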
Conclusion
This approach rules, of course.🙂 However, it works only under the assumption that BLOBs may be duplicated within just one column of one table. Later I'll create a stored routine with extended functionality. Stay tuned.
PS
Kids, don’t do drugs!🙂
4 thoughts on "Large orphans"
1. Since the "data" column is nullable, the condition should be "OLD.data IS DISTINCT FROM NEW.data" instead of "… …", otherwise it won't pick up updates to a null value.
A very simple color matching algorithm that works
In an article on Medium, Mikhail Anikin told how he and his colleagues choose colors to highlight information blocks and object states, to manage attention, and to create visual hierarchy.
Various Yandex services work with color to solve these interface tasks: highlighting information blocks and object states, managing attention, and creating visual hierarchy.
Examples of using color matching algorithms in Yandex.Music and in search
Color matching can be complex calculations depending on the application. But it happens that getting the desired result is much easier. This is the story.
In Yandex Launcher, we have application promo cards: rating, description and the “Install” button. These are contextual recommendations — open on top of the application list or in a folder on the desktop.
Initial implementation
The color for the background of the card was selected automatically based on the icon; the button was translucent white. The algorithm tried to determine the main color of the icon by sorting pixels by hue. This approach did not always give a beautiful result; it had drawbacks:
● incorrect color definition,
● “dirty” colors due to averaging,
● dull buttons, boring cards.
Examples of problem cards
What I really wanted
The card was supposed to be a real continuation of the icon, with rich and vibrant colors. I wanted to create the feeling that the card was carefully made by hand, not something casually generated automatically.
I always want to make it more beautiful, but the resources are not limitless. It was not planned to allocate a team to write a miracle library for defining colors. So, the task:
With minimal effort, improve the color-detection algorithm and figure out how to paint the card beautifully, without inventing a spaceship.
On Saturday, I blew the dust off the code editor, uncovered HTML5 and Canvas, and began to invent. On Monday I came to the team with a proposal.
New color detection algorithm
Step 1.
We take the icon and throw out the white, black, and transparent pixels.
Original Icon → Filtered Pixel Square
Step 2.
Reduce the resulting image to a size of 2 × 2 pixels (with anti-aliasing disabled). As a result, we get four colors of the icon. In the case of a uniform initial picture, they can be repeated, no big deal.
Result after the second step: Original Icon → Colors
We have disabled anti-aliasing so that the colors do not mix, do not become “dirty”.
In fact, it turns out like this:
the square is divided into four parts; we take the middle pixel from the top row of each quarter.
In the implementation, everything is simple: we don't even need a real downsample of the image or, in general, any work with graphics.
We take the pixels with the desired position from the one-dimensional array obtained after the first step.
Step 3.
Almost everything is ready. Just a little bit is left: get the resulting colors, translate them into HSL, sort by lightness (L). We paint the card.
Light scheme:
● background – the lightest color,
● button – closest to light,
● the text is the darkest.
Dark scheme (if 2 or more colors are dark):
● background – the darkest color,
● button – closest to dark,
● the text is the lightest.
Applying colors, we check the contrast: Lightness difference between the background and the button ≥ 20; between the background and the text ≥ 60. If it does not match, we correct it.
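A compact sketch of steps 2 and 3 might look like the following. This is illustrative, not the original Yandex code; it assumes pixels is the filtered one-dimensional RGBA array from step 1 and rgbToHsl is a helper converting r, g, b into { h, s, l } with l in 0–100.

// Pick the four colors and build a light or dark scheme from them.
function cardColors(pixels, width, height) {
  // Step 2: instead of a real downsample, take the middle pixel of the top
  // row of each quarter, so colors never mix and turn "dirty".
  var points = [
    [Math.floor(width / 4), 0],
    [Math.floor(3 * width / 4), 0],
    [Math.floor(width / 4), Math.floor(height / 2)],
    [Math.floor(3 * width / 4), Math.floor(height / 2)]
  ];
  var colors = points.map(function (p) {
    var i = (p[1] * width + p[0]) * 4;
    return rgbToHsl(pixels[i], pixels[i + 1], pixels[i + 2]);
  });
  // Step 3: sort by lightness, darkest first.
  colors.sort(function (a, b) { return a.l - b.l; });
  var darkCount = colors.filter(function (c) { return c.l < 50; }).length;
  if (darkCount >= 2) {
    // Dark scheme: darkest background, closest-to-dark button, lightest text.
    return { background: colors[0], button: colors[1], text: colors[3] };
  }
  // Light scheme: lightest background, closest-to-light button, darkest text.
  return { background: colors[3], button: colors[2], text: colors[0] };
}

The contrast check described above (lightness difference ≥ 20 for the button, ≥ 60 for the text) would then be applied to the returned colors.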
And a few more cards for example:
Result
We have got colorful cards, from the real colors of the icon, without “dirty” impurities. By using multiple colors, the card looks much more lively. It is especially pleasant that with a uniform background of the icon, the card becomes its direct continuation: the border between them is not noticeable at all.
And the most important thing:
Two days after the new algorithm was proposed, the first implementation was already available in the dev build. We tested it within the team, tuned the filter thresholds of the first step, and provided for special cases:
● an icon of one color – we make the background a little darker so that it does not merge,
● icon with background: we look at the pixels at the edges; if they are all the same, we use that color as the card background.
The modified algorithm was included in the next release.
Special thanks to Dima Ovcharov, head of the Yandex Launcher development team, for his interest, desire and patience.
Finally, more examples
Source: medium
Adware
What are the most important facts that people should know about Ngwehear.fun adware?
According to many users, Ngwehear.fun appeared on their computers without them downloading it. Typically, its ads show up when people visit popular websites such as Walmart, eBay, YouTube, Facebook, and similar ones. Because of this, the promotional information related to the program presented at search-Ngwehear.fun.com should be disregarded or handled with care. In addition, the Ngwehear.fun virus may also show you ads reporting missing updates. All those offers and deals come from third parties and tempt you to purchase particular goods at better prices.
Download Removal Tool to remove Ngwehear.fun
* WiperSoft scanner, available at this website, only works as a tool for virus detection. To have WiperSoft in its full capacity, to use removal functionality, it is necessary to acquire its full version. In case you want to uninstall WiperSoft, click here.
There is no one exact program that makes use of Ngwehear.fun. Therefore, if you want to protect your computer against similar threats, you should always be very attentive when installing new programs because this can save you from a lot of trouble. If it happens that you notice various ads on your screen, we suggest that you do not click on them because you might be taken to corrupted websites. In case you have recently discovered that your computer is infected with this malware but you don’t know how to deal with this situation, you have come to the right place. Make sure to avoid third-party download websites since they are known to provide bundled installers – the primary source of dubious programs like this adware.
How does this ad-supported software spread?
As you can see, the pop-up window claims that the executable file mpctray.exe has crashed. This is the executable of MPC Cleaner, regarded as a PUP (potentially unwanted application), and the crash has supposedly resulted in some serious security issues requiring immediate action. Nevertheless, few users are aware that this adware may also collect personally non-identifiable information about them and use it for marketing and other purposes. This is how such programs earn money for their developers. The removal process might be complicated, thus we have developed a step-by-step guide. Security experts advise being careful when installing any programs on your computer; if you are not familiar with them, do not let them infiltrate your machine.
We believe that you will be able to erase Ngwehear.fun via Control Panel using the guide below. As adware of this kind usually arrives in bundled installers, computer users might install it unintentionally. The best thing about an automatic tool is that it will protect your system from future threats as well. Don't forget to upgrade your antispyware to its newest version to ensure detection and removal of the most recent unwanted programs. We have developed this guide to assist you through the complicated process of virus removal. Usually such offers are fake and can lead to unwanted consequences. Track every installer step and try to find unfamiliar or unwanted attachments.
Adware can be terminated quite easily:
Anti-Malware Tool or StopZilla security software can remove the Ngwehear.fun pop-up. Running an anti-malware program such as these will solve this issue automatically. It will remove this and other possible threats from your system and will ensure safe browsing in the future. And remember: try to be more attentive when you install freeware or shareware! However, if these manual removal guidelines confuse you, you might want to opt for the automatic elimination method.
Learn how to remove Ngwehear.fun from your computer
Step 1. Uninstall Ngwehear.fun
a) Windows 7/XP
1. Start icon → Control Panel
2. Select Programs and Features.
3. Uninstall unwanted programs.
b) Windows 8/8.1
1. Right-click on Start, and pick Control Panel.
2. Click Programs and Features, and uninstall unwanted programs.
c) Windows 10
1. Start menu → Search (the magnifying glass).
2. Type in Control Panel and press it.
3. Select Programs and Features, and uninstall unwanted programs.
d) Mac OS X
1. Finder → Applications.
2. Find the programs you want to remove, click on them, and drag them to the trash icon.
3. Alternatively, you can right-click on the program and select Move to Trash.
4. Empty Trash by right-clicking on the icon and selecting Empty Trash.
Step 2. Delete Ngwehear.fun from Internet Explorer
1. Gear icon → Manage add-ons → Toolbars and Extensions.
2. Disable all unwanted extensions.
a) Change Internet Explorer homepage
1. Gear icon → Internet Options.
2. Enter the URL of your new homepage instead of the malicious one.
b) Reset Internet Explorer
1. Gear icon → Internet Options.
2. Select the Advanced tab and press Reset.
3. Check the box next to Delete personal settings.
4. Press Reset.
Step 3. Remove Ngwehear.fun from Microsoft Edge
a) Reset Microsoft Edge (Method 1)
1. Launch Microsoft Edge → More (the three dots top right) → Settings.
2. Press Choose what to clear, check the boxes and press Clear.
3. Press Ctrl + Alt + Delete together.
4. Task Manager → Processes tab.
5. Find the Microsoft Edge process, right-click on it, choose Go to details.
6. If Go to details is not available, choose More details.
7. Locate all Microsoft Edge processes, right-click on them and choose End task.
b) Reset Microsoft Edge (Method 2)
We recommend backing up your data before you proceed.
1. Go to C:\Users\%username%\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe and delete all folders.
2. Start → Search → Type in Windows PowerShell.
3. Right-click on the result, choose Run as administrator.
4. In Administrator: Windows PowerShell, paste this: Get-AppXPackage -AllUsers -Name Microsoft.MicrosoftEdge | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register $($_.InstallLocation)\AppXManifest.xml -Verbose} below PS C:\WINDOWS\system32> and press Enter.
Step 4. Delete Ngwehear.fun from Google Chrome
1. Menu → More tools → Extensions.
2. Delete all unwanted extensions by pressing the trash icon.
a) Change Google Chrome homepage
1. Menu → Settings → On startup.
2. Manage start up pages → Open a specific page or set of pages.
3. Select Add a new page, and type in the URL of the homepage you want.
4. Press Add.
5. Settings → Search engine → Manage search engines.
6. You will see three dots next to the set search engine. Press that and then Edit.
7. Type in the URL of your preferred search engine, and click Save.
b) Reset Google Chrome
1. Menu → Settings.
2. Scroll down to and press Advanced.
3. Scroll down further to the Reset option.
4. Press Reset, and Reset again in the confirmation window.
Step 5. Delete Ngwehear.fun from Mozilla Firefox
1. Menu → Add-ons → Extensions.
2. Delete all unwanted extensions.
a) Change Mozilla Firefox homepage
1. Menu → Options.
2. In the homepage field, put in your preferred homepage.
b) Reset Mozilla Firefox
1. Menu → Help menu (the question mark at the bottom).
2. Press Troubleshooting Information.
3. Press Refresh Firefox, and confirm your choice.
Step 6. Delete Ngwehear.fun from Safari (Mac)
1. Open Safari → Safari (top of the screen) → Preferences.
2. Choose Extensions, locate and delete all unwanted extensions.
a) Change Safari homepage
1. Open Safari → Safari (top of the screen) → Preferences.
2. In the General tab, put in the URL of the site you want as your homepage.
b) Reset Safari
1. Open Safari → Safari (top of the screen) → Clear History.
2. Select from which time period you want to delete the history, and press Clear History.
3. Safari → Preferences → Advanced tab.
4. Check the box next to Show Develop menu.
5. Press Develop (it will appear at the top) and then Empty Caches.
If the problem still persists, you will have to obtain anti-spyware software and delete Ngwehear.fun with it.
Hacking
Computer hacking is the practice of modifying computer hardware and software to accomplish a goal outside of the creator’s original purpose. People who engage in computer hacking activities are often called hackers. Since the word “hack” has long been used to describe someone who is incompetent at his/her profession, some hackers claim this term is offensive and fails to give appropriate recognition to their skills.
Computer hacking is most common among teenagers and young adults, although there are many older hackers as well. Many hackers are true technology buffs who enjoy learning more about how computers work and consider computer hacking an “art” form. They often enjoy programming and have expert-level skills in one particular program. For these individuals, computer hacking is a real life application of their problem-solving skills. It’s a chance to demonstrate their abilities, not an opportunity to harm others.
Since a large number of hackers are self-taught prodigies, some corporations actually employ computer hackers as part of their technical support staff. These individuals use their skills to find flaws in the company’s security system so that they can be repaired quickly. In many cases, this type of computer hacking helps prevent identity theft and other serious computer-related crimes.
Common Methods for Hacking Computer Terminals (Servers):
This comprises either taking control of the terminal (or server), rendering it useless, or crashing it. The following methods have been used for a long time and are still used.
1. Denial of Service -
DoS attacks give hackers a way to bring down a network without gaining internal access. DoS attacks work by flooding the access routers with bogus traffic (which can be e-mail or Transmission Control Protocol, TCP, packets).
2. Distributed DoSs -
Distributed DoSs (DDoSs) are coordinated DoS attacks from multiple sources. A DDoS is more difficult to block because it uses multiple, changing, source IP addresses.
3. Sniffing -
Sniffing refers to the act of intercepting TCP packets. This interception can happen through simple eavesdropping or something more sinister.
4. Spoofing -
Spoofing is the act of sending an illegitimate packet with an expected acknowledgment (ACK), which a hacker can guess, predict, or obtain by snooping.
5. SQL injection -
SQL injection is a code injection technique that exploits a security vulnerability occurring in the database layer of an application. It uses normal SQL commands to get into the database with elevated privileges (see the sketch after this list).
6. Viruses and Worms -
Viruses and worms are self-replicating programs or code fragments that attach themselves to other programs (viruses) or machines (worms). Both viruses and worms attempt to shut down networks by flooding them with massive amounts of bogus traffic, usually through e-mail.
7. Back Doors -
Hackers can gain access to a network by exploiting back doors: administrative shortcuts, configuration errors, easily deciphered passwords, and unsecured dial-ups. With the aid of computerized searchers (bots), hackers can probably find any weakness in the network.
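To make the SQL injection idea from item 5 concrete, here is a tiny illustrative sketch; the table, column, and input are invented:

-- An application that builds its query by string concatenation, e.g.
--   "SELECT * FROM users WHERE name = '" + input + "'"
-- can be fed the input:  anything' OR '1'='1
-- which turns the WHERE clause into a tautology and returns every row:
SELECT * FROM users WHERE name = 'anything' OR '1'='1';
-- Parameterized queries (prepared statements) prevent this by passing the
-- input strictly as data, never as SQL text.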
So, not interested in this stuff, huh? Wait, there is more for you. How about the methods related to hacking email passwords and doing some more exciting stuff? The various methods employed for this are:
8. Trojan Horses -
Trojan horses, which are attached to other programs, are the leading cause of all break-ins. When a user downloads and activates a Trojan horse, the software can take full control of the system, and the attacker can remotely control the whole machine. Great..!!! They are also referred to as RATs (Remote Administration Tools).
9. Keyloggers -
Consider the situation: everything you type on the system is mailed to the hacker..!! Wouldn't it be easy to track your password from that? Keyloggers perform similar functionalities. So next time you type anything, beware..!! Have already posted...
Adjusting Network Connection
The Selenium Mobile JSON Wire Protocol Specification supports an API for getting and setting the network connection for a device. The API works through a bitmask, assigning an integer to each possible state:
| Value (Alias)      | Data | Wifi | Airplane Mode |
|--------------------|------|------|---------------|
| 0 (None)           | 0    | 0    | 0             |
| 1 (Airplane Mode)  | 0    | 0    | 1             |
| 2 (Wifi only)      | 0    | 1    | 0             |
| 4 (Data only)      | 1    | 0    | 0             |
| 6 (All network on) | 1    | 1    | 0             |
iOS
Unfortunately, at the moment Appium does not support the Selenium network connection API for iOS.
Android
There are the following limitations:
Real Devices
Emulators
Windows
Unfortunately, at the moment Appium does not support the Selenium network connection API for Windows.
// javascript
// set airplane mode
driver.setNetworkConnection(1)
// set wifi only
driver.setNetworkConnection(2)
// set data only
driver.setNetworkConnection(4)
// set wifi and data
driver.setNetworkConnection(6)
Retrieving the network connection settings returns the same bitmask, from which the status can be decoded.
// javascript
driver.getNetworkConnection().then(function (connectionType) {
switch (connectionType) {
case 0:
// no network connection
break;
case 1:
// airplane mode
break;
case 2:
// wifi
break;
case 4:
// data
break;
case 6:
// wifi and data
break;
}
});
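Since the value is a bitmask, individual flags can also be tested directly instead of enumerating every combination; for example, a small sketch using the same client API as above:

// javascript
driver.getNetworkConnection().then(function (connectionType) {
  var airplaneOn = !!(connectionType & 1); // bit 0
  var wifiOn = !!(connectionType & 2);     // bit 1
  var dataOn = !!(connectionType & 4);     // bit 2
  console.log('airplane:', airplaneOn, 'wifi:', wifiOn, 'data:', dataOn);
});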
tidymodels / rsample
#' Nested or Double Resampling
#'
#' `nested_cv` can be used to take the results of one resampling procedure
#' and conduct further resamples within each split. Any type of resampling
#' used in `rsample` can be used.
#'
#' @details
#' It is a bad idea to use bootstrapping as the outer resampling procedure (see
#' the example below).
#' @param data A data frame.
#' @param outside The initial resampling specification. This can be an already
#'  created object or an expression of a new object (see the examples below).
#'  If the latter is used, the `data` argument does not need to be
#'  specified and, if it is given, will be ignored.
#' @param inside An expression for the type of resampling to be conducted
#'  within the initial procedure.
#' @return A tibble with a `nested_cv` class and any other classes that the
#'  outer resampling process normally contains. The results include a
#'  column for the outer data split objects, one or more `id` columns,
#'  and a column of nested tibbles called `inner_resamples` with the
#'  additional resamples.
#' @examples
#' ## Using expressions for the resampling procedures:
#' nested_cv(mtcars, outside = vfold_cv(v = 3), inside = bootstraps(times = 5))
#'
#' ## Using an existing object:
#' folds <- vfold_cv(mtcars)
#' nested_cv(mtcars, folds, inside = bootstraps(times = 5))
#'
#' ## The dangers of outer bootstraps:
#' set.seed(2222)
#' bad_idea <- nested_cv(mtcars,
#'                       outside = bootstraps(times = 5),
#'                       inside = vfold_cv(v = 3))
#'
#' first_outer_split <- bad_idea$splits[[1]]
#' outer_analysis <- as.data.frame(first_outer_split)
#' sum(grepl("Volvo 142E", rownames(outer_analysis)))
#'
#' ## For the 3-fold CV used inside of each bootstrap, how are the replicated
#' ## `Volvo 142E` data partitioned?
#' first_inner_split <- bad_idea$inner_resamples[[1]]$splits[[1]]
#' inner_analysis <- as.data.frame(first_inner_split)
#' inner_assess <- as.data.frame(first_inner_split, data = "assessment")
#'
#' sum(grepl("Volvo 142E", rownames(inner_analysis)))
#' sum(grepl("Volvo 142E", rownames(inner_assess)))
#' @export
nested_cv <- function(data, outside, inside) {
  nest_args <- formalArgs(nested_cv)
  cl <- match.call()

  boot_msg <-
    paste0(
      "Using bootstrapping as the outer resample is dangerous ",
      "since the inner resample might have the same data ",
      "point in both the analysis and assessment set."
    )

  outer_cl <- cl[["outside"]]
  if (is_call(outer_cl)) {
    if (grepl("^bootstraps", deparse(outer_cl)))
      warning(boot_msg, call. = FALSE)
    outer_cl$data <- quote(data)
    outside <- eval(outer_cl)
  } else {
    if (inherits(outside, "bootstraps"))
      warning(boot_msg, call. = FALSE)
  }

  inner_cl <- cl[["inside"]]
  if (!is_call(inner_cl))
    stop(
      "`inside` should be an expression such as `vfold_cv()` or ",
      "`bootstraps(times = 10)` instead of an existing object.",
      call. = FALSE
    )
  inside <- map(outside$splits, inside_resample, cl = inner_cl)

  out <- dplyr::mutate(outside, inner_resamples = inside)

  out <- add_class(out, cls = "nested_cv")

  attr(out, "outside") <- cl$outside
  attr(out, "inside") <- cl$inside

  out
}

inside_resample <- function(src, cl) {
  cl$data <- quote(as.data.frame(src))
  eval(cl)
}

#' @export
print.nested_cv <- function(x, ...) {
  char_x <- paste("#", pretty(x))
  cat(char_x, sep = "\n")
  class(x) <- class(tibble())
  print(x, ...)
}
What is a DLNA Media Server on My Router?
The DLNA (Digital Living Network Alliance) is a trade organization that sets standards and guidelines for home networking devices. It makes media file access within your home network easier. This includes sharing content across PCs, smartphones, tablets, smart TVs, Blu-ray Disc players, home theater receivers, and media streaming devices and more.
DLNA certification remains in demand among consumer electronics manufacturers, middleware vendors, and software makers. Because of this, you can get a router with DLNA media server functionality, which means turning your router into a media library that can instantly share content. So, what can this feature do for you?
What is DLNA?
DLNA stands for Digital Living Network Alliance (similar to how MoCA stands for Multimedia over Coax Alliance). A DLNA-certified device can become a DLNA Media Server, a standard role that allows it to share content with other DLNA-certified devices around your house over your home WiFi network. To do this, you need a router that supports DLNA along with other certified connected devices.
If your router is DLNA-certified, you can set up the media server on your device easily by going to the settings menu.
How it works is like this:
When a DLNA-certified device is added to your home network, it can automatically communicate and share media files with other connected DLNA device on the network. DLNA-certified devices can do things like:
• Find and play movies
• Send, display, or upload photos
• Find, send, play, or download music
• Send and print photos between compatible network-connected devices
And more.
What Devices Are DLNA-Certified or Compatible?
You can turn almost any device into a DLNA Media Server if it is a DLNA-certified device. For example, you can:
• Send audio and video from your smartphone to a DLNA-certified TV.
• Access audio, video, or photos on a DLNA-certified PC and play them on a certified TV, Blu-ray Disc, or other DLNA player.
• Send photos from a certified digital camera to a DLNA-certified TV, PC, or other compatible devices.
Digital Media Server (DMS) on my Router
Digital Media Server (DMS) is a DLNA certification category that applies to devices that store a media library. So a router with DMS functionality must store a media library itself. The qualifications for this include having a hard drive or memory card where the media is saved, so that the device can serve the files to a connected streaming device or player.
DLNA opens a lot of possibilities on a home network. With it, you can get home from vacation, walk in with your smartphone, and load videos from your trip onto your TV at the press of a button to watch and relive the trip, with no extra connections needed. And there are many more examples like this.
Hitron’s CODA 5610 Cable Modem Routers have an integrated DLNA Media Server with support for video, audio and image serving, and can be paired with WiFi Extenders or Mesh for extra coverage. For more information on Cable Modem Routers, check out Hitron’s Learn Page.
A tool to generate shop lists and shop events for SimpleServer.
ShopTool
Version 1.0.2 Beta
Copyright (C) 2012 Felix Wiemuth, Anton Pirogov
## License
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
## Description
ShopTool is a tool for SimpleServer to create Minecraft in-game shops based on SimpleServer Events. The result is a vendor bot showing up in the world with whom can be interacted. In fact the user in a first step creates pricelists to be able to create shops from these lists in a second step. To get started you can refer to the Quickstart guide below.
## Setup: Preparations to use the shop system with SimpleServer
Since the shop system is implemented via events, first make sure that the enableEvents flag in simpleserver.properties is set to true. Now the server needs additional code which is located in the static.xml file that comes with ShopTool. In general, the content can simply be copied into the <config> section of SimpleServer's config.xml. If you already have an onServerStart event, just add the line launch storeBotsRestartService to initialize the shops and delete the copied onServerStart event.
## Usage
### General
ShopTool is a command-line program that works by passing commands with options and arguments. You can run the program by executing
$ java -jar ShopTool.jar <ShopTool command> <command options, arguments>
inside a console at the directory where your ShopTool.jar file is. If you use Linux you can run
$ ./shoptool.sh <ShopTool command> <command options, arguments>
instead. If you are a (Windows) user who doesn't know how to deal with the command line or how to edit text files with extensions other than ".txt", you should inform yourself about these things before using ShopTool.
### Basic steps
To create a shop for your Minecraft SimpleServer server you have to follow these steps:
1. Create a pricelist
2. Create a shop file
3. Create event code from a shop file
4. Put event code into SimpleServer´s config.xml file
### Pricelists
Pricelists are files containing entries with items to be bought and sold. Each line in a .pricelist file represents one item. A line (with all keys used) may look like this:
ID=35:6 NAME=pinkwool PRICE_BUY=2000 PRICE_SELL=1800 NORMAL=12 MAX=64 TIME=6000 #This is a comment
• ID the block id of the minecraft item (see http://www.minecraftwiki.net/wiki/Item_id for information on IDs)
• NAME [opt.] an alias for the item - this is only for convenience in the text files and currently not used by ShopTool or SimpleServer
• PRICE_BUY the price you have to pay for buying an item of this type
• PRICE_SELL the amout you get for selling an item of this type
• PRICE can be used instead of PRICE_BUY and PRICE_SELL indicating the price for both
• NORMAL the amount of items the shop tends to have (the stock is changing towards this amount while time passes)
• MAX the maximum of items the shop can hold - this means you cannot sell infinitely
• TIME time interval in seconds it takes to change the current stock by one item towards 'NORMAL'
Note: Leaving out a price means that this item is not available for selling or buying. Use this feature to make some items only available for purchase and others only for selling.
Note: Not all of these keys must be given - if something is left out, default values are used (either those specified in commands or internal defaults).
To start, you can use the different basic pricelists available for download. For example, the "items.pricelist" file contains all IDs and the corresponding SimpleServer aliases. Copy a pricelist, delete item lines you don't want to use, and add lines that are missing.
You can generate a new pricelist based on existing pricelists. In this process, all items given in the different lists are thrown together to build the new list (if items occur twice, only the first one is taken). This is especially useful if you want to apply some values to all items of a pricelist, e.g. interests or default stock or refill time values. It is also useful to create a raw assortment for a new shop; for this purpose there are basic pricelists that contain only items of a special group, e.g. ore or wool. After creating a pricelist like this you can go into the file and manually adapt some items.
Note that pricelists are revised before being processed: If an item line is incorrect ShopTool tries to make it valid but if that fails, the item is discarded.
Syntax of the pricelist command
pl [factor] [interest buy] [interest sell] [[-n][-N]] normalStock [[-max][-MAX]] maxStock [[-t][-T]] stockUpdateTime [dest] [file 1] [file 2] ...
• factor factor all prices with this constant
• interest buy add an interest to all buy prices
• interest sell subtract an interest from all sell prices
• dest path to the file the new pricelist should be saved to
• file n a correct pricelist file
The switches -n, -max and -t set the corresponding values for each item where they are not already set. Using a switch in capital letters forces the value for all items (existing values are overwritten). Note that all switches are optional but must be given in the shown order.
Example of using the pricelist command
We start with two pricelist files:
Pricelist 1: general.pricelist
#General
ID=3 NAME=dirt PRICE=10
ID=4 NAME=cobblestone PRICE=20
ID=5 NAME=wood PRICE=25
ID=12 NAME=sand PRICE=40
ID=13 NAME=gravel PRICE=30
ID=24 NAME=sandstone PRICE=70
Pricelist 2: metal.pricelist
#Metal ingot / ore
ID=263 NAME=coal PRICE=250
ID=331 NAME=redstone PRICE=1000
ID=265 NAME=iron PRICE=10000
ID=348 NAME=glowstone PRICE=13000
ID=22 NAME=lapiz PRICE=25000
ID=266 NAME=gold PRICE=100000
ID=264 NAME=diamond PRICE=2000000
As you see, these lists only include id, name and a single price per item. This is perfectly ok, making it easy to build special pricelists from these general forms.
Let's say we want all the items of both lists, subtract an interest of 0.1 from the sell price, and have all prices be 25% higher. The normal stock should be 20, the maximum 64, and the stock-change time interval 2 minutes. Our new pricelist should be saved to the file mix.pricelist. No problem:
pl 1.25 0 0.1 -n 20 -max 64 -t 120 mix.pricelist general.pricelist metal.pricelist
The result: mix.pricelist
ID=263 NAME=coal PRICE_BUY=313 PRICE_SELL=281 NORMAL=20 MAX=64 TIME=120
ID=22 NAME=lapiz PRICE_BUY=31250 PRICE_SELL=28125 NORMAL=20 MAX=64 TIME=120
ID=24 NAME=sandstone PRICE_BUY=88 PRICE_SELL=78 NORMAL=20 MAX=64 TIME=120
ID=331 NAME=redstone PRICE_BUY=1250 PRICE_SELL=1125 NORMAL=20 MAX=64 TIME=120
ID=13 NAME=gravel PRICE_BUY=38 PRICE_SELL=33 NORMAL=20 MAX=64 TIME=120
ID=12 NAME=sand PRICE_BUY=50 PRICE_SELL=45 NORMAL=20 MAX=64 TIME=120
ID=348 NAME=glowstone PRICE_BUY=16250 PRICE_SELL=14625 NORMAL=20 MAX=64 TIME=120
ID=3 NAME=dirt PRICE_BUY=13 PRICE_SELL=11 NORMAL=20 MAX=64 TIME=120
ID=5 NAME=wood PRICE_BUY=32 PRICE_SELL=28 NORMAL=20 MAX=64 TIME=120
ID=4 NAME=cobblestone PRICE_BUY=25 PRICE_SELL=22 NORMAL=20 MAX=64 TIME=120
ID=266 NAME=gold PRICE_BUY=125000 PRICE_SELL=112500 NORMAL=20 MAX=64 TIME=120
ID=265 NAME=iron PRICE_BUY=12500 PRICE_SELL=11250 NORMAL=20 MAX=64 TIME=120
ID=264 NAME=diamond PRICE_BUY=2500000 PRICE_SELL=2250000 NORMAL=20 MAX=64 TIME=120
ShopTool calculated the factorization of all prices, inserted buy and sell prices seperately and added the normal, max and time values.
Now we think our prices are too high and refilling takes too long. Let's change our pricelist: 10% lower prices and only 30 seconds stock update time. We don't have to start all over again; we can simply run the pricelist command on top of the current list:
pl 0.9 0 0 -T 30 mix2.pricelist mix.pricelist
Note that we had to use the -T option instead of -t because we wanted to overwrite the old values.
Result: mix2.pricelist
ID=263 NAME=coal PRICE_BUY=282 PRICE_SELL=252 NORMAL=20 MAX=64 TIME=30
ID=22 NAME=lapiz PRICE_BUY=28125 PRICE_SELL=25312 NORMAL=20 MAX=64 TIME=30
ID=24 NAME=sandstone PRICE_BUY=80 PRICE_SELL=70 NORMAL=20 MAX=64 TIME=30
ID=331 NAME=redstone PRICE_BUY=1125 PRICE_SELL=1012 NORMAL=20 MAX=64 TIME=30
ID=13 NAME=gravel PRICE_BUY=35 PRICE_SELL=29 NORMAL=20 MAX=64 TIME=30
ID=12 NAME=sand PRICE_BUY=45 PRICE_SELL=40 NORMAL=20 MAX=64 TIME=30
ID=348 NAME=glowstone PRICE_BUY=14625 PRICE_SELL=13162 NORMAL=20 MAX=64 TIME=30
ID=3 NAME=dirt PRICE_BUY=12 PRICE_SELL=9 NORMAL=20 MAX=64 TIME=30
ID=5 NAME=wood PRICE_BUY=29 PRICE_SELL=25 NORMAL=20 MAX=64 TIME=30
ID=4 NAME=cobblestone PRICE_BUY=23 PRICE_SELL=19 NORMAL=20 MAX=64 TIME=30
ID=266 NAME=gold PRICE_BUY=112500 PRICE_SELL=101250 NORMAL=20 MAX=64 TIME=30
ID=265 NAME=iron PRICE_BUY=11250 PRICE_SELL=10125 NORMAL=20 MAX=64 TIME=30
ID=264 NAME=diamond PRICE_BUY=2250000 PRICE_SELL=2025000 NORMAL=20 MAX=64 TIME=30
### Shop(file)s
You don't actually define shops with ShopTool. Shops are simply built from a file containing the following information:
NAME=MyShop
STARTCOORD=-30,70,20
ENDCOORD=-20,70,25
BOTCOORD=-25,70,24
VENDORNAME=Vendor
PRICELIST=myPricelist
• NAME how the shop should be called in the game
• STARTCOORD, ENDCOORD specify two points in the world between which an own area for the shop will be created
• BOTCOORD the coordinates where the vendor bot will stand
• VENDORNAME the name of the shop vendor
• PRICELIST the pricelist the shop should use - a path (starting from the location of the shop file) to a correct .pricelist file must be given, leaving out the extension
Note: All "coords" are minecraft coordinates and must be given in the form x,y,z.
Note: The keys (words in capital letters until the equal sign) must occur exactly in the shown order and one key per line.
### Shop events
To use a shop with SimpleServer the script code for SimpleServer Events must be generated. This is simply done with the event command.
Syntax of the event command:
ev [-s] [dest] [file 1] [file 2] ...
• dest path to the file the generated code should be saved to
• file n a correct shop file
The switch -s is optional; it adds the code of static.xml to the output. If the setup is already done it shouldn't be used. It can be used when generating shops for the first time to do setup and shops in one step.
### Use generated event code with SimpleServer
The generated code must be copied into the <config> section of your SimpleServer config.xml. The generated areas have to be manually merged with your own areas. To do so, move the content of the generated <dimension> section into your own.
If everything has been set up correctly, you will see the NPCs spawning at the specified locations after starting the server and you can test your shops. Just follow the in-game instructions when entering a shop area or see the SimpleServer wiki.
## Quickstart guide
This will give a quick overview of how creating shops works. We won't cover the useful pricelist command here, see above for that!
First read the "Setup" and "Usage" (general) sections at the beginning of this file, it is necessary that you did the setup and know how to use the ShopTool program for this tutorial!
Let's create a shop! We start by searching for or building a good place in our Minecraft world (in-game). Then we write down the following coordinates:
• start and end coordinates of a new 3D area that will be created for the shop like with the SimpleServer myarea command (imagine the two coords spanning a box)
• the coordinates where the vendor bot should be placed
For this tutorial we assume all files created and needed to be in the same directory as the ShopTool.jar file.
### The pricelist
Now we need to say what we want to sell and buy in our shop and how much it should cost. That's where pricelists come into play. We have two options: create our pricelist manually or generate it from existing ones. We will do it manually here, but note that the pricelist command is quite useful for making pricelist creation convenient. See the pricelists section above for an example.
First we create a new file called list1.pricelist.
If the shop should sell iron ingots for 500, buy them for 400 and buy birch wood for 200 but should not sell it we put these lines into the file:
ID=265 NAME=iron-ingot PRICE_BUY=500 PRICE_SELL=400 NORMAL=12 MAX=20 TIME=400
ID=17:2 NAME=birch-wood PRICE_BUY= PRICE_SELL=200 NORMAL=0 MAX=64 TIME=600
We also stated that the shop should normally have 12 iron ingots available, can store a maximum of 20 and updates its stock towards 12 every 400 seconds. The same applies for the second entry respectively.
### The shop file
It's time to create our shop file, which contains the general information about our shop, by writing the following lines into a new file myshop.shop.
NAME=MyShop
STARTCOORD=-30,60,20
ENDCOORD=-20,70,25
BOTCOORD=-25,65,23
VENDORNAME=Harry
PRICELIST=list1
We replace the coordinates with the ones we wrote down before.
### Final steps
Now let's create the code for our shop by executing
ev code.xml myshop.shop
Finally, we open the code.xml file that was created by ShopTool and copy its content. Then we open the config.xml file of our SimpleServer server and paste the code inside the <config> section. If you were already using areas, you will have to merge them (see above for help).
Now the shop can be tested by starting the server and entering the shop area. Just follow the in-game instructions!
If you have any problems you could read the more detailed descriptions above and refer to the troubleshooting section below.
## Troubleshooting
### Checklist
• Is enableEvents=true in simpleserver.properties?
• Did you try to read the output of ShopTool to get information about what's wrong?
• Did you merge the generated code correctly into config.xml? Check the SimpleServer console for errors!
If you still have problems, you can consult the ShopTool forum page (see "Questions and comments") to look for solutions or ask questions!
## Feedback
We'd like to know how you like ShopTool and whether you have any suggestions to improve it.
### Bugs
If you find any bugs in ShopTool, please report them using the GitHub issue system.
### Questions and comments
Visit the ShopTool forum page at the Minecraft forums to ask questions, discuss features, or leave comments!
Analyzing a nullability example
Cezary Piątek posted a good overview of the C# nullable reference types feature. It includes a critique of a code snippet. Examining that snippet is a good way to understand some of the C# LDM’s decisions. In the following, Cezary expects a warning on a seemingly unreachable branch and no warning on the dereference.
#nullable enable
public class User
{
public static void Method(User userEntity)
{
if (userEntity == null) // Actual: no warning for seemingly unreachable branch.
{
}
var s = userEntity.ToString(); // Actual: warning CS8602: Dereference of a possibly null reference.
}
}
sharplab
Why is there no warning on the seemingly unnecessary null test if (userEntity == null) ... or the apparently unreachable branch?
It’s because such tests are useful and encouraged in public APIs. Users should check inputs and the compiler should not get in the way of good practices. The branch of the if is therefore reachable.
Then, what is the state of userEntity within the if block?
We take the user’s null test seriously by considering userEntity to be maybe-null within the if block. So if the user did userEntity.ToString() inside the if, the compiler would rightly warn. This protects the user against a null reference exception that could realistically happen.
Given those, what should be the state of the userEntity at the exit of the if?
Because we're merging branches where userEntity is maybe-null (when the condition of the if is true) and not-null (in the alternative), the state of userEntity is maybe-null. Therefore we warn on dereference of userEntity after the if. Note that if the if block contained a throw, userEntity would be considered not-null after the if. This is a common pattern: if (userEntity is null) throw new ArgumentNullException(nameof(userEntity));.
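For completeness, a minimal sketch of that guard pattern (same User class as above):

public static void GuardedMethod(User userEntity)
{
    if (userEntity is null)
    {
        throw new ArgumentNullException(nameof(userEntity));
    }
    // The maybe-null branch always throws, so only the not-null state
    // survives the merge: no CS8602 warning on this dereference.
    var s = userEntity.ToString();
}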
Fluid with JindoFS: An Acceleration Tool for Alibaba Cloud OSS
This article introduces Fluid, an open source Kubernetes-native distributed dataset orchestrator and accelerator for data-intensive applications, and talks about the advantages of JindoRuntime.
By Wang Tao (Yangli), Che Yang (Biran)
Fluid
Fluid is an open-source native orchestration and acceleration engine for a distributed dataset on Kubernetes, which mainly serves data-intensive applications in cloud-native scenarios, such as big data applications, AI applications and so on. By using the data layer abstraction provided by Kubernetes service, data can be flexibly and efficiently moved, replicated, converted and managed between storage sources such as HDFS, OSS, and Ceph, and cloud-native application computing in Kubernetes upper layers.
The specific data operations are transparent to users, so users no longer have to worry about the efficiency of accessing remote data and the convenience of managing data sources. And making the O&M and scheduling decisions for Kubernetes is no longer a problem. Users only need to access the abstracted data through the most natural Kubernetes native data volumes. The remaining tasks and underlying details are all submitted to Fluid.
Currently, the Fluid project focuses on dataset orchestration and application orchestration. Dataset orchestration caches data of the specified dataset to a Kubernetes node with a specified feature, while application orchestration schedules the specified application to a node that can or has stored the specified dataset. The two can also be combined to provide collaborative orchestration, which schedules the node resources based on the needs of the dataset and application.
Next, let's look at what a dataset is in Fluid. A dataset is a logically related set of data that is used by computing engines, such as Spark for big data and TensorFlow for AI scenarios. Intelligent application and scheduling of datasets creates core value in the industry. There are actually multiple dimensions to dataset management, such as security, version management, and data acceleration.
Fluid provides dataset management from the dimension of data acceleration. In a Dataset, an execution engine called a Runtime is defined to implement capabilities such as dataset security, version management, and data acceleration. A Runtime defines a series of lifecycle interfaces, and these interfaces can be implemented to support dataset management and acceleration. Currently, the Runtimes supported by Fluid include AlluxioRuntime and JindoRuntime.
Fluid is designed to provide an efficient and convenient data abstraction for AI and cloud-native big data applications. It abstracts data from storage for the following functions:
• Fuse data and computing through data affinity scheduling and distributed caching engine acceleration, thus accelerating the data access of computing.
• Make the data independent from the storage for management. Implement resource isolation through namespaces in Kubernetes for secure data isolation.
• Combine the data from different storage for computing, which is likely to eliminate the data islanding effect caused by the differences between different storage.
JindoRuntime
To learn about JindoRuntime of Fluid, JindoFS should be introduced first. It is the engine layer of JindoRuntime.
JindoFS is a big data storage optimization engine developed by Alibaba Cloud for Object Storage Service (OSS). Fully compatible with Hadoop file system interfaces, JindoFS provides a more flexible and efficient computing storage solution for users. At present, JindoFS has been verified to support all computing services and engines in Alibaba Cloud E-MapReduce (EMR), including Spark, Flink, Hive, MapReduce, Presto, and Impala. JindoFS supports two storage modes: block storage mode and cache mode. In block storage mode, files are stored as data blocks in OSS, a local data backup can be used to accelerate caching, and metadata is managed by a local namespace service; a file can thus be reconstructed from local metadata and block data. Cache mode stores files on OSS. This mode is compatible with the existing OSS file system, so users can access the original directory structure and files through OSS. Additionally, this mode caches data and metadata, which improves the performance of data reading and writing, and it lets users seamlessly connect to existing data in OSS without migrating it. In terms of data synchronization, users can select different metadata synchronization policies as needed.
In Fluid, JindoRuntime uses the cache mode of JindoFS to access and cache remote files. For the usage of JindoFS alone in other environments to access OSS, download JindoFS SDK for deployment and usage as instructed in the user guide. JindoRuntime is an execution engine implementing the data management and caching of Dataset based on the JindoFS distributed system developed by the Alibaba Cloud EMR team.
Fluid manages and schedules JindoRuntime to achieve dataset visibility, auto scaling, data migration, and computing acceleration. JindoRuntime is compatible with the native Kubernetes environment, so it can be easily used and deployed on Fluid without much preparation. Given the characteristics of object storage, a native framework is adopted to optimize performance, and cloud data security features such as password-free access and checksum verification are also supported.
Advantages of JindoRuntime
JindoRuntime provides access and cache acceleration for Alibaba Cloud OSS, and through the FUSE-based Portable Operating System Interface (POSIX) it makes massive numbers of OSS files as easy to use as a local disk. It has the following features:
1. Excellent Performance
• Outstanding OSS read/write performance: JindoRuntime deeply integrates with OSS to improve the efficiency and stability of reads and writes, and its native layer optimizes the OSS access interfaces and cold-data access performance, especially for reading and writing small files.
• Multiple distributed cache policies: JindoRuntime supports caching single files at the TB level as well as metadata caching policies, and it performs prominently in large-scale AI training and in practical data lake tests.
2. Safety and Reliability
• Authentication security: Supports password-free access via Alibaba Cloud Security Token Service (STS) and native Kubernetes secret-based key encryption.
• Data security: Adopts security policies such as checksum verification and client-side data encryption to protect cloud data and user information.
3. Ease of Use
JindoRuntime is compatible with the native Kubernetes environment. It connects data volumes through custom resource definitions, and it can be used and deployed easily without much preparation.
4. Lightweight
The underlying layer is written in C++, so the overall structure is lightweight, and the various OSS access interfaces add little extra overhead.
JindoRuntime's Performance
Arena was used to train the ResNet-50 model on the ImageNet dataset on a Kubernetes cluster. With the local cache enabled, JindoRuntime (based on JindoFS) performed significantly better than the open source OSSFS, reducing training time by 76 percent. This test is described in detail in subsequent articles.
Quick-start Guide to JindoRuntime
It is simple to use JindoRuntime. Given basic Kubernetes and OSS environments, it takes only about 10 minutes to deploy the required JindoRuntime environment. Proceed as follows:
• Create a namespace
kubectl create ns fluid-system
• Download [fluid-0.5.0.tgz](http://smartdata-binary.oss-cn-shanghai.aliyuncs.com/fluid/332cache/fluid-0.5.0.tgz)
• Use Helm to install Fluid
helm install --set runtime.jindo.enabled=true fluid fluid-0.5.0.tgz
• View the running status of Fluid
$ kubectl get pod -n fluid-system
NAME READY STATUS RESTARTS AGE
csi-nodeplugin-fluid-2mfcr 2/2 Running 0 108s
csi-nodeplugin-fluid-l7lv6 2/2 Running 0 108s
dataset-controller-5465c4bbf9-5ds5p 1/1 Running 0 108s
jindoruntime-controller-654fb74447-cldsv 1/1 Running 0 108s
The number of csi-nodeplugin-fluid-xx pods should equal the number of nodes in the Kubernetes cluster.
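To verify this quickly (an illustrative check, not part of the original guide), compare the node count with the plugin pod count; both commands should print the same number:
$ kubectl get nodes --no-headers | wc -l
$ kubectl get pod -n fluid-system --no-headers | grep csi-nodeplugin-fluid | wc -l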
• Create a dataset and JindoRuntime
Before creating a dataset, create a Secret to store the fs.oss.accessKeyId and fs.oss.accessKeySecret of OSS, so that the credentials are not exposed in plaintext elsewhere. Fill the key and secret information into a mySecret.yaml file; Kubernetes will encrypt the Secret once it is created.
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
stringData:
  fs.oss.accessKeyId: xxx
  fs.oss.accessKeySecret: xxx
Generate secret:
kubectl create -f mySecret.yaml
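Alternatively (a hypothetical equivalent, not shown in the original guide), the same Secret can be created directly from literals without a YAML file:
kubectl create secret generic mysecret \
  --from-literal=fs.oss.accessKeyId=xxx \
  --from-literal=fs.oss.accessKeySecret=xxx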
Create a resource.yaml file that contains two parts:
1. The first part describes the dataset and its underlying file system (UFS): a Dataset CRD object in which the source of the dataset is declared.
2. The second part creates a JindoRuntime, which is equivalent to starting a JindoFS cluster to provide cache services.
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: hadoop
spec:
  mounts:
    - mountPoint: oss://<oss_bucket>/<bucket_dir>
      options:
        fs.oss.endpoint: <oss_endpoint>
      name: hadoop
      encryptOptions:
        - name: fs.oss.accessKeyId
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: fs.oss.accessKeyId
        - name: fs.oss.accessKeySecret
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: fs.oss.accessKeySecret
---
apiVersion: data.fluid.io/v1alpha1
kind: JindoRuntime
metadata:
  name: hadoop
spec:
  replicas: 2
  tieredstore:
    levels:
      - mediumtype: HDD
        path: /mnt/disk1
        quota: 100Gi
        high: "0.99"
        low: "0.8"
1. mountPoint: oss://<oss_bucket>/<bucket_dir> indicates the path where the UFS is mounted; the path must not include endpoint information.
2. fs.oss.endpoint: The endpoint of the OSS bucket, which can be a public or internal network address.
3. replicas: The number of workers to create for the JindoFS cluster.
4. mediumtype: The cache medium; HDD, SSD, and MEM are supported.
5. path: The storage path; currently only one disk is supported. When MEM is chosen as the cache medium, a disk is still required to store files such as logs.
6. quota: The maximum cache capacity, in Gi.
7. high/low: The upper and lower watermark ratios for cache usage.
kubectl create -f resource.yaml
Check the status of the dataset:
$ kubectl get dataset hadoop
NAME UFS TOTAL SIZE CACHED CACHE CAPACITY CACHED PERCENTAGE PHASE AGE
hadoop 210MiB 0.00B 180.00GiB 0.0% Bound 1h
• Create an application container to experience acceleration
Create an application container or submit a machine learning task to enjoy the JindoFS acceleration service.
Next, create an application container described in app.yaml to use this dataset. The acceleration effect is shown by comparing the time taken by repeated accesses of the same data.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo
      image: nginx
      volumeMounts:
        - mountPath: /data
          name: hadoop
  volumes:
    - name: hadoop
      persistentVolumeClaim:
        claimName: hadoop
Create the application container with kubectl:
kubectl create -f app.yaml
View the file size:
$ kubectl exec -it demo-app -- bash
$ du -sh /data/hadoop/spark-3.0.1-bin-hadoop2.7.tgz
210M /data/hadoop/spark-3.0.1-bin-hadoop2.7.tgz
Copying the file with cp the first time takes about 18 seconds:
$ time cp /data/hadoop/spark-3.0.1-bin-hadoop2.7.tgz /dev/null
real 0m18.386s
user 0m0.002s
sys 0m0.105s
Checking the dataset cache shows that all 210 MiB of data has been cached locally.
$ kubectl get dataset hadoop
NAME UFS TOTAL SIZE CACHED CACHE CAPACITY CACHED PERCENTAGE PHASE AGE
hadoop 210.00MiB 210.00MiB 180.00GiB 100.0% Bound 1h
To prevent the effects of other factors such as page cache, delete the previous application container and create the same application to access the same file. Since the file has been cached by JindoFS, the time required for the second access is much shorter.
kubectl delete -f app.yaml && kubectl create -f app.yaml
This time cp completes in 48 milliseconds, about 380 times faster than the first access:
$ time cp /data/hadoop/spark-3.0.1-bin-hadoop2.7.tgz /dev/null
real 0m0.048s
user 0m0.001s
sys 0m0.046s
• Clean up the environment
• Delete the application and the application container
• Delete JindoRuntime
kubectl delete jindoruntime hadoop
• Delete dataset
kubectl delete dataset hadoop
This simple example has demonstrated a quick start with JindoFS on Fluid, through to the final environment cleanup. For detailed introductions to more features and uses of Fluid JindoRuntime, see subsequent articles.
About the Authors
Wang Tao, nicknamed Yangli, is an EMR development engineer at Alibaba Computing Platform Business Department. Currently, he is engaged in the development and optimization of open source big data storage and computing.
Che Yang, nicknamed Biran, is a senior technical expert for Alibaba cloud-native application platform. He is involved in the development of Kubernetes and container-related products. He is the main author and maintainer of GPU shared scheduling especially focusing on constructing a machine learning platform system based on cloud-native technologies.
Question: Do I Need Adobe Cloud?
Is Adobe a good long term investment?
Adobe (NASDAQ:ADBE) stock has been a big winner on Wall Street for a long time.
The digital-tools giant recently reported second-quarter numbers which showed that ADBE stock will remain a big winner on Wall Street for a long time.
….
Is it worth paying for Photoshop?
If you need (or want) the best, then at ten bucks a month, Photoshop is most certainly worth it. While it’s used by a lot of amateurs, it’s undoubtedly a professional program. … While other imaging apps have some of Photoshop’s features, none of them are the complete package.
Should I uninstall Adobe?
In the company’s own words it “strongly recommends all users immediately uninstall Flash Player to protect their systems.” Anyone who is still running Flash Player can expect to start seeing notifications pop-up on their screen that the software’s death is just days away and that they should uninstall the software.
Is Adobe no longer free?
No. Acrobat Reader DC is a free, stand-alone application that you can use to open, view, sign, print, annotate, search, and share PDF files. Acrobat Pro DC and Acrobat Standard DC are paid products that are part of the same family.
How much does adobe cloud cost?
At the end of your offer term, your subscription will be automatically billed at the standard subscription rate, currently at US$29.99/month (plus applicable taxes), unless you elect to change or cancel your subscription. This pricing is valid for purchases of an annual plan, which requires a 12-month contract.
What is the difference between Adobe Creative Cloud and Adobe Creative Suite?
In short: CS is old technology using perpetual licenses; CC is current technology using a subscription model and offering some cloud storage. … The subscription model ensures that you always have access to the latest versions. A CC subscription also gives you access to the last CS6 version of the software.
What does Adobe cloud include?
Creative Cloud is a collection of 20+ desktop and mobile apps and services for photography, design, video, web, UX, and more. Now you can take your ideas to new places with Photoshop on the iPad, draw and paint with Adobe Fresco, and design for 3D and AR.
What happens if I delete Adobe Creative Cloud?
No, your CC programs will stay on the machine even after uninstalling CC desktop app. However, CC desktop app will be required for various other purposes as well including product updates.
Why is Adobe so greedy?
Originally Answered: Why is Adobe so greedy? … The problem with the subscription model is not only that the software ends up more expensive, but it can be unreliable or rendered unusuable without the fault of the user (go on a long trip with spotty Internet access, don’t count on Adobe products working).
Why can’t I delete Adobe Creative Cloud?
If any of the software fails to uninstall, go to Control Panel and remove it from there. Once all Adobe apps are removed, uninstall Adobe CC desktop software from the Control Panel. If Adobe CC desktop software does not uninstall, download and run Adobe CC uninstaller software.
Can you buy Photoshop permanently?
Yes you can still buy it, and you can call Adobe and buy it. However, it would be madness to buy such an outdated version, which can never be updated. … Adobe’s latest version of Photoshop CC is subscription only. You might be able to find older versions of Photoshop for sale.
Can you still use Photoshop after Cancelling subscription?
If you only use Photoshop up to four times a year then you’re not Adobe’s targeted audience. The Creative Cloud software will stop working when you quit paying but you won’t lose your work files. The work you produced is yours to keep. Lightroom will to some extent continue to work after you cancel your subscription.
How good is Adobe Creative Cloud?
“#1 Design Software in Market” For all the features available in Adobe Creative Cloud – I absolutely love it! If you are a graphic designer, you need Adobe Creative Cloud as it is the #1 graphic designing software available in the market. Pros: One subscription – and I get access to 20+ awesome design software.
How do I pay for Adobe Creative Cloud?
Go to the Creative Cloud plans page. Click the Business tab, and then click Select Your Plans. Select your plan, and then click Buy Now. The available payment methods display at the bottom of the window.
Is Adobe Creative Cloud worth it 2020?
Is Adobe Creative Cloud Worth It? There’s a case to be made that it’s more expensive to pay for a subscription long-term, rather than paying for a single, permanent software license. However, the consistent updates, cloud services, and access to new features make Adobe Creative Cloud a fantastic value.
How much is Adobe Creative Cloud per month?
Adobe CC All Apps Plan Price All the apps cost £49.94/US$49.99 a month on the annual plan, £596.33/US$599.88 for an annual prepaid plan, or £75.85/US$74.99 on a month-by-month basis.
Are Adobe Stock Images worth it?
Utilizing Adobe Stock for your website or business, and paying the fee of $29.99 is worth it. The amount of photos within Adobe Stock is nothing less than amazing. The images are curated, so you will never have to worry about low quality stock photos.
Do you have to pay for Creative Cloud?
Adobe offers you a free Creative Cloud membership, which comes with a host of benefits. Even if you have not subscribed to a Creative Cloud plan, you can take advantage of the free Creative Cloud membership.
Does Adobe have a cloud?
Adobe Creative Cloud is a set of applications and services from Adobe Inc. that gives subscribers access to a collection of software used for graphic design, video editing, web development, photography, along with a set of mobile applications and also some optional cloud services.
Can I buy Adobe products without the cloud?
Now that Adobe no longer sells CS6 applications, you can get Photoshop only through a paid Creative Cloud membership. … The only non-subscription version of Photoshop currently for sale is Photoshop Elements, or you can use a non-Adobe Photoshop alternative. See below for more information about those options.
Why is Adobe so expensive?
There are many reasons why: Adobe’s consumers are mainly businesses and they can afford a larger cost than individual people, the price is chosen in order to make adobe’s products professional more than personal, the bigger your business is the most expensive it gets. …
Do I need Creative Cloud to use Acrobat?
No, in fact it’s optional – and your call. Creative Cloud makes updates available for those who want to install them, but the application manager will not automatically update your tools without your go-ahead…
Should I uninstall Creative Cloud?
Adobe does not recommend that you uninstall the Adobe Creative Cloud desktop app. If you must uninstall it, download the given uninstallers to uninstall the Creative Cloud desktop app.
What happens if you don’t pay Adobe subscription?
If a payment fails, additional payment attempts are made after the due date. If payment continues to fail, your Creative Cloud account becomes inactive and the paid features of your account are deactivated. Kindly Contact Customer Care for any additional information. It means the software won’t work.
How much does Creative Cloud cost?
The Creative Cloud subscription costs $50 a month for those who sign up for a year’s commitment, and it grants access to Adobe’s entire software suite and an expanding range of online services as long as customers keep paying.
What is the difference between Adobe Creative Cloud and Photoshop?
The Photoshop Photography Plan was a specialized version of a Creative Cloud Single App subscription. The Creative Cloud Photography Plan is an expansion of a Creative Cloud Free account. Those who signed up for the Photoshop Photography Program will continue to have access to everything they signed up for.
How accurate are apps that locate your device?
Discussion in 'Droid RAZR MAXX' started by rosariorose9, Apr 28, 2013.
1. rosariorose9
I heard someone on the radio who said they could, with their software, literally pinpoint the location of their misplaced device (in their case, an iPhone) to the exact room in their house where they had placed it. I find it hard to believe, with cell tower triangulation or gps, that such is possible. Is it, though? If not, how close are you all able to pinpoint yours?
2. Psykho
I use Google maps latitude on my DROID DNA and I can pin point my device it even shows what direction it is pointing ...
3. leeshor
I have Lookout Security installed and I would say it knows well within 20 feet, no matter where I've been. If it's lost, I'll find it.
4. evan2001
If you have wifi on, it is much better, even if you are not connected to a network. Google uses nearby wifi networks to figure out where you are.
5. rosariorose9
GREAT first post, tdeamicis! And thanks to all who responded to my question. I've downloaded Lookout, and so far I'm pretty impressed...
6. TheDroidURLookinFor
I used to use Wavesecure. It was great. It would give you the location of your device, you could remotely set off an alarm even if the sound was off so you could find it, and if you were rooted you could remotely turn on your GPS if it wasn't on. It even had a web UI. I was with them since it went into beta. Now McAfee took them over and... Meh.
Now I don't even bother... I never let it out of my pocket unless I am using it anyway, so I don't think I need an app to track it. Everything I have on it is in the cloud and backed up anyway, so if it goes away I get a new phone and I am up and running in no time.
Drawing B-Spline Curves using GDI+ in VB.NET
Posted by Avinash Pundit | Visual Basic .NET | November 08, 2012
The attached source code project draws spline curves between two points. It performs cubic spline fitting, meaning the program starts drawing the curve after four clicks.
To run the project, download and unzip the attached file, build and run the project and click on the form.
[Figure: draws-spline-curves.jpg]
Main source code:
Public Structure point
    Public x As Integer
    Public y As Integer

    Public Sub setxy(ByVal i As Integer, ByVal j As Integer)
        x = i
        y = j
    End Sub 'setxy

    Public Sub clearxy()
        x = 0
        y = 0
    End Sub 'clearxy
End Structure 'point

' Assumed form-level declarations (not shown in the original listing):
' Dim pt(3) As point                  ' the last four clicked control points
' Dim spline_out_x(1000) As Double    ' interpolated x coordinates
' Dim spline_out_y(1000) As Double    ' interpolated y coordinates
' Dim Movement_Click As Integer = 0   ' click counter

' Evaluate one cubic B-spline segment defined by four control points,
' writing 'divisions' interpolated points into spline_out_x/spline_out_y.
Sub BSPLINE(ByVal p1 As point, ByVal p2 As point, ByVal p3 As point, ByVal p4 As point, ByVal divisions As Integer)
    Dim a(4) As Double
    Dim b(4) As Double
    ' Uniform cubic B-spline basis coefficients, computed separately for x and y
    a(0) = (-p1.x + 3 * p2.x - 3 * p3.x + p4.x) / 6.0
    a(1) = (3 * p1.x - 6 * p2.x + 3 * p3.x) / 6.0
    a(2) = (-3 * p1.x + 3 * p3.x) / 6.0
    a(3) = (p1.x + 4 * p2.x + p3.x) / 6.0
    b(0) = (-p1.y + 3 * p2.y - 3 * p3.y + p4.y) / 6.0
    b(1) = (3 * p1.y - 6 * p2.y + 3 * p3.y) / 6.0
    b(2) = (-3 * p1.y + 3 * p3.y) / 6.0
    b(3) = (p1.y + 4 * p2.y + p3.y) / 6.0
    spline_out_x(0) = a(3)
    spline_out_y(0) = b(3)
    Dim i As Integer
    For i = 1 To divisions - 1
        Dim t As Single
        t = CSng(i) / CSng(divisions)
        ' Evaluate the cubic polynomial using Horner's rule
        spline_out_x(i) = a(3) + t * (a(2) + t * (a(1) + t * a(0)))
        spline_out_y(i) = b(3) + t * (b(2) + t * (b(1) + t * b(0)))
    Next i
End Sub 'BSPLINE

' Draw a small plus sign centered at (x, y)
Public Sub plus_draw(ByVal x As Integer, ByVal y As Integer, ByVal pen_width As Integer, ByVal cl As Color)
    Dim g As Graphics = Graphics.FromHwnd(Me.Handle)
    g.DrawLine(New Pen(cl, pen_width), x - 3, y, x + 3, y)
    g.DrawLine(New Pen(cl, pen_width), x, y - 3, x, y + 3)
End Sub 'plus_draw

Public Sub Form_MouseUp(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs)
    If Movement_Click > 3 Then
        ' Slide the window of four control points and append the new click
        pt(0) = pt(1)
        pt(1) = pt(2)
        pt(2) = pt(3)
        pt(3).setxy(e.X, e.Y)
        ' Roughly one interpolated point per pixel between the middle control points
        Dim no_of_interpolated_points As Integer = CInt(Math.Sqrt(Math.Pow(pt(2).x - pt(1).x, 2) + Math.Pow(pt(2).y - pt(1).y, 2)))
        BSPLINE(pt(0), pt(1), pt(2), pt(3), no_of_interpolated_points)
        Dim i As Integer
        For i = 0 To no_of_interpolated_points - 1
            plus_draw(CInt(spline_out_x(i)), CInt(spline_out_y(i)), 2, Color.Blue)
        Next i
    Else
        pt(Movement_Click).setxy(e.X, e.Y)
    End If
    Movement_Click = Movement_Click + 1
    plus_draw(e.X, e.Y, 1, Color.Red)
End Sub 'Form_MouseUp
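The excerpt does not show how Form_MouseUp gets attached to the form; one way (an assumption, since the full project is not reproduced here) is to wire it up in the form's Load event:
AddHandler Me.MouseUp, AddressOf Form_MouseUp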
|
__label__pos
| 0.977922 |
What are the best tools for speeding up a website?
Answer
Gaara
If you have already optimized your site as much as possible by compressing images, combining CSS/JavaScript files, and using some sort of cache system for dynamic content, and you still experience slow load times, then your hosting service is the likely culprit. No amount of optimization can overcome an overloaded network or an underpowered server. If you are running a dedicated server, I would suggest physical hardware upgrades such as more RAM or a faster CPU. You could also look into the LiteSpeed platform: it performs more efficiently and quickly than Apache while using fewer server resources.
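If the site runs on Apache, a minimal sketch of the compression and browser-caching rules mentioned above might look like this in .htaccess (standard mod_deflate/mod_expires directives; adjust the content types and lifetimes to the site):
<IfModule mod_deflate.c>
  # compress text assets before sending them to the browser
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
<IfModule mod_expires.c>
  # let browsers cache static assets instead of re-fetching them
  ExpiresActive On
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
</IfModule>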
Show HN: Spaik, a Lisp compiler/VM with a moving GC written in Rust (github.com/snyball)
5 points by snyball 4 days ago | 5 comments
The first compiler, vm, and "large" (>=500loc) Rust program I've written. Worked on it as part of a compiler class at Uni. Some of the GC parts are written in C because of my initial unfamiliarity with unsafe Rust.
It'd be great to get some feedback on the design, use of Rust, etc. given the fact that this was such a fun learning experience.
Awesome! Have you considered doing another step further -- emitting native code? :) I am a little biased because I've only ever written a Lisp -> machine code compiler, not bytecode, but I'd like to write a bytecode compiler one day.
I've considered emitting LLVM, or using C as a high-level assembly language, but not emitting native code directly. I've been wanting to write some simple arcade-like games using spaik, and it'd be a shame to either restrict it to a single architecture or have to maintain separate implementations. (That, and being able to benefit from the 100s of lifetimes spent optimizing LLVM and gcc code generation.)
Spaik also doesn't have an AST-level interpreter for evaluating macros, it compiles and runs the bytecode on-the-fly for macro-expansions declared and used in the same compilation unit. And I figured it'd be even more of a hassle with native code.
What’s a moving GC?
The GC may move allocated objects around, changing their pointers in the process. This is used in some garbage collectors (.NET and Haskell for example) to combat memory fragmentation, by compacting the memory to remove small gaps.
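To make that concrete, here is a toy sketch of the copy step in a semispace (moving) collector, written in Rust with arena indices standing in for raw pointers. This is purely illustrative and is not how SPAIK (or .NET/Haskell) actually implements it:

use std::collections::HashMap;

#[derive(Clone)]
enum Obj {
    Int(i64),
    Pair(usize, usize), // "pointers" are indices into a space
}

// Copy obj_idx from the from-space into the to-space, returning its new index.
// The `forwarded` map plays the role of the forwarding pointers a real GC
// leaves behind in the old object's slot.
fn copy(obj_idx: usize, from: &[Obj], to: &mut Vec<Obj>,
        forwarded: &mut HashMap<usize, usize>) -> usize {
    if let Some(&new_idx) = forwarded.get(&obj_idx) {
        return new_idx; // already moved: every old pointer gets the new address
    }
    let new_idx = to.len();
    to.push(from[obj_idx].clone()); // shallow copy first
    forwarded.insert(obj_idx, new_idx);
    // then rewrite interior "pointers" so they refer into the to-space
    if let Obj::Pair(a, b) = &from[obj_idx] {
        let (a, b) = (*a, *b);
        let na = copy(a, from, to, forwarded); // recursion keeps the toy short;
        let nb = copy(b, from, to, forwarded); // real collectors iterate instead
        to[new_idx] = Obj::Pair(na, nb);
    }
    new_idx
}

After a pass like this the to-space is densely packed, which is what eliminates the fragmentation.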
Orion/Documentation/Developer Guide/Plugging into the editor
Overview of contributing services to the Orion editor
The built-in Orion editor defines a number of services for customizing its appearance and behavior. These services will typically be defined by a plug-in providing editing functionality for different programming languages or file extensions. This section outlines the services that are available for editor customization.
The Editor Context object
Orion 4.0 introduces Object References, which enable two-way communication between a service provider and the host Orion page. An Object Reference exposes functions that a service provider can call to help it fulfill a service contract. Like everything in Orion's service framework, Object References work asynchronously: all functions return Promises and the caller must wait for them to fulfill to an actual value. An Object Reference is valid only during the lifetime of its parent service call. Once the provider has fulfilled the service call, any Object References created for that call are unregistered by the framework, and cannot be used thereafter.
Many of the service APIs documented on this page now provide a special Object Reference, called the Editor Context object, as the first parameter in their function signatures. The Editor Context object contains various functions to query the state of the Orion editor, and to cause side effects. For example, if a provider needs the text from the editor buffer to fulfill its service contract, it can invoke the Editor Context's getText() method:
editorContextObject.getText().then(function(text) {
// Use text to fulfill the provider's service contract
});
Any plugin that uses Object References must load Orion's Deferred.js in addition to the usual plugin.js script. Failure to do this will cause runtime errors when the plugin attempts to use an Object Reference.
API versions
The older API signatures are labelled as "Orion 3.0" in the documentation. These are still supported, but date from Orion releases when Object References were not available. It is preferable to use the "Orion 4.0" version of an API whenever available, as these provide greater consistency, are somewhat more efficient, and can be more easily evolved in future releases without changing method signatures.
Editor Context methods
The Editor Context object provides the following methods:
getCaretOffset()
Resolves to Number. Returns the offset of the editing caret.
getSelection()
Resolves to Selection. Returns the editor's current selection.
getText(start?, end?)
Resolves to String. Returns the text in the given range.
setCaretOffset(offset, show?)
Resolves to undefined. Sets the caret offset. If show is true, the editor will scroll to the new caret position.
setSelection(selection)
Resolves to undefined. Sets the editor's selection.
setText(text, start?, end?)
Resolves to undefined. Sets the text in the given range.
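For instance, a provider could combine these methods to wrap the current selection in markers and then reposition the caret (a hypothetical snippet, not taken from the Orion source):

editorContext.getSelection().then(function(selection) {
    return editorContext.getText(selection.start, selection.end).then(function(text) {
        // replace the selected range with a wrapped version
        return editorContext.setText("<<" + text + ">>", selection.start, selection.end);
    }).then(function() {
        // place the caret after the inserted text and scroll it into view
        return editorContext.setCaretOffset(selection.end + 4, true);
    });
});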
orion.edit.command
The orion.edit.command service allows plugins to provide a command that operates on the editor. Typically, the command takes some editor text as input, performs some operation or transformation on the text, and generates a new text value. The command can also optionally receive and return selection information for changing the editor selection. The transformation can happen directly, or indirectly through a delegated UI provided by the plugin.
Service methods (Orion 4.0)
The 4.0 version of the command API allows a command to take arbitrary action using the editorContext.
execute(editorContext, options)
editorContext ObjectReference The Editor Context object.
options Object
options.contentType String The Content Type ID of the file being edited.
options.input String Path and filename being edited.
The execute() method should return a Promise, which the provider is expected to resolve when it has finished performing the command action. The command action should be performed using the Editor Context object: for example, use editorContext.setText() to change the editor text, or editorContext.setSelection() to change the editor selection. The fulfillment value of the promise is ignored.
Note: Future versions of Orion will expose delegated UI functionality through the EditorContext object. This is currently not supported, due to Bug 419764.
Service methods (Orion 3.0)
run(selectedText, text, selection, resource)
selectedText String. The text that is currently selected in the editor.
text String. Provides the entire buffer being edited.
selection orion.editor.Selection The current selection in the editor.
resource String The full path and filename of the file being edited.
The return value of run() is a CommandResult object.
The CommandResult object
A CommandResult object is either a simple String which will replace the current editor selection, or an object.
• The object must either have a text property or a uriTemplate property.
• If it has a text property, then the text is a replacement string for the entire editor buffer.
• If it has a uriTemplate property, then a delegated UI iframe will be opened on the specified URI.
• It may optionally have a width and/or height property that describes the desired size of the UI. Width and height are specified in CSS units, such as "100px" or "50em". The delegated UI must post a message back to the host window with an object that identifies itself as a delegatedUI and contains a result property that describes the new selection text or the replacement text object. (See example).
• It may optionally have a selection object indicating the new selection value.
• It may optionally have a status field giving status information to show in the notification area.
The Status object
A Status object has the following fields:
Severity String. Allowed values are: "Warning", "Error".
Message String. The status message to display. May include hyperlinks, given in Markdown syntax.
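For example, a CommandResult that replaces the buffer and also reports a warning might look like this (all values are illustrative):

{
    text: newContents,
    status: {
        Severity: "Warning",
        Message: "Trailing whitespace was removed. [Details](http://example.org/help)"
    }
}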
Service attributes
Implementations of orion.edit.command must define the following attributes:
name
String The command text to show to the user.
id
String The id of the command contribution.
tooltip
String Optional. A tooltip describing the command.
img
String Optional. The URL of an icon to associate with the command. The icon may not appear in all situations. For example, the main toolbar may not show the icon, but a menu item might show the icon.
key
Array Optional. A key binding for the command. The structure of this array matches the arguments of the orion.textview.KeyBinding constructor. See its entry in the Client API reference for details.
validationProperties
ValidationProperty[] Optional. An array of Validation Properties that must match the editor's file in order for the command to appear.
contentType
String[] Optional. An array of Content Type IDs for which this command is valid.
Examples
The following examples start with the simplest editor command and then add more complexity.
Replacing the selection
This example converts the selected text to upper case. The function return value is a simple string, so this is interpreted by the editor as replacement for the original editor selection. In the service properties, we see the command provides a key binding of Ctrl+U (or Cmd+U on Mac).
var provider = new eclipse.PluginProvider();
provider.registerServiceProvider("orion.edit.command", {
  run : function(text) {
    return text.toUpperCase();
  }
}, {
  name : "UPPERCASE",
  id : "uppercase.example",
  img : "/images/gear.gif",
  key : [ "u", true ]
});
provider.connect();
Replacing the editor contents
This example takes the selection and wraps it in C-style block comments. In this example the function returns a complex object with both text and selection fields. These are interpreted by the editor as the new editor buffer contents, and the new editor selection. A content type is used so that this command is only available for javascript files.
contentType: ["application/javascript"],
run : function(selectedText, text, selection) {
return {text: text.substring(0,selection.start) + "/*" +
text.substring(selection.start,selection.end) + "*/" +
text.substring(selection.end),
selection: {start:selection.start,end:selection.end+4}};
}
Delegating a UI before manipulating the editor
Here is an example of a delegated UI run function that computes a URL for the delegated UI based on the file name of the edited file. In this example, the function returns a complex object with a uriTemplate field and width and height properties. The UI that is opened will be responsible for posting a message back to the editor with a result object that contains either a String for the selected text or a complex object with replacement content.
id: "delegatedUI.example",
run : function(selectedText, text, selection, fileName) {
  return {uriTemplate: "http://com.example/myDelegatedUI#" + fileName, width: "600px", height: "400px"};
}
The delegated UI would post a message identifying itself and including a result. The message must include a pageService property of "orion.page.delegatedUI", a source that matches the orion.edit.command service id, and either a result or a cancelled property. The following examples illustrate the different ways the result could be returned.
/* a message containing replacement selected text */
window.parent.postMessage(JSON.stringify({
  pageService: "orion.page.delegatedUI",
  source: "delegatedUI.example",
  result: replacementSelection
}), "*");

/* a message containing new content for the editor */
window.parent.postMessage(JSON.stringify({
  pageService: "orion.page.delegatedUI",
  source: "delegatedUI.example",
  result: JSON.stringify({text: replacementText})
}), "*");

/* a message signifying user cancellation of the delegated UI */
window.parent.postMessage(JSON.stringify({
  pageService: "orion.page.delegatedUI",
  source: "delegatedUI.example",
  cancelled: true
}), "*");
Google Picker example
The Google Picker is a fully functioning example of a delegated UI in an editor command. It opens a Google Picker allowing the user to pick a resource, and then inserts a link to that resource into the editor text. To install the plug-in, open this link. The code is available here.
orion.edit.contentAssist
The orion.edit.contentAssist service contributes content assist providers to the editor. A content assist provider produces suggestions for text that may be inserted into the editor at a given point. Providers are invoked when the user triggers the "content assist" action by pressing Ctrl+Space in the editor.
Service methods (Orion 4.0)
computeContentAssist(editorContext, options)
editorContext ObjectReference The Editor Context object.
options Object
options.delimiter String The line delimiter being used in the editor (CRLF, LF, etc.)
options.indentation String The leading whitespace at the start of the line.
options.line String The text of the line.
options.offset Number The offset at which content assist is being requested. Relative to the document.
options.prefix String The substring extending from the first non-word character preceding the editing caret up to the editing caret. This may give a clue about what the user was in the process of typing. It can be used to narrow down the results to be returned. The prefix is just a guess; it is not appropriate for all types of document, depending on their syntax rules.
options.selection orion.editor.Selection The current selection in the editor.
options.tab String The tab character being used in the editor. Typical values are a Tab character, or a sequence of four spaces.
Returns Proposal[].
Service methods (Orion 3.0)
computeProposals(buffer, offset, context)
When content assist is triggered, the editor calls this function to obtain suggestions from a content assist provider.
buffer String The entire buffer being edited.
offset Number Offset in the text buffer at which content assist is being invoked.
context Object Additional contextual information about the content assist invocation. This object has the following properties:
context.line String Text of the entire line that the editing caret is on.
context.prefix String The substring extending from the first non-word character preceding the editing caret up to the editing caret. This may give a clue about what the user intended to type, and can be used to narrow down the results to be returned. The prefix is just a guess; it is not appropriate for all types of document, depending on their syntax rules.
selection orion.editor.Selection The current selection in the editor.
Returns Proposal[].
The Proposal object
A Proposal object has the following properties:
description String Description text for this proposal. Will be shown in the content assist popup.
escapePosition Number Optional. Gives the offset, relative to the document, where the cursor should be placed after the proposal is inserted. If this value is not supplied, the cursor will be positioned at the end of the inserted text.
overwrite Boolean Optional, defaults to false. If true, this proposal's text will overwrite the prefix that was passed to computeProposals().
positions Object[] Optional. An optional array of positions within the completion proposal that require user input. Supplying this property will cause the editor to enter linked mode, and the user can use the Tab key to iterate through the regions of the proposal that require user input. For example if the completion is a function, the positions could indicate the function arguments that need to be supplied. Entries in this position array must be objects with two integer properties: offset, and length describing the regions requiring user input.
proposal String Completion text that will be inserted in the editor if chosen by the user. The text is inserted at the offset that was passed to computeProposals().
style String Optional. Gives styling information for the proposal. The available styles are: "default" (no styling, also used if this property is not present), "emphasis" (proposal displayed in bold), "noemphasis" (proposal is greyed out with a colored background), "hr" (proposal displayed as a <hr/> and is not selectable by up and down arrows).
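For instance, a proposal that inserts a function skeleton and uses linked mode for its argument could look like the following sketch, where offset is the offset passed to the provider and all values are illustrative:

{
    proposal: "function (name) {\n\n}",
    description: "function - function statement",
    positions: [{offset: offset + 10, length: 4}], // the "name" argument
    escapePosition: offset + 18 // caret ends up between the braces
}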
Service attributes
Implementations of orion.edit.contentAssist must define the following attributes:
name
String Name for the content assist provider.
contentType
String[] An array of Content Type IDs that this provider can provide content assist for. The provider's computeProposals function will be called only for files having one of these content types.
Examples
The example below provides content assist suggestions for files whose name ends in .js. It offers JavaScript keywords as suggestions, by checking them against the prefix provided by the content assist engine.
var provider = new orion.PluginProvider();
provider.registerServiceProvider("orion.edit.contentAssist",
  {
    computeProposals: function(buffer, offset, context) {
      var keywords = [ "break", "case", "catch", "continue", "debugger", "default", "delete", "do", "else",
                       "finally", "for", "function", "if", "in", "instanceof", "new", "return", "switch",
                       "this", "throw", "try", "typeof", "var", "void", "while", "with" ];
      var proposals = [];
      for (var i=0; i < keywords.length; i++) {
        var keyword = keywords[i];
        if (keyword.indexOf(context.prefix) === 0) {
          proposals.push({
            proposal: keyword.substring(context.prefix.length),
            description: keyword
          });
        }
      }
      return proposals;
    }
  },
  {
    name: "JavaScript content assist",
    contentType: ["application/javascript"]
  });
provider.connect();
The example below will provide completion on the character 'a' that will insert an HTML anchor element. After completion the cursor will be positioned within the href attribute.
var provider = new orion.PluginProvider();
provider.registerServiceProvider('orion.edit.contentAssist',
  {
    computeProposals: function(buffer, offset, context) {
      var proposals = [];
      if (context.prefix === 'a') {
        proposals.push({
          proposal: ' href=""></a>',
          description: '<a></a> - HTML anchor element',
          escapePosition: offset+7});
      }
      return proposals;
    }
  },
  {
    name: 'HTML content assist',
    contentType: ['text/html']
  });
provider.connect();
Here is an identical example to the HTML provider, but written against the Orion 4.0 API.
// Note that orion/Deferred is an implied dependency of orion/plugin here, because we are using an object reference.
define(["orion/plugin"], function(PluginProvider) {
  var provider = new PluginProvider();
  provider.registerServiceProvider('orion.edit.contentAssist',
    {
      computeContentAssist: function(editorContext, options) {
        var proposals = [];
        if (options.prefix === 'a') {
          proposals.push({
            proposal: ' href=""></a>',
            description: '<a></a> - HTML anchor element',
            escapePosition: options.offset + 7});
        }
        return proposals;
      }
    },
    {
      name: 'HTML content assist',
      contentType: ['text/html']
    });
  provider.connect();
});
More advanced content assist providers will generally use the buffer text, possibly parsing the file into an Abstract Syntax Tree (AST).
orion.edit.editor
This service declares a new editor. By default, the Orion client UI declares a single editor with id "orion.editor" which is used to edit source code. Using this service, you can declare entirely new editors (for example, you could register an editor that provided a paint interface for drawing images).
Contributions to this service do not directly affect the Orion UI. Instead, this service is typically used in combination with two other services, which allow new file types to be defined and associated with editors. See orion.core.contenttype and orion.navigate.openWith.
Service methods
None. This service is purely declarative.
Service attributes
id
String The unique identifier of this editor.
name
String The user-readable name of this editor.
uriTemplate
String Gives a URI template for constructing a URL that can be followed to drive this editor to a particular file. The parameter Location is substituted with the URL of the file being edited. The template is specified using the URI Template syntax.
orionTemplate
String Optional. Gives an Orion template for constructing the editor URL. This serves the same purpose as the uriTemplate field. However an Orion template allows a more human-readable parameter encoding scheme than a URI Template. If both fields are provided, the orionTemplate takes priority over the uriTemplate.
NOTE: Orion templates are not yet standardized.
Examples
This example code declares an editor called "My Great Editor". When My Great Editor is used to edit a file in Orion, the user will be pointed to a URL containing the location of the file they want to edit as "fileToEdit" in the query portion of the URL. Presumably myGreatEditor.php would read the string and open the file. Authentication is beyond the scope of this example.
var provider = new eclipse.PluginProvider();
provider.registerServiceProvider("orion.edit.editor", {},
  { id: "example.mygreateditor",
    name: "My Great Editor",
    uriTemplate: "http://mysite.com/myGreatEditor.php?fileToEdit={Location}"
  });
The code below shows a complete example of how to use the orion.editor, orion.core.contenttype, and orion.navigate.openWith services in conjunction to declare a new editor, declare new file types, and associate them together. The example is adapted from Orion's own source code.
// Declare an editor
provider.registerServiceProvider("orion.edit.editor", {}, {
  id: "orion.editor",
  name: "Orion Editor",
  uriTemplate: "../edit/edit.html#{Location,params*}",
  orionTemplate: "../edit/edit.html#{,Location,params*}"});

// Declare content types
provider.registerServiceProvider("orion.core.contenttype", {}, {
  contentTypes:
    [{ id: "text/plain",
       name: "Text",
       extension: ["txt"]
     },
     { id: "text/html",
       "extends": "text/plain",
       name: "HTML",
       extension: ["html", "htm"]
     }]
});

// Associate editor with content types
provider.registerServiceProvider("orion.navigate.openWith", {}, {
  editor: "orion.editor",
  contentType: ["text/plain", "text/html"]});

provider.connect();
Note that the order of these registerServiceProvider() calls is not important.
orion.edit.highlighter
The orion.edit.highlighter service contributes syntax highlighting rules to the editor. A highlighter service may provide highlighting in one of two ways:
• By passing a grammar, which is a declarative description of a language's syntax. The grammar tells the Orion editor how to recognize and style language constructs in a file.
• By writing a highlighter, which allows highlighting information to be calculated asynchronously by the provider itself and sent to the Orion editor for display.
The service also provides a list of content types. When the editor opens a file of a registered content type, the provider is invoked (using one of the two methods described above) to obtain the styling.
NOTE: The "highlighter" API is experimental and subject to change in future versions.
Service methods
Implementations of orion.edit.highlighter whose type attribute is "highlighter", must define the following method:
setContentType(contentTypeId)
contentTypeId String The Content Type ID of the file that is being edited.
Orion invokes this method to inform the provider what kind of file it must provide highlighting for. This allows the provider that to register itself with several content types, but implement different logic for each type.
When this provider's type is "grammar", no service methods are defined: a grammar provider is purely declarative.
Service attributes
Implementations of orion.edit.highlighter must define the following attributes:
type
String What kind of highlight provider is being registered. Allowed values are "grammar" and "highlighter". Future versions may support more.
contentType
String[] An array of Content Type IDs that this provider will be used for.
grammar
Object Optional. When the type of this provider is "grammar", this attribute holds an object giving the grammar to be used to assign style classes. This object is a JavaScript equivalent of the format described here.
Service events
When the type of the provider is "highlighter", the provider must independently listen to changes in the Orion text editor by registering with the orion.edit.model service, and calculate the necessary highlighting information in response to the changes. Whenever highlighting information is available, the provider must dispatch an event of type "orion.edit.highlighter.styleReady" containing the styles. The event will be used by the Orion editor to apply styles to the file being displayed.
orion.edit.highlighter.styleReady
This event is documented in the Orion Client API reference as orion.editor.StyleReadyEvent. Consult its entry there for detailed information.
When the type of the provider is "grammar", the provider dispatches no service events.
Example of a 'grammar' provider
var provider = new eclipse.PluginProvider();
provider.registerServiceProvider("orion.edit.highlighter",
  {
    // "grammar" provider is purely declarative. No service methods.
  }, {
    type : "grammar",
    contentType: ["text/html"],
    grammar: {
      patterns: [
        { begin: "<!--",
          end: "-->",
          captures: { "0": "punctuation.definition.comment.html" },
          contentName: "comment.block.html"
        }
      ]
    }
  });
provider.connect();
The above example provides a grammar to be used for HTML files. It will assign the CSS class punctuation-definition-comment-html to the <!-- and --> delimiters, and assign the CSS class comment-block-html to the text inside the delimiters. Consult this reference for a full description of the grammar format.
(Note that some aspects of the grammar format are not supported. See orion.editor.TextMateStyler in the Client API reference for a detailed explanation.)
Example of a 'highlighter' provider
See the source code of the orion-codemirror plugin, particularly these lines.
orion.edit.model
An orion.edit.model service provides listeners on changes made to the orion.textview.TextView that powers the Orion editor.
Service methods
An implementation of orion.edit.model may define zero or more functions depending on what event types it wants to receive. When an event of type X is dispatched by the TextView, the implementation's service method named onX will be invoked and passed the event. For example, a "ModelChanged" event causes the provider's "onModelChanged()" method to be invoked.
The methods are always invoked with a single parameter, event, containing the event data that was dispatched by the TextView. The return value is ignored.
The current list of supported onXXXX methods is as follows:
• onContextMenu(event)
• onDragStart(event)
• onDragEnd(event)
• onDragEnter(event)
• onDragOver(event)
• onDragLeave(event)
• onDragStop(event)
• onModelChanging(event)
• onModelChanged(event)
• onModify(event)
• onMouseDown(event)
• onMouseUp(event)
• onMouseMove(event)
• onMouseOver(event)
• onMouseOut(event)
• onScroll(event)
• onVerify(event)
• onFocus(event)
• onBlur(event)
Consult the TextView Client API reference for details about these event types.
Service attributes
Implementations of orion.edit.model must define the following attributes:
contentType
String[] An array of Content Type IDs that this provider wants to receive events for. The provider will only be notified of events that occur when the file being edited matches (or descends from) a content type given in the array.
Example 1
The following example prints out some information to the browser console when certain text events occur while a JavaScript file is being edited.
var provider = new orion.PluginProvider();
provider.registerService("orion.edit.model",
  {
    onModelChanging: function(event) {
      console.log("Text is about to be inserted: " + event.text);
    },
    onScroll: function(event) {
      console.log("Editor scrolled to " + event.newValue.x + ", " + event.newValue.y);
    }
  },
  {
    contentType: [ "application/javascript" ]
  });
provider.connect();
Example 2
See the source code of the orion-codemirror plugin, which uses onModelChanging to build a shadow copy of the Orion text buffer, which it then uses to perform syntax highlighting.
orion.edit.occurrences
The orion.edit.occurrences service allows plugins to compute identifier occurrences for specific content types.
Service methods
Implementations of orion.edit.occurrences must define the following function:
computeOccurrences(editorContext, context)
editorContext is an orion.edit.EditorContext object giving access to the current editor.
context is an object that contains the current selection in the editor for which occurrences should be found.
The return value (or fulfillment value) is an Array of top-level occurrence objects, which will be automatically marked in the editor.
The Occurrence object
Each occurrence object has these properties:
start Number The offset into the file for the start of the occurrence
end Number The offset into the file for the end of the occurrence
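For example, if the identifier under the cursor occurs three times in the file, the provider might return an array like this (offsets are illustrative):

[
    { start: 5, end: 10 },
    { start: 42, end: 47 },
    { start: 88, end: 93 }
]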
Service attributes
Implementations of orion.edit.occurrences may define the following attributes:
contentType
String[] An array of Content Type IDs for which this occurrence computer is valid.
pattern
String A string used to create a regular expression that determines whether this occurrence service applies to the current context. This attribute has been deprecated in favor of contentType.
Examples
The following example is how Orion plugs in occurrence support for JavaScript:
var provider = new orion.PluginProvider();
provider.registerService('orion.edit.occurrences',
  {
    computeOccurrences: function(editorContext, context) {
      // A real implementation would parse the file and return
      // {start, end} ranges for each occurrence of the selected identifier.
      return [];
    }
  },
  {
    contentType: ["application/javascript"]
  });
provider.connect();
orion.edit.outliner
An orion.edit.outliner service provides an overview of a file being edited. The overview is given as a tree, which the Orion UI renders in the left-hand pane alongside the file you are editing. Items in the tree can be links that take you to the appropriate position in the file, or to another URL entirely.
Service methods (Orion 4.0)
A provider implements the computeOutline method, whose signature is as follows:
computeOutline(editorContext, options)
editorContext ObjectReference The Editor Context object.
options Object
options.contentType String The Content Type ID of the file being edited.
The return value (or fulfillment value) is an Array of top-level OutlineElement objects, which will be displayed in the outline pane.
Service methods (Orion 3.0)
A provider implements the getOutline method, whose signature is as follows:
getOutline(contents, title)
contents String The contents of the file being edited.
title String The path and filename of the file being edited.
Returns an Array of top-level OutlineElement objects, which will be displayed in the outline pane.
The OutlineElement object
Each OutlineElement has these properties:
label String Text to be shown in the UI for this element.
className String Optional A space-separated list of CSS class names to be applied to this element in the UI.
children OutlineElement[] Optional Array of child OutlineElements of this element. Children may be nested to an arbitrary depth.
line Number Optional The line number within the file to use as the link for this element in the UI. Line numbers begin counting from 1.
The optional properties column, start, end, text may be provided for finer-grained control. (Consult the orion.util.hashFromPosition() documentation in the Client API reference for details about these parameters.)
href String Optional When line is omitted, the href property provides a URL to use as the link.
Service attributes
Implementations of orion.edit.outliner must define the following attributes:
contentType
String[] An array of Content Type IDs giving the types of files that this outliner can provide an outline for.
id
String A unique identifier for this outline provider.
name
String A user-readable name for this outline provider.
Examples
This example shows an outline provider that runs on .txt files. It finds Mediawiki-style =Section Headings= and generates a flat outline from them. (A more elaborate implementation might also find subsections and include them as children of the top-level sections.)
var provider = new eclipse.PluginProvider();
provider.registerServiceProvider("orion.edit.outliner", {
getOutline: function(contents, title) {
var outline = [];
var lines = contents.split(/\r?\n/);
for (var i=0; i < lines.length; i++) {
var line = lines[i];
var match = /^=\s*(.+?)\s*=$/.exec(line);
if (match) {
outline.push({
label: match[1],
line: i+1 // lines are numbered from 1
});
}
}
return outline;
}
}, {
contentType: ["text/plain"],
name: "Headings",
id: "orion.outliner.example.headings"
});
provider.connect();
orion.edit.validator
An orion.edit.validator service provides a function that can check the contents of a file and return a data structure indicating where problems are. The result of this service is used by the Orion UI to create annotations in the ruler beside each problematic line, and also to underline the specific portion of the document where the problem occurs.
Service methods (Orion 4.0)
computeProblems(editorContext, options)
editorContext ObjectReference The Editor Context object.
options Object
options.contentType String The Content Type ID of the file being edited.
options.title String The path and filename of the file being edited.
Returns (or fulfills to) an Object giving the validation result. The returned object must have a problems property giving an Array of problems found in the file.
Service methods (Orion 3.0)
checkSyntax(title, contents)
title String The path and filename of the file being edited.
contents String The contents of the file being edited.
Returns an Object giving the validation result. The returned object must have a problems property whose value is an array giving the problems found in the file.
The Problem object
A Problem object has the following properties:
description String A description of the problem.
severity String Optional. Gives the severity of this problem. The severity affects how the problem is displayed in the Orion UI. Allowed values are "warning" and "error". (If omitted, "error" is assumed.)
A problem will have additional properties that give its location within the file. The location can be specified using line+column, or using offsets.
For a line-based problem, you provide a line number and columns:
line Number The line number where the problem was found. (Line numbers begin counting from 1.)
start Number The column within the line where the problem begins. (Columns begin counting from 1.)
end Number Optional The column within the line where the problems ends. (If omitted, start+1 is assumed.)
For a document-based problem, you provide character offsets:
start Number The offset at which the problem begins. (0=first character in the document.)
end Number Optional The offset at which the problem ends. (If omitted, start+1 is assumed.)
A document-based problem can span several lines.
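To make the two addressing schemes concrete, here are two illustrative Problem literals (all values made up):
// Line-based: line and columns count from 1.
var lineProblem = {
    description: "Mixed spaces and tabs",
    severity: "warning",
    line: 12,
    start: 1,
    end: 5
};
// Document-based: character offsets count from 0 and may span lines.
var documentProblem = {
    description: "Unterminated comment",
    severity: "error",
    start: 340,
    end: 372
};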
Service attributes
Implementations of orion.edit.validator must define the following attributes:
contentType
String[] An array of Content Type IDs giving the types of files that this validator is capable of validating.
Examples
var provider = new eclipse.PluginProvider();
var serviceProvider = provider.registerServiceProvider("orion.edit.validator",
{
checkSyntax: function(title, contents) {
var problems = [];
var lines = contents.split(/\r?\n/);
for (var i=0; i < lines.length; i++) {
var line = lines[i];
var match = /\t \t| \t /.exec(line);
if (match) {
problems.push({
description: "Mixed spaces and tabs",
line: i + 1,
start: match.index + 1,
end: match.index + match[0].length + 1,
severity: "warning" });
}
}
var result = { problems: problems };
return result;
}
},
{
contentType: ["application/javascript"]
});
provider.connect();
This example will validate JavaScript files. It finds lines containing a sequence of space-tab-space or tab-space-tab and produces a warning on every such line. Note that +1 is necessary because column and line indices in the Orion UI are numbered from 1, not 0.
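The same checker rewritten against the Orion 4.0 computeProblems API might look like the following sketch, again assuming editorContext.getText() resolves to the file contents:
provider.registerServiceProvider("orion.edit.validator", {
    computeProblems: function(editorContext, options) {
        return editorContext.getText().then(function(contents) {
            var problems = [];
            var lines = contents.split(/\r?\n/);
            for (var i = 0; i < lines.length; i++) {
                var match = /\t \t| \t /.exec(lines[i]);
                if (match) {
                    problems.push({
                        description: "Mixed spaces and tabs",
                        line: i + 1,
                        start: match.index + 1,
                        end: match.index + match[0].length + 1,
                        severity: "warning"
                    });
                }
            }
            return { problems: problems };
        });
    }
}, {
    contentType: ["application/javascript"]
});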
Using class-attribute as key to sort?
cnb circularfunc at yahoo.se
Fri Aug 29 20:22:22 CEST 2008
In medianGrade, can I sort on Review.grade somehow?
something like:
sr = self.reviews.sort(key=self.reviews.instance.grade)
class Review(object):
    def __init__(self, movieId, grade, date):
        self.movieId = movieId
        self.grade = grade
        self.date = date

class Customer(object):
    def __init__(self, idnumber, review):
        self.idnumber = idnumber
        self.reviews = [review]

    def addReview(self, review):
        self.reviews.append(review)

    def averageGrade(self):
        tot = 0
        for review in self.reviews:
            tot += review.grade
        return tot / len(self.reviews)

    #general sort that takes list+spec of objectkey to sort on?
    def sortReviews(self):
        def qsort(lista):
            if len(lista) != 0:
                return qsort([x for x in lista[1:] if x.grade < lista[0].grade]) + \
                       [lista[0]] + \
                       qsort([x for x in lista[1:] if x.grade >= lista[0].grade])
            else:
                return []
        return qsort(self.reviews)

    def medianGrade(self):
        length = len(self.reviews)
        sr = self.sortReviews()
        #sr = self.reviews.sort(key=self.reviews.instance.grade)
        if length % 2 == 0:
            return (sr[int(length/2)].grade + sr[int(length/2-1)].grade) / 2
        else:
            if length == 1:
                return sr[0].grade
            else:
                return sr[int(round(length/2)-1)].grade
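For the record, what the commented-out line is reaching for is the key= parameter of sorted(); note that list.sort() sorts in place and returns None, so assigning its result would fail. A sketch, dropped into medianGrade in place of self.sortReviews():
import operator

# Returns a new list of reviews ordered by their grade attribute.
sr = sorted(self.reviews, key=operator.attrgetter('grade'))

# Equivalent spelling with a lambda:
sr = sorted(self.reviews, key=lambda review: review.grade)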
pg_fetch_row
(PHP 4, PHP 5)
pg_fetch_row - Fetches a row into an array
Description
array pg_fetch_row ( resource $result , int $row )
pg_fetch_row() fetches one record from the query result associated with the resource identified by result. The row (record) is returned as an array. Each field is identified by a numeric index, starting at 0.
Returns an array corresponding to the fetched row, or FALSE if there are no more rows.
Example #1 Postgres fetch row
<?php
$conn = pg_pconnect("dbname=editori");
if (!$conn) {
    echo "An error occurred.\n";
    exit;
}
$result = pg_query($conn, "SELECT * FROM autori");
if (!$result) {
    echo "An error occurred.\n";
    exit;
}
while ($row = pg_fetch_row($result)) {
    for ($j = 0; $j < count($row); $j++) {
        echo "$row[$j] ";
    }
    echo "<BR>";
}
?>
See also pg_query(), pg_fetch_array(), pg_fetch_object() and pg_fetch_result().
Note:
As of version 4.1.0, row is optional. Each call to pg_fetch_row() increments the internal row pointer by 1.
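Building on that note, a minimal sketch that simply omits the row argument (assuming the $result resource from the example above):
<?php
// Each call advances the internal row pointer by one;
// the loop ends when pg_fetch_row() returns FALSE.
while ($row = pg_fetch_row($result)) {
    echo implode(' ', $row), "<BR>";
}
?>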
User Contributed Notes (7 notes)
pletiplot at seznam dot cz (11 years ago):
Note, that when you retrieve some PG boolean value, you get 't' or 'f' characters which are not compatible with PHP bool.
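A common workaround for that, sketched here with a hypothetical boolean column at index 2:
<?php
// PostgreSQL booleans arrive as the strings 't' or 'f';
// map them to a PHP bool explicitly.
$isActive = ($row[2] === 't');
?>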
eddie at eddiemonge dot com (7 years ago):
pg_fetch_row is faster than pg_fetch_assoc when doing a query with * as the select parameter. Otherwise, with declared columns, the two are similar in speed.
post at zeller-johannes dot de (12 years ago):
I wondered whether array values of PostgreSQL are converted to PHP arrays by this functions. This is not the case, they are stored in the returned array as a string in the form "{value1 delimiter value2 delimiter value3}" (See http://www.postgresql.org/docs/8.0/interactive/arrays.html#AEN5389).
darw75 at swbell dot net (15 years ago):
a way to do this with 2 loops to insert data into a table...
$num = pg_numrows($result);
$col_num = pg_numfields($result);
for ($i=0; $i<$num; $i++) {
$line = pg_fetch_array($result, $i, PGSQL_ASSOC);
print "\t<tr bgcolor=#dddddd>\n";
for ($j=0; $j<$col_num; $j++){
list($col_name, $col_value) =each($line);
print "\t\t<TD ALIGN=RIGHT><FONT SIZE=1 FACE='Geneva'>$col_value</FONT></TD>\n";
}
echo "<br>";
}
Matthew Wheeler (14 years ago):
Note that the internal row counter is incremented BEFORE the row is retrieved. This causes an off by one error if you try to do:
pg_result_seek($resid,0);
pg_fetch_row($resid);
you will get back the SECOND result not the FIRST.
maxnamara at yahoo dot com (12 years ago):
Get downlines, put them into arrays.
function get_downlines($my_code){
global $link;
$sql = "select user_id, name from tb_user where parentcode = $my_code";
$res = pg_query($link,$sql);
if(!$res){
echo "Error: ".$sql;exit();
}
$num_fields = pg_num_fields($res);
$info_rows = 0;
$num_rows = pg_num_rows($res);
while($arr = pg_fetch_row($res)){
$info_offset = 1;
$info_columns = 0;
while ($info_offset <= $num_fields) {
$info_elements[$info_rows][$info_columns] = $arr[$info_columns];
$info_offset++; $info_columns++;
}
$info_rows++;
}
return $info_elements;
}
imantr at cbn dot net dot id (14 years ago):
I use the following code to assigning query result to an array.
while ($row = pg_fetch_row($result)) $newArray[] = $row[0];
print_r($newArray);
How To Fix Your Mac Can’t Connect To iCloud Issue
If you use your Mac every day, it’s likely that various iCloud services have become integral to your workflow. There’s iCloud Drive on Mac for all the files and folders to be safely backed up online and easily accessible across your Apple devices. There’s Mail, Calendar, Contacts, Notes, Reminders, FaceTime, Messages, Photos, as well as Pages, Numbers, and Keynote. There’s even geo-tagging functionality you can use with your friends and a special tracker for all your MacBooks, iPads, and iPhones.
In some cases, however, you might find out that you can't sign into iCloud on Mac, getting an unknown error occurred iCloud message. This could be especially frustrating since not being able to access iCloud for Mac has the potential to ruin your day. So how do you resolve the issue? Let’s start with why the iCloud unknown error appears in the first place.
Why Your Mac Can’t Connect To iCloud
It all happens so suddenly and seemingly out of nowhere. You might be working on some important project when you get redirected to your Desktop and the message saying “This Mac can't connect to iCloud because of a problem with email or password or Apple ID” appears.
The first thing you need to do to verify that this problem isn’t just a glitch is to open System Preferences ➙ Apple ID. If you get the error along the lines of “account details could not be opened because of an error connecting to iCloud” it means that something is in fact wrong and you should proceed to finding an optimal solution.
As the error is quite uncertain and ambiguous — it could be Apple’s servers that are down or your internet connection or some sort of a macOS error — you need to try a few different approaches to come up with a robust and holistic solution.
Restart your Mac
The easiest thing to do when you encounter any systemic issue is to simply restart your Mac. Often, various scripts and processes in your Mac could start interfering with each other, causing one or more apps to display some sort of errors. Restarting the macOS would restart all those processes and hopefully resolve any issues.
To restart your Mac properly:
1. Quit all the active apps, either by selecting Quit from the menu bar or using the ⌘ + Q shortcut
2. Click on the Apple icon in the menu bar and choose Restart…
3. Keep the "Reopen windows when logging back in" box unchecked
4. Hit Restart
Test your internet connection
If restarting didn’t help, the problem could be with your internet connection being unstable. In case you use WiFi, try to switch to an Ethernet cable to see if you get the same error message. Then test your internet speed overall:
1. Visit speedtest.net
2. Click Go
Your results should roughly be in the vicinity of the stated speed on your internet service provider (ISP) plan. If your speed test wouldn’t complete, call your ISP for further guidance.
Check Apple’s server status
When you made sure that your internet is fine, it could also be that there’s a rare glitch in the Apple’s servers themselves, which, although rare, does actually happen:
1. Visit apple.com/support/systemstatus
2. Notice if any of the iCloud for Mac services don’t have the green circle next to them
If some services are unavailable and there's no explanation for when they are expected to be back online, contact Apple support for more information.
Change your Apple ID password
The next step in figuring out why your Mac can't connect to iCloud is to try and sign into your Apple ID online. If successful, you should also change your password to force iCloud on Mac to accept your new credentials:
1. Visit appleid.apple.com
2. Enter your Apple ID and password
3. If you have two-factor authentication enabled, click Allow on your other trusted device and enter the 6-digit verification code
4. Click “Change password…”
5. Type in your current password and your new password twice. Check the box to “Sign out from devices and websites which are using my Apple ID.” Click “Change password…” once again.
Now you know that your internet connection works, Apple's servers are available, and your Apple ID username and password are correct. You've also restarted your Mac and signed out of every iCloud for Mac service across your devices. Things are looking pretty good. You should try to log in to Apple ID once again and, hopefully, it lets you through. But before you do, let's make sure you never forget or misplace your login credentials ever again by using a robust password manager.
Secrets is one of the most secure digital password organizers around. It not only keeps and encrypts your current logins but also helps you come up with new ones, up to the specifications of any complexity. With Secrets, you just need to remember one password to get in to ensure your information is 100% safe across the web. Passwords, usernames, credit cards, private notes — Secrets stores it all.
Set Date & Time to automatic
If you still can't sign into iCloud on Mac, it means that the iCloud unknown error is more persistent than a simple glitch. A few years ago, for example, a bug connected to the discrepancy in time zones was widely reported. Luckily, there’s a simple solution:
1. Open System Preferences ➙ Date & Time
2. Check the box to “Set date and time automatically”
Now the timing of your Mac should work well with the timing on Apple’s servers, and the issue could be resolved.
Remove Library settings and preferences
In the very unlikely case that an unknown error occurred iCloud message still persists, even with all the above changes implemented, you could try to manually remove settings and preferences files themselves from your Mac’s Library folder. This process is safe to do, but you’ll lose some of your data, such as your Keychain logins, so make sure to migrate them into a secure password manager like Secrets before you begin. In addition, create a complete backup of your Mac.
Get Backup Pro is a reliable and comprehensive Mac backup app that’s uniquely designed to prevent any data loss with four kinds of backups: copy, clone, incremental, and versioned. You can back your files up to an external hard drive and encrypt them, synchronize folders between Macs, and even put your backups on schedule, so you don’t even have to worry about the safety of your data.
Once you safely copied the data from your Mac to an external source, you can start removing various Library settings. While the deleted preferences will be gone, your system will create new ones as needed, so ideally none of the functionality in your Mac will be affected.
To reset iCloud accounts:
1. In Finder, select Go ➙ Go to Folder…
2. Type in ~/Library/Application Support/iCloud/Accounts and click Go
3. Relocate the contents of the folder to somewhere else just in case and restart your Mac
To reset System Preferences:
1. Again, select Go ➙ Go to Folder…
2. Enter ~/Library/Preferences
3. Move the files to another folder and restart your Mac
To reset your Keychain:
1. Open Go to Folder…
2. Navigate to ~/Library/Keychains
3. Put the files somewhere else and restart your Mac once more
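If you prefer Terminal, the same three resets can be scripted. This is a hedged sketch that parks the files in a holding folder (~/Desktop/icloud-reset, an arbitrary choice) rather than deleting them, so everything can be restored; restart after each step as above:
# Park the files instead of deleting them, so they can be restored later.
mkdir -p ~/Desktop/icloud-reset/{accounts,preferences,keychains}
mv ~/Library/Application\ Support/iCloud/Accounts/* ~/Desktop/icloud-reset/accounts/
mv ~/Library/Preferences/* ~/Desktop/icloud-reset/preferences/
mv ~/Library/Keychains/* ~/Desktop/icloud-reset/keychains/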
You should now see all your settings and preferences reverse back to the defaults. That should help you solve any errors between iCloud accounts, System Preferences, and Keychain.
Reinstall the macOS
Finally, if none of the above worked successfully for you, you always have the option to reinstall your macOS altogether. Again, make sure you have the most current backup synced with Get Backup Pro before proceeding. Although the new macOS should leave your files intact, you just never know what might happen.
To reinstall your macOS:
1. Shut down your Mac
2. Power it back on and hold Option + ⌘ + R until you see the loading circle
3. Enter your password
4. In the menu, select Reinstall macOS and then Continue
5. Follow the reinstallation process
So when you encounter the “this Mac can't connect to iCloud because of a problem with email” error, you have lots of ways to try and resolve it, from changing your Apple ID password to setting the Date & Time to automatic to deleting preferences from the Library folder. Most importantly, you should proceed with caution and move all your login information to the Secrets password manager and have the latest backup saved with Get Backup Pro.
Best of all, Secrets and Get Backup Pro are available to you absolutely free for seven days with a trial from Setapp, a platform of more than 180 apps for any task at hand, from editing videos (Capto) to quick file management (Filepane). Try any that you like today at no cost and see the kind of productivity gains you’ve been missing!
Pandas Series
A pandas Series is a one-dimensional labeled array capable of holding data of any type (integers, floats, strings, Python objects), and a single Series can even mix datatypes. The axis labels are collectively called the index; labels need not be unique, but they must be a hashable type. A Series can be built from a list, a NumPy array, a dict (whose keys become the index), or a scalar value (which requires an explicit index and is repeated to match its length). If no index is passed, a default RangeIndex of 0, 1, ..., n-1 is generated, and float64 is the default dtype of an empty Series. Each column of a DataFrame is itself a Series, so converting a DataFrame column to a Series is direct, and multiple Series can be combined back into a DataFrame. Pandas also ships rich datetime functionality on Series (shift(), asfreq(), resample(), tz_localize(), at_time(), between_time()), which is why it is such a suitable tool for time-series analysis, where time is the most important variable. For ordering, sort_values() sorts a Series by its values and sort_index() by its labels, each returning a new Series.
Grouping Java Objects
Grouping objects is possibly the most common task a software developer will perform on a day-to-day basis. Java's built-in answer is the array: adding a pair of brackets ([ ]) to a declaration is enough to define one, and an array of objects stores references to the objects rather than the values themselves. Arrays, however, do not resize once created. The Java Collections Framework generalizes this with two interface hierarchies: a Collection hierarchy for grouping single values and a Map hierarchy for grouping key-value pairs. Because every class in a hierarchy shares a common relation, different grouping structures interact easily, and a common set of algorithms (size(), containsAll(), iterator(), and so on) works across all of them, simplifying the learning curve. Typical idioms follow directly: placing elements into a Set silently drops duplicates; a HashSet keeps no order, so its contents are converted to an ArrayList (or placed in a TreeSet) when sorted output is needed; and a Map is inspected by converting its entries to a Set of Map.Entry key-value pairs and walking them with an Iterator, the preferred way to visit each element of any Collection.
The JavaFX Group node shows grouping at the scene-graph level: a Group contains an ObservableList of children that are rendered in order, takes on the collective bounds of those children, and is not directly resizable. Any transform, effect, or state applied to the Group is applied to all of its children, and unless autoSizeChildren is set to false, the Group resizes each resizable child to its preferred size during the layout pass.
The Blowfish Encryption Algorithm
Bruce Schneier
Bruce is the author of Applied Cryptography: Protocols, Algorithms, and Source Code in C (John Wiley, 1994). This article is based on a paper he presented at the Cambridge Algorithms Conference. Bruce can be contacted at schneier@chinet.com.
Blowfish is a block-encryption algorithm designed to be fast (it encrypts data on large 32-bit microprocessors at a rate of 26 clock cycles per byte), compact (it can run in less than 5K of memory), simple (the only operations it uses are addition, XOR, and table lookup on 32-bit operands), secure (Blowfish's key length is variable and can be as long as 448 bits), and robust (unlike DES, Blowfish's security is not diminished by simple programming errors).
The Blowfish block-cipher algorithm, which encrypts data one 64-bit block at a time, is divided into a key-expansion part and a data-encryption part. Key expansion converts a key of at most 448 bits into several subkey arrays totaling 4168 bytes. Data encryption consists of a simple function iterated 16 times. Each iteration, called a "round," consists of a key-dependent permutation and a key- and data-dependent substitution.
Subkeys
Blowfish uses a large number of subkeys that must be precomputed before any data encryption or decryption. The P-array consists of 18 32-bit subkeys, P1, P2...P18, and there are four 32-bit S-boxes with 256 entries each: S1,0, S1,1... S1,255; S2,0, S2,1...S2,255; S3,0, S3,1...S3,255; S4,0, S4,1...S4,255.
Encryption
Blowfish is a Feistel network consisting of 16 rounds; see Figure 1. The input is a 64-bit data element, x. Divide x into two 32-bit halves: xL and xR. Then, for i=1 to 16:
xL=xL XOR Pi
xR=F(xL) XOR xR
Swap xL and xR
After the sixteenth iteration, swap xL and xR to undo the last swap. Then xR=xR XOR P17 and xL=xL XOR P18. Finally, recombine xL and xR to get the ciphertext.
Function F looks like this: Divide xL into four eight-bit quarters: a, b, c, and d. Then F(xL) = ((S1,a + S2,b mod 2^32) XOR S3,c) + S4,d mod 2^32; see Figure 2.
Decryption is exactly the same as encryption, except that P1, P2, ..., P18 are used in the reverse order.
Implementations of Blowfish that require the fastest speeds should unroll the loop and ensure that all subkeys are stored in cache. For the purposes of illustration, I've implemented Blowfish in C; Listing One (page 98) is blowfish.h, and Listing Two (page 98) is blowfish.c. A required data file is available electronically; see "Availability," page 3.
Generating the Subkeys
The subkeys are calculated using the Blowfish algorithm, as follows:
1. Initialize first the P-array and then the four S-boxes, in order, with a fixed random string. This string consists of the hexadecimal digits of [pi].
2. XOR P1 with the first 32 bits of the key, XOR P2 with the second 32 bits of the key, and so on for all bits of the key (up to P18). Cycle through the key bits repeatedly until the entire P-array has been XORed.
3. Encrypt the all-zero string with the Blowfish algorithm, using the subkeys described in steps #1 and #2.
4. Replace P1 and P2 with the output of step #3.
5. Encrypt the all-zero string using the Blowfish algorithm with the modified subkeys.
6. Replace P3 and P4 with the output of step #5.
7. Continue the process, replacing all elements of the P-array and then all four S-boxes in order, with the output of the continuously changing Blowfish algorithm.
In total, 521 iterations are required to generate all required subkeys: nine encryptions fill the 18-entry P-array in pairs, and 512 more fill the 1024 S-box entries, again in pairs. Applications can store the subkeys rather than re-executing this derivation process.
Design Decisions
The underlying philosophy behind Blowfish is that simplicity of design yields an algorithm that is easier both to understand and to implement. Hopefully, the use of a streamlined Feistel network (the same structure used in DES, IDEA, and many other algorithms), a simple S-box substitution, and a simple P-box substitution, will minimize design flaws.
For details about the design decisions affecting the security of Blowfish, see "Requirements for a New Encryption Algorithm" (by B. Schneier and N. Ferguson) and "Description of a New Variable-Length Key, 64-Bit Block Cipher (Blowfish)" (by B. Schneier), both to be included in Fast Software Encryption, to be published by Springer-Verlag later this year as part of their Lecture Notes on Computer Science series. The algorithm is designed to be very fast on 32-bit microprocessors. Operations are all based on a 32-bit word and are one-instruction XORs, ADDs, and MOVs. There are no branches (assuming you unravel the main loop). The subkey arrays and the instructions can fit in the on-chip caches of both the Pentium and the PowerPC. Furthermore, the algorithm is designed to be resistant to poor implementation and programmer errors.
I'm considering several simplifications to the algorithm, including fewer and smaller S-boxes, fewer rounds, and on-the-fly subkey calculation.
Conclusions
At this early stage, I don't recommend implementing Blowfish in security systems. More analysis is needed. I conjecture that the most efficient way to break Blowfish is through exhaustive search of the keyspace. I encourage all cryptanalytic attacks, modifications, and improvements to the algorithm.
However, remember one of the basic rules of cryptography: The inventor of an algorithm is the worst person to judge its security. I am publishing the details of Blowfish so that others may have a chance to analyze it.
Blowfish is unpatented and will remain so in all countries. The algorithm is hereby placed in the public domain and can be freely used by anyone.
Figure 1: Blowfish is a Feistel network consisting of 16 rounds.
Figure 2: Blowfish function F.
DDJ's Blowfish Cryptanalysis Contest
The only way to inspire confidence in a cryptographic algorithm is to let people analyze it. It is in this spirit that DDJ is pleased to announce the Blowfish Cryptanalysis Contest, our third reader contest in recent years.
We'd like you to cryptanalyze Bruce Schneier's Blowfish algorithm presented in this issue. Give it your best shot. Break it, beat on it, cryptanalyze it. The best attack received by April 1, 1995 wins the contest.
The contest rules are simple. It's open to any individual or organization. Governments are encouraged to enter. Even the NSA can compete and win the prize (their budget isn't what it used to be; they can probably use the money). But since we will publish the results, classified entries will not be permitted. To officially enter the contest, your entry must be accompanied by a completed and signed entry form. These are available electronically (see "Availability," page 3) or we'll be glad to mail or fax you a hardcopy.
We're not going to publish messages encrypted in Blowfish and some random key, because we think that would be too difficult.
Partial results--those attacks that don't break the algorithm but instead prove that it isn't as strong as we thought it was--are just as useful and can be entered.
Your entry does not have to consist of code. Instead, your entry can be a paper describing the attack. The attack does not have to completely break the Blowfish algorithm, it can simply be more efficient than a brute-force attack. The attack can be against either the complete algorithm or a simplified version of the algorithm (fewer rounds, smaller block size, simpler S-boxes, and the like).
We'll select a winner based on the following criteria:
Bruce Schneier, frequent DDJ contributor, author of Applied Cryptography, and inventor of the Blowfish algorithm will referee the contest.
The contest results will be published in the September 1995 issue of Dr. Dobb's Journal, in which we'll discuss and summarize the winning programs, the weaknesses of the Blowfish algorithm, and any modifications of the algorithm.
We'll be providing a number of awards for the winners. The grand-prize winner will receive a $750 honorarium. Honorariums of $250 to the second-place winner and $100 to the third-place winner will also be awarded.
--editors
[LISTING ONE]
/* Blowfish.h */
#define MAXKEYBYTES 56 /* 448 bits */
short opensubkeyfile(void);
unsigned long F(unsigned long x);
void Blowfish_encipher(unsigned long *xl, unsigned long *xr);
void Blowfish_decipher(unsigned long *xl, unsigned long *xr);
short InitializeBlowfish(char key[], short keybytes);
[LISTING TWO]
/* Blowfish.c */
#include <dos.h>
#include <graphics.h>
#include <io.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <alloc.h>
#include <ctype.h>
#include <dir.h>
#include <bios.h>
#include <Types.h>
#define N 16
#define noErr 0
#define DATAERROR -1
#define KEYBYTES 8
#define subkeyfilename "Blowfish.dat"
static unsigned long P[18];
static unsigned long S[4][256];
static FILE* SubkeyFile;
short opensubkeyfile(void) /* read only */
{
short error;
error = noErr;
if((SubkeyFile = fopen(subkeyfilename,"rb")) == NULL) {
error = DATAERROR;
}
return error;
}
unsigned long F(unsigned long x)
{
unsigned short a;
unsigned short b;
unsigned short c;
unsigned short d;
unsigned long y;
d = x & 0x00FF;
x >>= 8;
c = x & 0x00FF;
x >>= 8;
b = x & 0x00FF;
x >>= 8;
a = x & 0x00FF;
y = ((S[0][a] + (S[1][b] % 32)) ^ S[2][c]) + (S[3][d] % 32);
/* Note: There is a good chance that the following line will execute faster */
/* y = ((S[0][a] + (S[1][b] & 0x001F)) ^ S[2][c]) + (S[3][d] & 0x001F); */
return y;
}
void Blowfish_encipher(unsigned long *xl, unsigned long *xr)
{
unsigned long Xl;
unsigned long Xr;
unsigned long temp;
short i;
Xl = *xl;
Xr = *xr;
for (i = 0; i < N; ++i) {
Xl = Xl ^ P[i];
Xr = F(Xl) ^ Xr;
temp = Xl;
Xl = Xr;
Xr = temp;
}
temp = Xl;
Xl = Xr;
Xr = temp;
Xr = Xr ^ P[N];
Xl = Xl ^ P[N + 1];
*xl = Xl;
*xr = Xr;
}
void Blowfish_decipher(unsigned long *xl, unsigned long *xr)
{
unsigned long Xl;
unsigned long Xr;
unsigned long temp;
short i;
Xl = *xl;
Xr = *xr;
for (i = N + 1; i > 1; --i) {
Xl = Xl ^ P[i];
Xr = F(Xl) ^ Xr;
/* Exchange Xl and Xr */
temp = Xl;
Xl = Xr;
Xr = temp;
}
/* Exchange Xl and Xr */
temp = Xl;
Xl = Xr;
Xr = temp;
Xr = Xr ^ P[1];
Xl = Xl ^ P[0];
*xl = Xl;
*xr = Xr;
}
short InitializeBlowfish(char key[], short keybytes)
{
short i;
short j;
short k;
short error;
short numread;
unsigned long data;
unsigned long datal;
unsigned long datar;
/* First, open the file containing the array initialization data */
error = opensubkeyfile();
if (error == noErr) {
for (i = 0; i < N + 2; ++i) { /* P holds N + 2 = 18 subkeys */
numread = fread(&data, 4, 1, SubkeyFile);
printf("%d : %d : %.4s\n", numread, i, (char *) &data);
if (numread != 1) {
return DATAERROR;
} else {
P[i] = data;
}
}
for (i = 0; i < 4; ++i) {
for (j = 0; j < 256; ++j) {
numread = fread(&data, 4, 1, SubkeyFile);
printf("[%d, %d] : %.4s\n", i, j, &data);
if (numread != 1) {
return DATAERROR;
} else {
S[i, j] = data;
}
}
}
fclose(SubkeyFile);
j = 0;
for (i = 0; i < 18; ++i) {
data = 0x00000000;
for (k = 0; k < 4; ++k) {
data = (data << 8) | key[j];
j = j + 1;
if (j >= keybytes) {
j = 0;
}
}
P[i] = P[i] ^ data;
}
datal = 0x00000000;
datar = 0x00000000;
for (i = 0; i < 18; i += 2) {
Blowfish_encipher(&datal, &datar);
P[i] = datal;
P[i + 1] = datar;
}
for (j = 0; j < 4; ++j) {
for (i = 0; i < 256; i += 2) {
Blowfish_encipher(&datal, &datar);
S[j][i] = datal;
S[j][i + 1] = datar;
}
}
} else {
printf("Unable to open subkey initialization file : %d\n", error);
}
return error;
}
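For reference, here is a minimal usage sketch that is not part of the original listing: a hypothetical driver that initializes the subkeys from Blowfish.dat, enciphers one 64-bit block, and deciphers it again. The key string and plaintext values are made up for illustration.
/* Hypothetical driver -- not from the original article */
#include <stdio.h>
int main(void)
{
unsigned long xl = 0x12345678L, xr = 0x9ABCDEF0L;
char key[] = "TESTKEY";
if (InitializeBlowfish(key, 7) != 0) { /* loads P and S from Blowfish.dat */
fprintf(stderr, "subkey init failed\n");
return 1;
}
Blowfish_encipher(&xl, &xr); /* encrypt one 64-bit block */
printf("ciphertext: %08lX %08lX\n", xl, xr);
Blowfish_decipher(&xl, &xr); /* decrypt it back */
printf("plaintext: %08lX %08lX\n", xl, xr);
return 0;
}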
Copyright © 1994, Dr. Dobb's Journal
Back to Table of Contents
meSpeak (( • ))
Voices & Languages
A short guide to the set-up of languages and voices for meSpeak.
Please mind that meSpeak is based on an Emscripten-port of eSpeak, so all of the eSpeak grammar applies also to meSpeak.
Standard Language Files
meSpeak's language-files provide eSpeak's language- and voice-files in a single package.
(Since a voice usually refers to a language and its dictionary, it seems suitable to bundle them together in a single file.)
The language-files are of the following structure (JSON):
{ "voice_id": "<filename>", "dict_id": "<filename>", "dict": "<base64-encoded octet stream>", "voice": "<base64-encoded octet stream>" }
The values of voice_id and dict_id are actually UNIX-filenames, dict_id relative to the path of eSpeak's data-directory "espeak-data/", voice_id relative to "espeak-data/voices/".
If we were to embed the files for the language "en-en", these would presumably be the voice file "espeak-data/voices/en" and the dictionary "espeak-data/en_dict".
For a standard language-file, you would add a base64-representation as the string value of dict and voice of the respective eSpeak-files.
Customizing
There is an alternate layout for meSpeak's language-files, which is especially useful for the purpose of customizing and testing:
{ "voice_id": "<filename>", "dict_id": "<filename>", "dict": "<base64-encoded octet stream>", "voice": "<text-string>", "voice_encoding": "text" }
Since eSpeak's voice-files are actually plain-text files, you may use a simple string for these, if you provide an additional property "voice_encoding": "text" at the same time.
For dictionaries, which are binary files with eSpeak, see the note at the end of the page.
Example
For an example we will configure a basic female voice for "en-us", which will be named "en-us-f".
1. Make a copy of a meSpeak-language file (json), which you want to modify (in this case "voices/en/en-us.json").
2. Rename the file (e.g.: "en-us-f.json") and open it in an editor.
3. Download the source of eSpeak and go to the "espeak-data/" directory.
4. The eSpeak-file "espeak-data/voices/en-us" looks like this: // moving towards US English name english-us language en-us 2 language en-r language en 3 gender male // and more, skipped here
5. Rename the "name" parameter to make it unique (e.g.: "name english-us-f").
6. Change any parameters as you wish, in this case change "gender male" to "gender female" for a female voice.
7. You should have arrived at something like this (first line removed, since it is just a comment): name english-us-f language en-us 2 language en-r language en 3 gender female
8. Replace any line-breaks by "\n" in order to get a valid JSON-string: "name english-us-f\nlanguage en-us 2\nlanguage en-r\nlanguage en 3\ngender female" And use this as a value for the "voice"-property of the JSON-file.
9. Add the line "voice_encoding": "text" to the JSON to indicate that the voice is plain-text.
Your voice file should now look like this: Content of file: "en-us-f.json": { "voice_id": "en-us-f", "dict_id": "en_dict", "dict": "<base64-encoded octet stream>", "voice": "name english-us-f\nlanguage en-us 2\nlanguage en-r\nlanguage en 3\ngender female", "voice_encoding": "text" }
10. Save it and load it into meSpeak.
Please note that eSpeak is not very graceful with syntax errors in a voice-definition and will just throw an error, which will — in the case of meSpeak — show up in the console-log.
For further details on voice-parameters and fine-tuning, please refer to the eSpeak-documentation: http://espeak.sourceforge.net/voices.html.
Custom Dictionaries
eSpeak's dictionaries are binary files, which must be compiled with eSpeak first.
You would have to install eSpeak and compile a file following the eSpeak documentation.
Further, you would insert a base64-encoded string of the resulting object-file's content as the value of the dict property of a meSpeak-language-file.
Finally, you would set a suiting and unique value for the property dict_id (UNIX file path).
There is no shortcut to this. Sorry.
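As a sketch only — with assumed file names — the base64 embedding step could be scripted with Node.js like this:
// Sketch: embed a compiled eSpeak dictionary into a meSpeak language file.
// File names ("en_dict", "en-us-f.json") are assumptions for illustration.
var fs = require('fs');
var json = JSON.parse(fs.readFileSync('en-us-f.json', 'utf8'));
json.dict = fs.readFileSync('en_dict').toString('base64');
json.dict_id = 'en_dict';
fs.writeFileSync('en-us-f.json', JSON.stringify(json));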
Please see also the section on the extended voice format at the main-page.
Norbert Landsteiner
Vienna, July 2013
Timestamp: Apr 1, 2008, 8:06:31 PM
Author: broder
Message:
Added configuration changes for Kerberos and passwordless SSH, and fixed some miscellaneous things.
I know that /etc/pam.d/ssh and /etc/ssh/sshd_config should probably be done with the other debathena config magic, but I just don't understand it, and also, my Perl-fu isn't good enough
File: 1 edited
How To Install Hadoop in Stand-Alone Mode on Ubuntu 20.04
Published on February 15, 2022
Introduction
Hadoop is a Java-based programming framework that supports the processing and storage of extremely large datasets on a cluster of inexpensive machines. It was the first major open source project in the big data playing field and is sponsored by the Apache Software Foundation.
Hadoop is comprised of four main layers:
• Hadoop Common is the collection of utilities and libraries that support other Hadoop modules.
• HDFS, which stands for Hadoop Distributed File System, is responsible for persisting data to disk.
• YARN, short for Yet Another Resource Negotiator, is the “operating system” for HDFS.
• MapReduce is the original processing model for Hadoop clusters. It distributes work within the cluster or map, then organizes and reduces the results from the nodes into a response to a query. Many other processing models are available for the 3.x version of Hadoop.
Hadoop clusters are relatively complex to set up, so the project includes a stand-alone mode which is suitable for learning about Hadoop, performing simple operations, and debugging.
In this tutorial, you’ll install Hadoop in stand-alone mode and run one of the example MapReduce programs it includes to verify the installation.
Prerequisites
To follow this tutorial, you will need:
• An Ubuntu 20.04 server with a non-root user with sudo privileges.
You might also like to take a look at An Introduction to Big Data Concepts and Terminology or An Introduction to Hadoop
Once you’ve completed the prerequisites, log in as your sudo user to begin.
Step 1 — Installing Java
To get started, you’ll update your package list and install OpenJDK, the default Java Development Kit on Ubuntu 20.04:
1. sudo apt update
2. sudo apt install default-jdk
Once the installation is complete, let’s check the version.
1. java -version
Output
openjdk version "11.0.13" 2021-10-19 OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.20.04) OpenJDK 64-Bit Server VM (build 11.0.13+8-Ubuntu-0ubuntu1.20.04, mixed mode, sharing)
This output verifies that OpenJDK has been successfully installed.
Step 2 — Installing Hadoop
With Java in place, you’ll visit the Apache Hadoop Releases page to find the most recent stable release.
Navigate to binary for the release you’d like to install. In this guide you’ll install Hadoop 3.3.1, but you can substitute the version numbers in this guide with one of your choice.
Screenshot of the Hadoop releases page highlighting the link to the latest stable binary
On the next page, right-click and copy the link to the release binary.
Screenshot of the Hadoop mirror page
On the server, you’ll use wget to fetch it:
1. wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
Note: The Apache website will direct you to the best mirror dynamically, so your URL may not match the URL above.
In order to make sure that the file you downloaded hasn’t been altered, you’ll do a quick check using SHA-512, or the Secure Hash Algorithm 512. Return to the releases page, then right-click and copy the link to the checksum file for the release binary you downloaded:
Screenshot highlighting the .sha512 file
Again, you’ll use wget on your server to download the file:
1. wget https://downloads.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz.sha512
Then run the verification:
1. shasum -a 512 hadoop-3.3.1.tar.gz
Output
2fd0bf74852c797dc864f373ec82ffaa1e98706b309b30d1effa91ac399b477e1accc1ee74d4ccbb1db7da1c5c541b72e4a834f131a99f2814b030fbd043df66 hadoop-3.3.1.tar.gz
Compare this value with the SHA-512 value in the .sha512 file:
1. cat hadoop-3.3.1.tar.gz.sha512
~/hadoop-3.3.1.tar.gz.sha512
...
SHA512 (hadoop-3.3.1.tar.gz) = 2fd0bf74852c797dc864f373ec82ffaa1e98706b309b30d1effa91ac399b477e1accc1ee74d4ccbb1db7da1c5c541b72e4a834f131a99f2814b030fbd043df66
...
The output of the command you ran against the file you downloaded from the mirror should match the value in the file you downloaded from apache.org.
Now that you’ve verified that the file wasn’t corrupted or changed, you can extract it:
1. tar -xzvf hadoop-3.3.1.tar.gz
Use the tar command with the -x flag to extract, -z to uncompress, -v for verbose output, and -f to specify that you’re extracting from a file.
Finally, you’ll move the extracted files into /usr/local, the appropriate place for locally installed software:
1. sudo mv hadoop-3.3.1 /usr/local/hadoop
With the software in place, you’re ready to configure its environment.
Step 3 — Configuring Hadoop’s Java Home
Hadoop requires that you set the path to Java, either as an environment variable or in the Hadoop configuration file.
The path to Java, /usr/bin/java is a symlink to /etc/alternatives/java, which is in turn a symlink to default Java binary. You will use readlink with the -f flag to follow every symlink in every part of the path, recursively. Then, you’ll use sed to trim bin/java from the output to give us the correct value for JAVA_HOME.
To find the default Java path, run:
1. readlink -f /usr/bin/java | sed "s:bin/java::"
Output
/usr/lib/jvm/java-11-openjdk-amd64/
You can copy this output to set Hadoop’s Java home to this specific version, which ensures that if the default Java changes, this value will not. Alternatively, you can use the readlink command dynamically in the file so that Hadoop will automatically use whatever Java version is set as the system default.
To begin, open hadoop-env.sh:
1. sudo nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Then, modify the file by choosing one of the following options:
Option 1: Set a Static Value
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
. . .
#export JAVA_HOME=
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
. . .
Option 2: Use Readlink to Set the Value Dynamically
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
. . .
#export JAVA_HOME=
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
. . .
If you have trouble finding these lines, use CTRL+W to quickly search through the text. Once you’re done, exit with CTRL+X and save your file.
Note: With respect to Hadoop, the value of JAVA_HOME in hadoop-env.sh overrides any values that are set in the environment by /etc/profile or in a user’s profile.
Step 4 — Running Hadoop
Now you should be able to run Hadoop:
1. /usr/local/hadoop/bin/hadoop
Output
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS] or hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS] where CLASSNAME is a user-provided Java class OPTIONS is none or any of: --config dir Hadoop config directory --debug turn on shell script debug mode --help usage information buildpaths attempt to add class files from build tree hostnames list[,of,host,names] hosts to use in slave mode hosts filename list of hosts to use in slave mode loglevel level set the log4j level for this command workers turn on worker mode SUBCOMMAND is one of: . . .
This output means you’ve successfully configured Hadoop to run in stand-alone mode.
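Optionally, you can also add Hadoop’s bin directory to your PATH so that the hadoop command works without the full path. This step is not part of the original walkthrough, and it assumes the /usr/local/hadoop location used above:
1. echo 'export PATH=$PATH:/usr/local/hadoop/bin' >> ~/.bashrc
2. source ~/.bashrc
3. hadoop version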
You’ll ensure that Hadoop is functioning properly by running the example MapReduce program it ships with. To do so, create a directory called input in your home directory and copy Hadoop’s configuration files into it to use those files as your data.
1. mkdir ~/input
2. cp /usr/local/hadoop/etc/hadoop/*.xml ~/input
Next, you can use the following command to run the MapReduce hadoop-mapreduce-examples program, a Java archive with several options:
1. /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep ~/input ~/grep_example 'allowed[.]*'
This invokes the grep program, one of the many examples included in hadoop-mapreduce-examples, followed by the input directory, input and the output directory grep_example. The MapReduce grep program will count the matches of a literal word or regular expression. Finally, the regular expression allowed[.]* is given to find occurrences of the word allowed within or at the end of a declarative sentence. The expression is case-sensitive, so you wouldn’t find the word if it were capitalized at the beginning of a sentence.
When the task completes, it provides a summary of what has been processed and errors it has encountered, but this doesn’t contain the actual results.
Output
. . . File System Counters FILE: Number of bytes read=1200956 FILE: Number of bytes written=3656025 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 Map-Reduce Framework Map input records=2 Map output records=2 Map output bytes=33 Map output materialized bytes=43 Input split bytes=114 Combine input records=0 Combine output records=0 Reduce input groups=2 Reduce shuffle bytes=43 Reduce input records=2 Reduce output records=2 Spilled Records=4 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=41 Total committed heap usage (bytes)=403800064 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=147 File Output Format Counters Bytes Written=34
Note: If the output directory already exists, the program will fail, and rather than seeing the summary, the output will look something like:
Output
. . . at java.base/java.lang.reflect.Method.invoke(Method.java:564) at org.apache.hadoop.util.RunJar.run(RunJar.java:244) at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
Results are stored in the output directory and can be checked by running cat on the output directory:
1. cat ~/grep_example/*
Output
22 allowed. 1 allowed
The MapReduce task found 22 occurrences of the word allowed followed by a period and one occurrence where it was not. Running the example program has verified that your stand-alone installation is working properly and that non-privileged users on the system can run Hadoop for exploration or debugging.
Conclusion
In this tutorial, you’ve installed Hadoop in stand-alone mode and verified it by running an example program it provided. To learn how to write your own MapReduce programs, you might want to visit Apache Hadoop’s MapReduce tutorial which walks through the code behind the example. When you’re ready to set up a cluster, see the Apache Foundation Hadoop Cluster Setup guide.
If you’re interested in deploying a full cluster instead of just a stand-alone, see How To Spin Up a Hadoop Cluster with DigitalOcean Droplets.
About the author: Tony Tran

1 Comment:
When i execute $ hadoop version
hadoop: command not found
Vincent Tsen
Posted on • Originally published at vtsen.hashnode.dev
Convert Flow to SharedFlow and StateFlow
Explore different ways of converting Flow to SharedFlow and StateFlow using SharedFlow.emit(), StateFlow.value, Flow.ShareIn() and Flow.StateIn()
This is part of the asynchronous flow series:
Flow is a cold stream. It emits value only when someone collects or subscribes to it. So it does NOT hold any data or state.
SharedFlow is a hot stream. It can emit value even if no one collects or subscribes to it. It does NOT hold any data either.
StateFlow is also a hot stream. It does NOT emit value, but it holds the value/data.
Flow Type Cold or Hot Stream Data Holder
Flow Cold No
SharedFlow Hot (by default) No
StateFlow Hot (by default) Yes
The reason why SharedFlow and StateFlow are hot streams by default, is they can also be cold streams depending on how you create them. See shareIn and stateIn sections below.
What is a data holder?
A data holder (also called a state holder) retains and stores the last value of the stream. The value is also observable, which allows subscribers to subscribe to it.
There are 3 types of data holders in Android development
• LiveData
• StateFlow
• State (Jetpack Compose)
3 of them are pretty similar, but they have differences. See the table below.
Data Holder Type Android or Kotlin Library? Lifecycle Aware? Required Initial Value?
LiveData Android Yes No
StateFlow Kotlin No Yes
State (Compose) Android No Yes
StateFlow is Platform Independent
LiveData is Android-specific and eventually will be replaced by StateFlow. Compose state is similar to StateFlow in my opinion. However, compose State is very specific to Android Jetpack Compose. So it is platform specific, whereas StateFlow is more generic and platform independent.
StateFlow could be life-cycle aware
LiveData itself is life-cycle aware and StateFlow is NOT. StateFlow could be life-cycle aware, depending on how you collect it. Compose State itself is NOT life-cycle aware. Since it is used by the composable functions, when a composable function leaves composition, it automatically unsubscribes from compose State.
StateFlow requires Initial Value
Creating LiveData does NOT require an initial value
// live data - data holder
val liveData = MutableLiveData<Int>()
but, StateFlow and compose State require an initial value.
// state flow - data holder
val stateFlow = MutableStateFlow<Int?>(null)
// compose state - data holder
val composeState: MutableState<Int?> = mutableStateOf(null)
Convert Flow to SharedFlow
The following example is based on this flow in your View Model class
val flow: Flow<Int> = flow {
repeat(10000) { value ->
delay(1000)
emit(value)
}
}
and this sharedFlow variable defined.
private var sharedFlow = MutableSharedFlow<Int>()
1. Flow.collect() and SharedFlow.emit()
This converts the Flow to SharedFlow by collecting with Flow<T>.collect and manually calling SharedFlow<T>.emit().
viewModelScope.launch {
flow.collect { value ->
sharedFlow.emit(value)
}
}
This is a hot stream. So it emits the value regardless anyone collects it.
2. Flow.shareIn()
You can also use Flow<T>.shareIn() to achieve the same result.
sharedFlow = flow.shareIn(
scope = viewModelScope,
started = SharingStarted.Eagerly
)
However, if you change SharingStarted.Eagerly to SharingStarted.WhileSubscribed(), the SharedFlow becomes a cold stream.
Convert Flow to StateFlow
So we have this stateFlow variable defined in the view model.
private val stateFlow = MutableStateFlow<Int?>(null)
1. Flow.collect() and StateFlow.value
This converts the Flow to StateFlow.
viewModelScope.launch {
flow.collect { value ->
stateFlow.value = value
}
}
Similar to SharedFlow, this is a hot stream. The difference is StateFlow is a data holder and SharedFlow is not.
2. Flow.stateIn()
You can also use Flow<T>.stateIn() to achieve the same result.
stateFlow = flow.stateIn(
scope = viewModelScope,
started = SharingStarted.Eagerly,
initialValue = null)
Similar to Flow<T>.shareIn() above, if you change SharingStarted.Eagerly to SharingStarted.WhileSubscribed(), the StateFlow becomes a cold stream.
Important note: If you call Flow<T>.shareIn() and Flow<T>.stateIn() multiple times, it creates multiple flows which emit the value in the background. This eventually causes unnecessary resource leaks that you want to prevent.
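To avoid that, a common pattern — sketched here, not from the original article — is to call stateIn() exactly once and expose the result as a property of the view model. The flow property is the one defined earlier, and the 5000 ms timeout is an arbitrary choice:
class MyViewModel : ViewModel() {
    // Created once when the view model is constructed;
    // all collectors share this single upstream flow.
    val stateFlow: StateFlow<Int?> = flow.stateIn(
        scope = viewModelScope,
        started = SharingStarted.WhileSubscribed(5000),
        initialValue = null
    )
}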
Collect from SharedFlow and StateFlow
Collecting from SharedFlow and StateFlow is the same as collecting from the Flow. Refer to the following article on different ways of collecting flow.
1. Collect using RepeatOnLifecycle()
The way recommended by Google to collect a Flow is using Lifecycle.repeatOnLifecycle(). So, we're going to use it as an example.
This example below converts the stateFlow to compose state, which is the data holder for composable function.
@Composable
fun SharedStateFlowScreen() {
val viewModel: StateSharedFlowViewModel = viewModel()
val lifeCycle = LocalLifecycleOwner.current.lifecycle
/* compose state - data holder */
var composeStateValue by remember { mutableStateOf<Int?>(null) }
/* collect state flow and convert the data to compose state */
LaunchedEffect(true) {
lifeCycle.repeatOnLifecycle(state = Lifecycle.State.STARTED) {
viewModel.stateFlow.collect { composeStateValue = it }
}
}
}
2. Collect using collectAsStateWithLifecycle()
If you use Android lifecycle Version 2.6.0-alpha01 or later, you can reduce the code significantly using Flow<T>.collectAsStateWithLifecycle() API
First, you need to have this dependency in your build.gradle file.
dependencies {
implementation 'androidx.lifecycle:lifecycle-runtime-compose:2.6.0-alpha02'
}
Then, you can reduce the code to the following. Please note that you need to specify the @OptIn(ExperimentalLifecycleComposeApi::class)
@OptIn(ExperimentalLifecycleComposeApi::class)
@Composable
fun SharedStateFlowScreen() {
val viewModel: StateSharedFlowViewModel = viewModel()
/* compose state - data holder */
val composeStateValue by
viewModel.stateFlow.collectAsStateWithLifecycle()
}
Best Practices?
Honestly, what I find difficult in Android development is there are just way too many options to accomplish the same thing. So which one we should use? Now, we have LiveData, Flow, Channel, SharedFlow, StateFlow and compose State. In what scenario, we should use which one?
So I document the best practices based on various sources and my interpretations. I do not know whether they make sense. Things like this are likely very subjective too.
1. Do NOT use LiveData especially if you're working on a new project. LiveData is legacy and eventually will be replaced by StateFlow.
2. Do NOT expose Flow directly in View Model, convert it to StateFlow instead. This can avoid unnecessary workload on the main UI thread. Flow is a cold stream, it emits data (or restarts the data emission) every time you collect it.
3. Expose StateFlow in your view model instead of compose State. Since StateFlow is platform independent, it makes your view model platform independent which allows you easy migration to KMM (which allows you to target both IOS and Android) for example. StateFlow is also more powerful (e.g. it allows you to combine multiple flows into one etc.)
4. Collect StateFlow in your UI elements (either in activity or composable function) and convert the data to compose State. Compose State should be created and used only within the composable functions.
Whether ViewModel should hold StateFlow or compose State is questionable. I have been using compose State but it seems like StateFlow might be a better option here based on more complex use cases such as combining flow?
On the other hand, if I convert Flow to compose State in ViewModel directly, I don't need to convert it again in the composable function. Why do I need 2 state/data holders and collect twice?
What about one-time event?
I am not sure of the use cases of SharedFlow and Channel. They appear to be used for one-time events. If you have one subscriber, you use Channel. If you have multiple subscribers, you use SharedFlow.
However, this article here by the Google team kind of implies that using SharedFlow or Channel as a one-time event is not recommended. It is mainly because they're hot streams, which run the risk of missing the events when the app is in the background or during configuration changes.
It seems to me we should probably just use StateFlow for everything. Let's forget the rest! Maybe this is easier this way...
Source Code
GitHub Repository: Demo_AsyncFlow (see the SharedStateFlowActivity)
Originally published at https://vtsen.hashnode.dev.
3 Replies - 963 Views - Last Post: 01 March 2011 - 12:44 PM
#1 tpgames
want shopping cart to send items bought to inventory
Posted 28 February 2011 - 01:35 PM
This is a hobby RPG game in JS. I am not a student.
What I want to do is have the items that the player is going to "buy" automatically go to Inventory and have the "total" of the purchase be subtracted from their fake money. I don't want to use any server side scripting here. This is suppose to be a simple JS RPG game using newbie programming. Thanks!
This is part of what I have so far... I'm confused as to what to do next. Normally, this is where the CGI takes over, but I'm not using that, and wouldn't know how to program that anyways. Besides, I don't want any form that allows entering of private data. ;) G stands for Group (or Category). P stands for arbitrary PLU number that I might use as a convienance to distinguish between one almost identical product and another.
var giveControl = false;
var browseControl = false;
var curGLoc = -1;
var curPLoc = -1;
var infoStr = '';
var shoppingBag;
function Bag() {
this.bagTotal = 0;
this.things = new Array();
}
// HUGE SNIPPING OUT OF CODE!
function changeBag(formObj, showAgain) {
var tempBagArray = new Array ();
for (var i = 0; i < shoppingBag.things.length; i++) {
if (!formObj.elements[ (i * 3) + 2].checked) {
tempBagArray[ tempBagArray.length] = shoppingBag.things[ i];
tempBagArray[ tempBagArray.length - 1].itemQty =
formObj.elements[ i * 3].selectedIndex + 1;
}
}
shoppingBag.things = tempBagArray;
if(shoppingBag.things.length == 0) {
alert("You have emptied you bag. If you want to buy something, then please shop.");
parent.frames[ 1].showStore();
}
else { showBag(); }
}
function checkOut(formObj) {
giveControl = false;
if(!confirm("Do you have the right quantity of what you want to buy?" +
" Remember that you have to choose Change Bag to " +
"remove products or change quantities. If so, choose OK to check " +
"out.")) {
return;
}
if(shoppingBag.things.length == 0) {
showStore();
return;
Replies To: want shopping cart to send items bought to inventory
#2 tpgames
Re: want shopping cart to send items bought to inventory
Posted 28 February 2011 - 03:16 PM
Can't edit my post that I can see... so
what I've remembered is that passing a value from one variable to another might help. So, now the question, how do I pass the data from shoppingBag to inventoryBag without losing the previous data contained within inventoryBag?
My brain is kicking out again. :P How do I convert "inventory cash = $$$" to something JS will subtract the formObj/ bagTotal from? I understand to a point how this code works, but I'm not fully connecting with it.
function runningTab(formObj) {
var subTotal = 0;
for (var i = 0; i < shoppingBag.things.length; i++) {
subTotal += parseFloat(formObj.elements[ (i * 3) + 1].value);
}
formObj.subtotal.value = numberFormat(subTotal);
formObj.total.value = numberFormat(subTotal);
shoppingBag.subTotal = formObj.subtotal.value;
shoppingBag.bagTotal = formObj.total.value;
}
function numberFormat(amount) {
var rawNumStr = round(amount) + '';
rawNumStr = (rawNumStr.charAt(0) == '.' ? '0' + rawNumStr : rawNumStr);
if (rawNumStr.charAt(rawNumStr.length - 3) == '.') {
return rawNumStr;
}
else if (rawNumStr.charAt(rawNumStr.length - 2) == '.') {
return rawNumStr + '0';
}
}
else { return rawNumStr + '.00'; }
}
function round(number,decPlace) {
decPlace = (!decPlace ? 2 : decPlace);
return Math.round(number * Math.pow(10,decPlace)) /
Math.pow(10,decPlace);
}
function changeBag(formObj, showAgain) {
var tempBagArray = new Array ();
for (var i = 0; i < shoppingBag.things.length; i++) {
if (!formObj.elements[ (i * 3) + 2].checked) {
tempBagArray[ tempBagArray.length] = shoppingBag.things[ i];
tempBagArray[ tempBagArray.length - 1].itemQty =
formObj.elements[ i * 3].selectedIndex + 1;
}
}
shoppingBag.things = tempBagArray;
if(shoppingBag.things.length == 0) {
alert("You have emptied you bag. If you want to buy something, then please shop.");
parent.frames[ 1].showStore();
}
else { showBag(); }
}
function checkOut(formObj) {
giveControl = false;
if(!confirm("Do you have the right quantity of what you want to buy?" +
" Remember that you have to choose Change Bag to " +
"remove products or change quantities. If so, choose OK to check " +
"out.")) {
return;
}
if(shoppingBag.things.length == 0) {
showStore();
return;
}
var intro = '<h2> Shopping Bag Check Out </h2><form method=post>';
var itemInfo = ' ';
for (var i = 0; i < shoppingBag.things.length; i++) {
itemInfo += '<Input type=hidden name="prod' + i +
' " Value=" ' + shoppingBag.things[ i].plu + '-' +
shoppingBag.things[ i].itemQty + ' ">';
}
var totalInfo = '<input type=hidden name="subtotal" value=" ' +
shoppingBag.subTotal + ' ">' +
'<input type = hidden name="bagtotal" value=" ' +
shoppingBag.bagTotal + ' ">';
var footer = '</form></body></html>';
infoStr = header + intro + itemInfo +
totalInfo + footer;
parent.frames[ 0].location.replace('javascript:parent.frames[ 1].infoStr');
}
// Question ???
// Inventory $ on hand - bagTotal
// shoppingBag => Inventory without wiping out previous inventory.
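// One possible merge, sketched only -- it assumes inventoryBag is a Bag-like
// object with a things array and a cash property; the function name is made up:
function moveBagToInventory(shoppingBag, inventoryBag) {
// append everything bought to the inventory without wiping it
inventoryBag.things = inventoryBag.things.concat(shoppingBag.things);
// subtract the purchase total from the fake money
inventoryBag.cash -= parseFloat(shoppingBag.bagTotal);
// empty the shopping bag
shoppingBag.things = new Array();
}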
#3 tpgames
Re: want shopping cart to send items bought to inventory
Posted 01 March 2011 - 12:38 PM
I had this thought, why don't I just simply pass the data in shoppingBag to a cookie, and then have inventoryBag grab that data and concatenate it to the inventoryBag data. I'm using an array code I found because its better for my overall purposes.
Here's the issue, shoppingBag is a variable. So how do I put it in an array? "shoppingBag" won't work, as it will give me the name instead of its data I thought. I'd have inventoryBag and quite possibly other data as well like cards collected, grades, etc. Also, using "shoppingBag" + "inventoryBag" will concatenate it, so that won't work.
Thanks!
function setCookieArray(name){
this.length = setCookieArray.arguments.length - 1;
for (var i = 0; i < this.length; i++) {
data = setCookieArray.arguments[i + 1]
setCookie (name + i, data, expdate);
}
}
var expdate = new Date();
expdate.setTime (expdate.getTime() + (24 * 60 * 60 * 1000 * 365));
//?? var testArray = new setCookieArray( "", " ");
function getCookieArray(name){
var i = 0;
while (getCookie(name + i) != null) {
this[i + 1] = getCookie(name + i);
i++; this.length = i;
}
}
#4 tpgames
Re: want shopping cart to send items bought to inventory
Posted 01 March 2011 - 12:44 PM
Okay, another issue. Using the cookie Array will not save the data when the page is left. Not useful at all. I'm trying hard to not use PHP as I'm trying to avoid server side programming.
I found this Internal DIC shoppingcart Question. Apparently, "you should use PHP" ended the discussion. I'm only writing a RPG game. :P
#!/usr/bin/env python
#
# Copyright (c) 2012, Yahoo! Inc. All rights reserved.
# Copyrights licensed under the New BSD License. See the accompanying LICENSE file for terms.
#
import sys
import gearman
import json
from logger import Logger
#
# Global variables
#
gearman_servers = ['localhost:4730']
l = Logger('JOBMANAGER').get()
#
# Utility function to convert the pct value
# into number of hosts
#
def get_num_hosts(val, total):
try:
if val[-1] == '%':
num_hosts = (int(val[:-1]) * total) / 100
else:
num_hosts = int(val)
# At least one host should succeed
if num_hosts <= 0:
num_hosts = 1
if num_hosts > total:
num_hosts = total
except ValueError:
num_hosts = None
return num_hosts
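# Example (sketch): with 10 hosts, get_num_hosts('50%', 10) returns 5,
# get_num_hosts('3', 10) returns 3, and get_num_hosts('0', 10) is clamped to 1.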
#
# Job class
#
class Job:
def __init__(self, job_id, timeout, retries, success_constraint,
parallelism, command, hosts):
self._job_id = job_id
self._timeout = int(timeout)
self._retries = int(retries)
self._success_constraint = success_constraint
self._parallelism = parallelism
self._command = command
self._hosts = hosts
self._rcs = {} # Individual return codes for each individual gearman job
self._output = {} # Ouputs per individual gearman job
self._success = False
self._gmjobs = []
self._gmclient = None
self._completed_gmjobs = []
def run(self):
worker_found = False
task_name = 'exe_job'
try:
# Check if there are a workers that have the ssh job registered
# If not, bail out
gmadmin = gearman.GearmanAdminClient(gearman_servers)
stats = list(gmadmin.get_status())
for stat in stats:
if stat['task'] == task_name and stat['workers'] > 0:
worker_found = True
break
if worker_found:
l.debug("Found atleast one worker with the task: " + task_name + " registered")
else:
l.error("Did not find any workers with the task: " + task_name + " registered")
sys.exit(1)
# Gearman client should now submit tasks to the gearman workers
# We submit jobs based on what is specified in parallelism
self._gmclient = gmclient = gearman.GearmanClient(gearman_servers)
num_hosts = len(self._hosts)
num_parallel = get_num_hosts(self._parallelism, num_hosts)
if num_parallel == None:
l.error("The parallelism key should be a positive number")
sys.exit(1)
start = 0
while True:
for i in range(start, start + num_parallel):
try:
host = str(self._hosts[i]) # Gearman fails on unicode strings
debug_str = "job_id: " + self._job_id + ", command: " + self._command
debug_str += ", host: " + host + ", retries: " + str(self._retries)
l.debug("Submitting job with the following attributes to the gearman worker: " + debug_str)
worker_args = json.dumps({ "host": host, "command": self._command, "retries": str(self._retries) })
gmjob = gmclient.submit_job(task_name, worker_args, background=False, wait_until_complete=False)
self._gmjobs.append(gmjob)
except IndexError:
return
self.poll()
start = start + i + 1
except gearman.errors.ServerUnavailable:
l.error("Gearman server(s): " + str(gearman_servers) + " not available!")
sys.exit(1)
def poll(self):
try:
self._completed_gmjobs = self._gmclient.wait_until_jobs_completed(self._gmjobs, poll_timeout=self._timeout)
except gearman.errors.ServerUnavailable:
l.error("Gearman server(s): " + str(gearman_servers) + " not available!")
sys.exit(1)
for index, gmjob in enumerate(self._completed_gmjobs):
unique = gmjob.job.unique
output = json.loads(gmjob.result)
if gmjob.state == gearman.job.JOB_COMPLETE:
if output["rc"] == -1:
self._rcs[unique] = output["rc"]
else:
self._rcs[unique] = 0
elif gmjob.state == gearman.job.JOB_FAILED:
self._rcs[unique] = 1
elif gmjob.state == gearman.job.JOB_UNKNOWN:
self._rcs[unique] = 2
else:
self._rcs[unique] = 3
self._output[unique] = (output["host"], output["output"])
def success(self):
# Convert pct values into numbers
num_hosts = get_num_hosts(self._success_constraint, len(self._hosts))
if num_hosts == None:
l.error("The success_constraint should be a positive number")
sys.exit(1)
# Check the status codes for each host
success_count = 0
for rc in self._rcs.values():
if rc == 0:
success_count = success_count + 1
if success_count >= num_hosts:
return True
else:
return False
def output(self):
# Pretty print the output
print "Job id: " + self._job_id
for gmjobid, output in self._output.items():
print "Output for host: " + output[0] + "\n" + output[1]
wallace/module_invitationpolicy.py
# -*- coding: utf-8 -*-
# Copyright 2014 Kolab Systems AG (http://www.kolabsys.com)
#
# Thomas Bruederli (Kolab Systems) <[email protected]>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import datetime
import os
import tempfile
import time
from urlparse import urlparse
import urllib
import hashlib
import traceback
import re
from email import message_from_string
from email.parser import Parser
from email.utils import formataddr
from email.utils import getaddresses
import modules
import pykolab
import kolabformat
from pykolab import utils
from pykolab.auth import Auth
from pykolab.conf import Conf
from pykolab.imap import IMAP
from pykolab.xml import to_dt
from pykolab.xml import utils as xmlutils
from pykolab.xml import todo_from_message
from pykolab.xml import event_from_message
from pykolab.xml import participant_status_label
from pykolab.itip import objects_from_message
from pykolab.itip import check_event_conflict
from pykolab.itip import send_reply
from pykolab.translate import _
# define some constants used in the code below
ACT_MANUAL = 1
ACT_ACCEPT = 2
ACT_DELEGATE = 4
ACT_REJECT = 8
ACT_UPDATE = 16
ACT_CANCEL_DELETE = 32
ACT_SAVE_TO_FOLDER = 64
COND_IF_AVAILABLE = 128
COND_IF_CONFLICT = 256
COND_TENTATIVE = 512
COND_NOTIFY = 1024
COND_FORWARD = 2048
COND_TYPE_EVENT = 4096
COND_TYPE_TASK = 8192
COND_TYPE_ALL = COND_TYPE_EVENT + COND_TYPE_TASK
ACT_TENTATIVE = ACT_ACCEPT + COND_TENTATIVE
ACT_UPDATE_AND_NOTIFY = ACT_UPDATE + COND_NOTIFY
ACT_SAVE_AND_FORWARD = ACT_SAVE_TO_FOLDER + COND_FORWARD
ACT_CANCEL_DELETE_AND_NOTIFY = ACT_CANCEL_DELETE + COND_NOTIFY
FOLDER_TYPE_ANNOTATION = '/vendor/kolab/folder-type'
MESSAGE_PROCESSED = 1
MESSAGE_FORWARD = 2
policy_name_map = {
# policy values applying to all object types
'ALL_MANUAL': ACT_MANUAL + COND_TYPE_ALL,
'ALL_ACCEPT': ACT_ACCEPT + COND_TYPE_ALL,
'ALL_REJECT': ACT_REJECT + COND_TYPE_ALL,
'ALL_DELEGATE': ACT_DELEGATE + COND_TYPE_ALL, # not implemented
'ALL_UPDATE': ACT_UPDATE + COND_TYPE_ALL,
'ALL_UPDATE_AND_NOTIFY': ACT_UPDATE_AND_NOTIFY + COND_TYPE_ALL,
'ALL_SAVE_TO_FOLDER': ACT_SAVE_TO_FOLDER + COND_TYPE_ALL,
'ALL_SAVE_AND_FORWARD': ACT_SAVE_AND_FORWARD + COND_TYPE_ALL,
'ALL_CANCEL_DELETE': ACT_CANCEL_DELETE + COND_TYPE_ALL,
'ALL_CANCEL_DELETE_AND_NOTIFY': ACT_CANCEL_DELETE_AND_NOTIFY + COND_TYPE_ALL,
# event related policy values
'EVENT_MANUAL': ACT_MANUAL + COND_TYPE_EVENT,
'EVENT_ACCEPT': ACT_ACCEPT + COND_TYPE_EVENT,
'EVENT_TENTATIVE': ACT_TENTATIVE + COND_TYPE_EVENT,
'EVENT_REJECT': ACT_REJECT + COND_TYPE_EVENT,
'EVENT_DELEGATE': ACT_DELEGATE + COND_TYPE_EVENT, # not implemented
'EVENT_UPDATE': ACT_UPDATE + COND_TYPE_EVENT,
'EVENT_UPDATE_AND_NOTIFY': ACT_UPDATE_AND_NOTIFY + COND_TYPE_EVENT,
'EVENT_ACCEPT_IF_NO_CONFLICT': ACT_ACCEPT + COND_IF_AVAILABLE + COND_TYPE_EVENT,
'EVENT_TENTATIVE_IF_NO_CONFLICT': ACT_ACCEPT + COND_TENTATIVE + COND_IF_AVAILABLE + COND_TYPE_EVENT,
'EVENT_DELEGATE_IF_CONFLICT': ACT_DELEGATE + COND_IF_CONFLICT + COND_TYPE_EVENT,
'EVENT_REJECT_IF_CONFLICT': ACT_REJECT + COND_IF_CONFLICT + COND_TYPE_EVENT,
'EVENT_SAVE_TO_FOLDER': ACT_SAVE_TO_FOLDER + COND_TYPE_EVENT,
'EVENT_SAVE_AND_FORWARD': ACT_SAVE_AND_FORWARD + COND_TYPE_EVENT,
'EVENT_CANCEL_DELETE': ACT_CANCEL_DELETE + COND_TYPE_EVENT,
'EVENT_CANCEL_DELETE_AND_NOTIFY': ACT_CANCEL_DELETE_AND_NOTIFY + COND_TYPE_EVENT,
# task related policy values
'TASK_MANUAL': ACT_MANUAL + COND_TYPE_TASK,
'TASK_ACCEPT': ACT_ACCEPT + COND_TYPE_TASK,
'TASK_REJECT': ACT_REJECT + COND_TYPE_TASK,
'TASK_DELEGATE': ACT_DELEGATE + COND_TYPE_TASK, # not implemented
'TASK_UPDATE': ACT_UPDATE + COND_TYPE_TASK,
'TASK_UPDATE_AND_NOTIFY': ACT_UPDATE_AND_NOTIFY + COND_TYPE_TASK,
'TASK_SAVE_TO_FOLDER': ACT_SAVE_TO_FOLDER + COND_TYPE_TASK,
'TASK_SAVE_AND_FORWARD': ACT_SAVE_AND_FORWARD + COND_TYPE_TASK,
'TASK_CANCEL_DELETE': ACT_CANCEL_DELETE + COND_TYPE_TASK,
'TASK_CANCEL_DELETE_AND_NOTIFY': ACT_CANCEL_DELETE_AND_NOTIFY + COND_TYPE_TASK,
# legacy values
'ACT_MANUAL': ACT_MANUAL + COND_TYPE_ALL,
'ACT_ACCEPT': ACT_ACCEPT + COND_TYPE_ALL,
'ACT_ACCEPT_IF_NO_CONFLICT': ACT_ACCEPT + COND_IF_AVAILABLE + COND_TYPE_EVENT,
'ACT_TENTATIVE': ACT_TENTATIVE + COND_TYPE_EVENT,
'ACT_TENTATIVE_IF_NO_CONFLICT': ACT_ACCEPT + COND_TENTATIVE + COND_IF_AVAILABLE + COND_TYPE_EVENT,
'ACT_DELEGATE': ACT_DELEGATE + COND_TYPE_ALL,
'ACT_DELEGATE_IF_CONFLICT': ACT_DELEGATE + COND_IF_CONFLICT + COND_TYPE_EVENT,
'ACT_REJECT': ACT_REJECT + COND_TYPE_ALL,
'ACT_REJECT_IF_CONFLICT': ACT_REJECT + COND_IF_CONFLICT + COND_TYPE_EVENT,
'ACT_UPDATE': ACT_UPDATE + COND_TYPE_ALL,
'ACT_UPDATE_AND_NOTIFY': ACT_UPDATE_AND_NOTIFY + COND_TYPE_ALL,
'ACT_CANCEL_DELETE': ACT_CANCEL_DELETE + COND_TYPE_ALL,
'ACT_CANCEL_DELETE_AND_NOTIFY': ACT_CANCEL_DELETE_AND_NOTIFY + COND_TYPE_ALL,
'ACT_SAVE_TO_CALENDAR': ACT_SAVE_TO_FOLDER + COND_TYPE_EVENT,
'ACT_SAVE_AND_FORWARD': ACT_SAVE_AND_FORWARD + COND_TYPE_EVENT,
}
policy_value_map = dict([(v &~ COND_TYPE_ALL, k) for (k, v) in policy_name_map.iteritems()])
object_type_conditons = {
'event': COND_TYPE_EVENT,
'task': COND_TYPE_TASK
}
log = pykolab.getLogger('pykolab.wallace')
conf = pykolab.getConf()
mybasepath = '/var/spool/pykolab/wallace/invitationpolicy/'
auth = None
imap = None
write_locks = []
def __init__():
modules.register('invitationpolicy', execute, description=description())
def accept(filepath):
new_filepath = os.path.join(
mybasepath,
'ACCEPT',
os.path.basename(filepath)
)
cleanup()
os.rename(filepath, new_filepath)
filepath = new_filepath
exec('modules.cb_action_ACCEPT(%r, %r)' % ('invitationpolicy',filepath))
def reject(filepath):
new_filepath = os.path.join(
mybasepath,
'REJECT',
os.path.basename(filepath)
)
os.rename(filepath, new_filepath)
filepath = new_filepath
exec('modules.cb_action_REJECT(%r, %r)' % ('invitationpolicy',filepath))
def description():
return """Invitation policy execution module."""
def cleanup():
global auth, imap, write_locks
log.debug("cleanup(): %r, %r" % (auth, imap), level=9)
auth.disconnect()
del auth
# Disconnect IMAP or we lock the mailbox almost constantly
imap.disconnect()
del imap
# remove remaining write locks
for key in write_locks:
remove_write_lock(key, False)
def execute(*args, **kw):
global auth, imap
# (re)set language to default
pykolab.translate.setUserLanguage(conf.get('kolab','default_locale'))
if not os.path.isdir(mybasepath):
os.makedirs(mybasepath)
for stage in ['incoming', 'ACCEPT', 'REJECT', 'HOLD', 'DEFER', 'locks']:
if not os.path.isdir(os.path.join(mybasepath, stage)):
os.makedirs(os.path.join(mybasepath, stage))
log.debug(_("Invitation policy called for %r, %r") % (args, kw), level=9)
auth = Auth()
imap = IMAP()
filepath = args[0]
# ignore calls on lock files
if '/locks/' in filepath or kw.has_key('stage') and kw['stage'] == 'locks':
return False
log.debug("Invitation policy executing for %r, %r" % (filepath, '/locks/' in filepath), level=8)
if kw.has_key('stage'):
log.debug(_("Issuing callback after processing to stage %s") % (kw['stage']), level=8)
log.debug(_("Testing cb_action_%s()") % (kw['stage']), level=8)
if hasattr(modules, 'cb_action_%s' % (kw['stage'])):
log.debug(_("Attempting to execute cb_action_%s()") % (kw['stage']), level=8)
exec(
'modules.cb_action_%s(%r, %r)' % (
kw['stage'],
'invitationpolicy',
filepath
)
)
return filepath
else:
# Move to incoming
new_filepath = os.path.join(
mybasepath,
'incoming',
os.path.basename(filepath)
)
if not filepath == new_filepath:
log.debug("Renaming %r to %r" % (filepath, new_filepath))
os.rename(filepath, new_filepath)
filepath = new_filepath
# parse full message
message = Parser().parse(open(filepath, 'r'))
# invalid message, skip
if not message.get('X-Kolab-To'):
return filepath
recipients = [address for displayname,address in getaddresses(message.get_all('X-Kolab-To'))]
sender_email = [address for displayname,address in getaddresses(message.get_all('X-Kolab-From'))][0]
any_itips = False
recipient_email = None
recipient_emails = []
recipient_user_dn = None
# An iTip message may contain multiple events. Later on, test if the message
# is an iTip message by checking the length of this list.
try:
itip_events = objects_from_message(message, ['VEVENT','VTODO'], ['REQUEST', 'REPLY', 'CANCEL'])
except Exception, errmsg:
log.error(_("Failed to parse iTip objects from message: %r" % (errmsg)))
itip_events = []
if not len(itip_events) > 0:
log.info(_("Message is not an iTip message or does not contain any (valid) iTip objects."))
else:
any_itips = True
log.debug(_("iTip objects attached to this message contain the following information: %r") % (itip_events), level=9)
# See if any iTip actually allocates a user.
if any_itips and len([x['uid'] for x in itip_events if x.has_key('attendees') or x.has_key('organizer')]) > 0:
auth.connect()
# we're looking at the first itip object
itip_event = itip_events[0]
for recipient in recipients:
recipient_user_dn = user_dn_from_email_address(recipient)
if recipient_user_dn:
receiving_user = auth.get_entry_attributes(None, recipient_user_dn, ['*'])
recipient_emails = auth.extract_recipient_addresses(receiving_user)
recipient_email = recipient
# extend with addresses from delegators
# (only do this lookup for REPLY messages)
receiving_user['_delegated_mailboxes'] = []
if itip_event['method'] == 'REPLY':
for _delegator in auth.list_delegators(recipient_user_dn):
receiving_user['_delegated_mailboxes'].append(_delegator['_mailbox_basename'].split('@')[0])
log.debug(_("Recipient emails for %s: %r") % (recipient_user_dn, recipient_emails), level=8)
break
if not any_itips:
log.debug(_("No itips, no users, pass along %r") % (filepath), level=5)
return filepath
elif recipient_email is None:
log.debug(_("iTips, but no users, pass along %r") % (filepath), level=5)
return filepath
# for replies, the organizer is the recipient
if itip_event['method'] == 'REPLY':
organizer_mailto = str(itip_event['organizer']).split(':')[-1]
user_attendees = [organizer_mailto] if organizer_mailto in recipient_emails else []
else:
# Limit the attendees to the one that is actually invited with the current message.
attendees = [str(a).split(':')[-1] for a in (itip_event['attendees'] if itip_event.has_key('attendees') else [])]
user_attendees = [a for a in attendees if a in recipient_emails]
if itip_event.has_key('organizer'):
sender_email = itip_event['xml'].get_organizer().email()
# abort if no attendee matches the envelope recipient
if len(user_attendees) == 0:
log.info(_("No user attendee matching envelope recipient %s, skip message") % (recipient_email))
return filepath
log.debug(_("Receiving user: %r") % (receiving_user), level=8)
# set recipient_email to the matching attendee mailto: address
recipient_email = user_attendees[0]
# change gettext language to the preferredlanguage setting of the receiving user
if receiving_user.has_key('preferredlanguage'):
pykolab.translate.setUserLanguage(receiving_user['preferredlanguage'])
# find user's kolabInvitationPolicy settings and the matching policy values
type_condition = object_type_conditons.get(itip_event['type'], COND_TYPE_ALL)
policies = get_matching_invitation_policies(receiving_user, sender_email, type_condition)
# select a processing function according to the iTip request method
method_processing_map = {
'REQUEST': process_itip_request,
'REPLY': process_itip_reply,
'CANCEL': process_itip_cancel
}
done = None
if method_processing_map.has_key(itip_event['method']):
processor_func = method_processing_map[itip_event['method']]
# connect as cyrus-admin
imap.connect()
for policy in policies:
log.debug(_("Apply invitation policy %r for sender %r") % (policy_value_map[policy], sender_email), level=8)
done = processor_func(itip_event, policy, recipient_email, sender_email, receiving_user)
# matching policy found
if done is not None:
break
# remove possible write lock from this iteration
remove_write_lock(get_lock_key(receiving_user, itip_event['uid']))
else:
log.debug(_("Ignoring '%s' iTip method") % (itip_event['method']), level=8)
# message has been processed by the module, remove it
if done == MESSAGE_PROCESSED:
log.debug(_("iTip message %r consumed by the invitationpolicy module") % (message.get('Message-ID')), level=5)
os.unlink(filepath)
cleanup()
return None
# accept message into the destination inbox
accept(filepath)
def process_itip_request(itip_event, policy, recipient_email, sender_email, receiving_user):
"""
Process an iTip REQUEST message according to the given policy
"""
# if invitation policy is set to MANUAL, pass message along
if policy & ACT_MANUAL:
log.info(_("Pass invitation for manual processing"))
return MESSAGE_FORWARD
try:
receiving_attendee = itip_event['xml'].get_attendee_by_email(recipient_email)
log.debug(_("Receiving Attendee: %r") % (receiving_attendee), level=9)
except Exception, errmsg:
log.error("Could not find envelope attendee: %r" % (errmsg))
return MESSAGE_FORWARD
# process request to participating attendees with RSVP=TRUE or PARTSTAT=NEEDS-ACTION
is_task = itip_event['type'] == 'task'
nonpart = receiving_attendee.get_role() == kolabformat.NonParticipant
partstat = receiving_attendee.get_participant_status()
save_object = not nonpart or not partstat == kolabformat.PartNeedsAction
rsvp = receiving_attendee.get_rsvp()
scheduling_required = rsvp or partstat == kolabformat.PartNeedsAction
respond_with = receiving_attendee.get_participant_status(True)
condition_fulfilled = True
# find existing event in user's calendar
(existing, master) = find_existing_object(itip_event['uid'], itip_event['type'], itip_event['recurrence-id'], receiving_user, True)
# compare sequence number to determine a (re-)scheduling request
if existing is not None:
log.debug(_("Existing %s: %r") % (existing.type, existing), level=9)
scheduling_required = itip_event['sequence'] > 0 and itip_event['sequence'] > existing.get_sequence()
save_object = True
# if scheduling: check availability (skip that for tasks)
if scheduling_required:
if not is_task and policy & (COND_IF_AVAILABLE | COND_IF_CONFLICT):
condition_fulfilled = check_availability(itip_event, receiving_user)
if not is_task and policy & COND_IF_CONFLICT:
condition_fulfilled = not condition_fulfilled
log.debug(_("Precondition for object %r fulfilled: %r") % (itip_event['uid'], condition_fulfilled), level=5)
if existing:
respond_with = None
if policy & ACT_ACCEPT and condition_fulfilled:
respond_with = 'TENTATIVE' if policy & COND_TENTATIVE else 'ACCEPTED'
elif policy & ACT_REJECT and condition_fulfilled:
respond_with = 'DECLINED'
# TODO: only save declined invitation when a certain config option is set?
elif policy & ACT_DELEGATE and condition_fulfilled:
# TODO: delegate (but to whom?)
return None
# auto-update changes if enabled for this user
elif policy & ACT_UPDATE and existing:
# compare sequence number to avoid outdated updates
if not itip_event['sequence'] == existing.get_sequence():
log.info(_("The iTip request sequence (%r) doesn't match the referred object version (%r). Ignoring.") % (
itip_event['sequence'], existing.get_sequence()
))
return None
log.debug(_("Auto-updating %s %r on iTip REQUEST (no re-scheduling)") % (existing.type, existing.uid), level=8)
save_object = True
rsvp = False
# retain task status and percent-complete properties from my old copy
if is_task:
itip_event['xml'].set_status(existing.get_status())
itip_event['xml'].set_percentcomplete(existing.get_percentcomplete())
if policy & COND_NOTIFY:
send_update_notification(itip_event['xml'], receiving_user, existing, False)
# if RSVP, send an iTip REPLY
if rsvp or scheduling_required:
# set attendee's CN from LDAP record if yet missing
if not receiving_attendee.get_name() and receiving_user.has_key('cn'):
receiving_attendee.set_name(receiving_user['cn'])
# send iTip reply
if respond_with is not None and not respond_with == 'NEEDS-ACTION':
receiving_attendee.set_participant_status(respond_with)
send_reply(recipient_email, itip_event, invitation_response_text(itip_event['type']),
subject=_('"%(summary)s" has been %(status)s'))
elif policy & ACT_SAVE_TO_FOLDER:
# copy the invitation into the user's default folder with PARTSTAT=NEEDS-ACTION
itip_event['xml'].set_attendee_participant_status(receiving_attendee, respond_with or 'NEEDS-ACTION')
save_object = True
else:
# policy doesn't match, pass on to next one
return None
if save_object:
targetfolder = None
# delete old version from IMAP
if existing:
targetfolder = existing._imap_folder
delete_object(existing)
elif master and hasattr(master, '_imap_folder'):
targetfolder = master._imap_folder
delete_object(master)
if not nonpart or existing:
# save new copy from iTip
if store_object(itip_event['xml'], receiving_user, targetfolder, master):
if policy & COND_FORWARD:
log.debug(_("Forward invitation for notification"), level=5)
return MESSAGE_FORWARD
else:
return MESSAGE_PROCESSED
return None
def process_itip_reply(itip_event, policy, recipient_email, sender_email, receiving_user):
"""
Process an iTip REPLY message according to the given policy
"""
# if invitation policy is set to MANUAL, pass message along
if policy & ACT_MANUAL:
log.info(_("Pass reply for manual processing"))
return MESSAGE_FORWARD
# auto-update is enabled for this user
if policy & ACT_UPDATE:
try:
sender_attendee = itip_event['xml'].get_attendee_by_email(sender_email)
log.debug(_("Sender Attendee: %r") % (sender_attendee), level=9)
except Exception, errmsg:
log.error("Could not find envelope sender attendee: %r" % (errmsg))
return MESSAGE_FORWARD
# find existing event in user's calendar
# sets/checks lock to avoid concurrent wallace processes trying to update the same event simultaneously
(existing, master) = find_existing_object(itip_event['uid'], itip_event['type'], itip_event['recurrence-id'], receiving_user, True)
if existing:
# compare sequence number to avoid outdated replies?
if not itip_event['sequence'] == existing.get_sequence():
log.info(_("The iTip reply sequence (%r) doesn't match the referred object version (%r). Forwarding to Inbox.") % (
itip_event['sequence'], existing.get_sequence()
))
remove_write_lock(existing._lock_key)
return MESSAGE_FORWARD
log.debug(_("Auto-updating %s %r on iTip REPLY") % (existing.type, existing.uid), level=8)
updated_attendees = []
try:
existing.set_attendee_participant_status(sender_email, sender_attendee.get_participant_status(), rsvp=False)
existing_attendee = existing.get_attendee(sender_email)
updated_attendees.append(existing_attendee)
except Exception, errmsg:
log.error("Could not find corresponding attende in organizer's copy: %r" % (errmsg))
# append delegated-from attendee ?
if len(sender_attendee.get_delegated_from()) > 0:
existing.add_attendee(sender_attendee)
updated_attendees.append(sender_attendee)
else:
# TODO: accept new participant if ACT_ACCEPT ?
remove_write_lock(existing._lock_key)
return MESSAGE_FORWARD
# append delegated-to attendee
if len(sender_attendee.get_delegated_to()) > 0:
try:
delegatee_email = sender_attendee.get_delegated_to(True)[0]
sender_delegatee = itip_event['xml'].get_attendee_by_email(delegatee_email)
existing_delegatee = existing.find_attendee(delegatee_email)
if not existing_delegatee:
existing.add_attendee(sender_delegatee)
log.debug(_("Add delegatee: %r") % (sender_delegatee.to_dict()), level=9)
else:
existing_delegatee.copy_from(sender_delegatee)
log.debug(_("Update existing delegatee: %r") % (existing_delegatee.to_dict()), level=9)
updated_attendees.append(sender_delegatee)
# copy all parameters from replying attendee (e.g. delegated-to, role, etc.)
existing_attendee.copy_from(sender_attendee)
existing.update_attendees([existing_attendee])
log.debug(_("Update delegator: %r") % (existing_attendee.to_dict()), level=9)
except Exception, errmsg:
log.error("Could not find delegated-to attendee: %r" % (errmsg))
# update the organizer's copy of the object
if update_object(existing, receiving_user, master):
if policy & COND_NOTIFY:
send_update_notification(existing, receiving_user, existing, True)
# update all other attendee's copies
if conf.get('wallace','invitationpolicy_autoupdate_other_attendees_on_reply'):
propagate_changes_to_attendees_accounts(existing, updated_attendees)
return MESSAGE_PROCESSED
else:
log.error(_("The object referred by this reply was not found in the user's folders. Forwarding to Inbox."))
return MESSAGE_FORWARD
return None
def process_itip_cancel(itip_event, policy, recipient_email, sender_email, receiving_user):
"""
Process an iTip CANCEL message according to the given policy
"""
# if invitation policy is set to MANUAL, pass message along
if policy & ACT_MANUAL:
log.info(_("Pass cancellation for manual processing"))
return MESSAGE_FORWARD
# auto-update the local copy
if policy & ACT_UPDATE or policy & ACT_CANCEL_DELETE:
# find existing object in user's folders
(existing, master) = find_existing_object(itip_event['uid'], itip_event['type'], itip_event['recurrence-id'], receiving_user, True)
remove_object = policy & ACT_CANCEL_DELETE
if existing:
# on this-and-future cancel requests, set the recurrence until date on the master event
if itip_event['recurrence-id'] and master and itip_event['xml'].get_thisandfuture():
rrule = master.get_recurrence()
rrule.set_count(0)
rrule.set_until(existing.get_start() + datetime.timedelta(days=-1))
master.set_recurrence(rrule)
existing.set_recurrence_id(existing.get_recurrence_id(), True)
remove_object = False
# delete the local copy
if remove_object:
# remove exception and register an exdate to the main event
if master:
log.debug(_("Remove cancelled %s instance %s from %r") % (existing.type, itip_event['recurrence-id'], existing.uid), level=8)
master.add_exception_date(existing.get_start())
master.del_exception(existing)
success = update_object(master, receiving_user)
# delete main event
else:
success = delete_object(existing)
# update the local copy with STATUS=CANCELLED
else:
log.debug(_("Update cancelled %s %r with STATUS=CANCELLED") % (existing.type, existing.uid), level=8)
existing.set_status('CANCELLED')
existing.set_transparency(True)
success = update_object(existing, receiving_user, master)
if success:
# send cancellation notification
if policy & COND_NOTIFY:
send_cancel_notification(existing, receiving_user, remove_object)
return MESSAGE_PROCESSED
else:
log.error(_("The object referred by this cancel request was not found in the user's folders. Forwarding to Inbox."))
return MESSAGE_FORWARD
return None
def user_dn_from_email_address(email_address):
    """
        Resolves the given email address to a Kolab user entity
    """
    global auth

    if not auth:
        auth = Auth()
        auth.connect()

    # return cached value
    if user_dn_from_email_address.cache.has_key(email_address):
        return user_dn_from_email_address.cache[email_address]

    local_domains = auth.list_domains()

    if not local_domains == None:
        local_domains = list(set(local_domains.keys()))

        if not email_address.split('@')[1] in local_domains:
            user_dn_from_email_address.cache[email_address] = None
            return None

    log.debug(_("Checking if email address %r belongs to a local user") % (email_address), level=8)

    user_dn = auth.find_user_dn(email_address, True)

    if isinstance(user_dn, basestring):
        log.debug(_("User DN: %r") % (user_dn), level=8)
    else:
        log.debug(_("No user record(s) found for %r") % (email_address), level=9)

    # remember this lookup
    user_dn_from_email_address.cache[email_address] = user_dn

    return user_dn

user_dn_from_email_address.cache = {}
def get_matching_invitation_policies(receiving_user, sender_email, type_condition=COND_TYPE_ALL):
    # get user's kolabInvitationPolicy settings
    policies = receiving_user['kolabinvitationpolicy'] if receiving_user.has_key('kolabinvitationpolicy') else []

    if policies and not isinstance(policies, list):
        policies = [policies]

    if len(policies) == 0:
        policies = conf.get_list('wallace', 'kolab_invitation_policy')

    # match policies against the given sender_email
    matches = []
    for p in policies:
        if ':' in p:
            (value, domain) = p.split(':', 1)
        else:
            value = p
            domain = ''

        if domain == '' or domain == '*' or str(sender_email).endswith(domain):
            value = value.upper()
            if policy_name_map.has_key(value):
                val = policy_name_map[value]
                # append if type condition matches
                if val & type_condition:
                    matches.append(val & ~COND_TYPE_ALL)

    # add manual as default action
    if len(matches) == 0:
        matches.append(ACT_MANUAL)

    return matches
def imap_proxy_auth(user_rec):
    """
        Perform IMAP login using proxy authentication with admin credentials
    """
    global imap

    mail_attribute = conf.get('cyrus-sasl', 'result_attribute')
    if mail_attribute == None:
        mail_attribute = 'mail'

    mail_attribute = mail_attribute.lower()

    if not user_rec.has_key(mail_attribute):
        log.error(_("User record doesn't have the mailbox attribute %r set" % (mail_attribute)))
        return False

    # do IMAP proxy auth with the given user
    backend = conf.get('kolab', 'imap_backend')
    admin_login = conf.get(backend, 'admin_login')
    admin_password = conf.get(backend, 'admin_password')

    try:
        imap.disconnect()
        imap.connect(login=False)
        imap.login_plain(admin_login, admin_password, user_rec[mail_attribute])
    except Exception, errmsg:
        log.error(_("IMAP proxy authentication failed: %r") % (errmsg))
        return False

    return True
def list_user_folders(user_rec, type):
    """
        Get a list of the given user's private calendar/tasks folders
    """
    global imap

    # return cached list
    if user_rec.has_key('_imap_folders'):
        return user_rec['_imap_folders']

    result = []

    if not imap_proxy_auth(user_rec):
        return result

    folders = imap.list_folders('*')
    log.debug(_("List %r folders for user %r: %r") % (type, user_rec['mail'], folders), level=8)

    (ns_personal, ns_other, ns_shared) = imap.namespaces()

    for folder in folders:
        # exclude shared and other user's namespace
        if not ns_other is None and folder.startswith(ns_other) and user_rec.has_key('_delegated_mailboxes'):
            # allow shared folders from delegators
            if len([_mailbox for _mailbox in user_rec['_delegated_mailboxes'] if folder.startswith(ns_other + _mailbox + '/')]) == 0:
                continue

        # TODO: list shared folders the user has write privileges ?
        if not ns_shared is None and len([_ns for _ns in ns_shared if folder.startswith(_ns)]) > 0:
            continue

        metadata = imap.get_metadata(folder)
        log.debug(_("IMAP metadata for %r: %r") % (folder, metadata), level=9)

        if metadata.has_key(folder) and ( \
            metadata[folder].has_key('/shared' + FOLDER_TYPE_ANNOTATION) and metadata[folder]['/shared' + FOLDER_TYPE_ANNOTATION].startswith(type) \
            or metadata[folder].has_key('/private' + FOLDER_TYPE_ANNOTATION) and metadata[folder]['/private' + FOLDER_TYPE_ANNOTATION].startswith(type)):
            result.append(folder)

            if metadata[folder].has_key('/private' + FOLDER_TYPE_ANNOTATION):
                # store default folder in user record
                if metadata[folder]['/private' + FOLDER_TYPE_ANNOTATION].endswith('.default'):
                    user_rec['_default_folder'] = folder

                # store private and confidential folders in user record
                if metadata[folder]['/private' + FOLDER_TYPE_ANNOTATION].endswith('.confidential') and not user_rec.has_key('_confidential_folder'):
                    user_rec['_confidential_folder'] = folder

                if metadata[folder]['/private' + FOLDER_TYPE_ANNOTATION].endswith('.private') and not user_rec.has_key('_private_folder'):
                    user_rec['_private_folder'] = folder

    # cache with user record
    user_rec['_imap_folders'] = result

    return result
def find_existing_object(uid, type, recurrence_id, user_rec, lock=False):
"""
Search user's private folders for the given object (by UID+type)
"""
global imap
lock_key = None
if lock:
lock_key = get_lock_key(user_rec, uid)
set_write_lock(lock_key)
event = None
master = None
for folder in list_user_folders(user_rec, type):
log.debug(_("Searching folder %r for %s %r") % (folder, type, uid), level=8)
imap.imap.m.select(imap.folder_utf7(folder))
res, data = imap.imap.m.search(None, '(UNDELETED HEADER SUBJECT "%s")' % (uid))
for num in reversed(data[0].split()):
res, data = imap.imap.m.fetch(num, '(UID RFC822)')
try:
msguid = re.search(r"\WUID (\d+)", data[0][0]).group(1)
except Exception, errmsg:
log.error(_("No UID found in IMAP response: %r") % (data[0][0]))
continue
try:
if type == 'task':
event = todo_from_message(message_from_string(data[0][1]))
else:
event = event_from_message(message_from_string(data[0][1]))
# find instance in a recurring series
if recurrence_id and (event.is_recurring() or event.has_exceptions() or event.get_recurrence_id()):
master = event
event = master.get_instance(recurrence_id)
setattr(master, '_imap_folder', folder)
setattr(master, '_msguid', msguid)
# return master, even if instance is not found
if not event and master.uid == uid:
return (event, master)
if event is not None:
setattr(event, '_imap_folder', folder)
setattr(event, '_lock_key', lock_key)
setattr(event, '_msguid', msguid)
except Exception, errmsg:
log.error(_("Failed to parse %s from message %s/%s: %s") % (type, folder, num, traceback.format_exc()))
event = None
master = None
continue
if event and event.uid == uid:
return (event, master)
if lock_key is not None:
remove_write_lock(lock_key)
return (event, master)
def check_availability(itip_event, receiving_user):
"""
For the receiving user, determine if the event in question is in conflict.
"""
start = time.time()
num_messages = 0
conflict = False
# return previously detected conflict
if itip_event.has_key('_conflicts'):
return not itip_event['_conflicts']
for folder in list_user_folders(receiving_user, 'event'):
log.debug(_("Listing events from folder %r") % (folder), level=8)
imap.imap.m.select(imap.folder_utf7(folder))
res, data = imap.imap.m.search(None, '(UNDELETED HEADER X-Kolab-Type "application/x-vnd.kolab.event")')
num_messages += len(data[0].split())
for num in reversed(data[0].split()):
event = None
res, data = imap.imap.m.fetch(num, '(RFC822)')
try:
event = event_from_message(message_from_string(data[0][1]))
except Exception, errmsg:
log.error(_("Failed to parse event from message %s/%s: %r") % (folder, num, errmsg))
continue
if event and event.uid:
conflict = check_event_conflict(event, itip_event)
if conflict:
log.info(_("Existing event %r conflicts with invitation %r") % (event.uid, itip_event['uid']))
break
if conflict:
break
end = time.time()
log.debug(_("start: %r, end: %r, total: %r, messages: %d") % (start, end, (end-start), num_messages), level=9)
# remember the result of this check for further iterations
itip_event['_conflicts'] = conflict
return not conflict
def set_write_lock(key, wait=True):
    """
        Set a write-lock for the given key and wait if such a lock already exists
    """
    if not os.path.isdir(mybasepath):
        os.makedirs(mybasepath)

    if not os.path.isdir(os.path.join(mybasepath, 'locks')):
        os.makedirs(os.path.join(mybasepath, 'locks'))

    filename = os.path.join(mybasepath, 'locks', key + '.lock')
    locktime = 0

    if os.path.isfile(filename):
        locktime = os.path.getmtime(filename)

    # wait if file lock is in place
    while time.time() < locktime + 300:
        if not wait:
            return False
        log.debug(_("%r is locked, waiting...") % (key), level=9)
        time.sleep(0.5)
        locktime = os.path.getmtime(filename) if os.path.isfile(filename) else 0

    # touch the file
    if os.path.isfile(filename):
        os.utime(filename, None)
    else:
        open(filename, 'w').close()

    # register active lock
    write_locks.append(key)

    return True
def remove_write_lock(key, update=True):
    """
        Remove the lock file for the given key
    """
    global write_locks

    if key is not None:
        file = os.path.join(mybasepath, 'locks', key + '.lock')
        if os.path.isfile(file):
            os.remove(file)
            if update:
                write_locks = [k for k in write_locks if not k == key]

def get_lock_key(user, uid):
    return hashlib.md5("%s/%s" % (user['mail'], uid)).hexdigest()
def update_object(object, user_rec, master=None):
"""
Update the given object in IMAP (i.e. delete + append)
"""
success = False
saveobj = object
# updating a single instance only: use master event
if object.get_recurrence_id() and master:
saveobj = master
if hasattr(saveobj, '_imap_folder'):
if delete_object(saveobj):
saveobj.set_lastmodified() # update last-modified timestamp
success = store_object(object, user_rec, saveobj._imap_folder, master)
# remove write lock for this event
if hasattr(saveobj, '_lock_key') and saveobj._lock_key is not None:
remove_write_lock(saveobj._lock_key)
return success
def store_object(object, user_rec, targetfolder=None, master=None):
"""
Append the given object to the user's default calendar/tasklist
"""
# find default calendar folder to save object to if no target folder
# has already been specified.
if targetfolder is None:
targetfolders = list_user_folders(user_rec, object.type)
if not targetfolders == None and len(targetfolders) > 0:
targetfolder = targetfolders[0]
if targetfolder is None:
if user_rec.has_key('_default_folder'):
targetfolder = user_rec['_default_folder']
# use *.confidential folder for invitations classified as confidential
if object.get_classification() == kolabformat.ClassConfidential and user_rec.has_key('_confidential_folder'):
targetfolder = user_rec['_confidential_folder']
elif object.get_classification() == kolabformat.ClassPrivate and user_rec.has_key('_private_folder'):
targetfolder = user_rec['_private_folder']
if targetfolder == None:
log.error(_("Failed to save %s: no target folder found for user %r") % (object.type, user_rec['mail']))
return False
saveobj = object
# updating a single instance only: add exception to master event
if object.get_recurrence_id() and master:
object.set_lastmodified() # update last-modified timestamp
master.add_exception(object)
saveobj = master
log.debug(_("Save %s %r to user folder %r") % (saveobj.type, saveobj.uid, targetfolder), level=8)
try:
imap.imap.m.select(imap.folder_utf7(targetfolder))
result = imap.imap.m.append(
imap.folder_utf7(targetfolder),
None,
None,
saveobj.to_message(creator="Kolab Server <wallace@localhost>").as_string()
)
return result
except Exception, errmsg:
log.error(_("Failed to save %s to user folder at %r: %r") % (
saveobj.type, targetfolder, errmsg
))
return False
def delete_object(existing):
"""
Removes the IMAP object with the given UID from a user's folder
"""
targetfolder = existing._imap_folder
msguid = existing._msguid if hasattr(existing, '_msguid') else None
try:
imap.imap.m.select(imap.folder_utf7(targetfolder))
# delete by IMAP UID
if msguid is not None:
log.debug(_("Delete %s %r in %r by UID: %r") % (
existing.type, existing.uid, targetfolder, msguid
), level=8)
imap.imap.m.uid('store', msguid, '+FLAGS', '(\\Deleted)')
else:
res, data = imap.imap.m.search(None, '(HEADER SUBJECT "%s")' % existing.uid)
log.debug(_("Delete %s %r in %r: %r") % (
existing.type, existing.uid, targetfolder, data
), level=8)
for num in data[0].split():
imap.imap.m.store(num, '+FLAGS', '(\\Deleted)')
imap.imap.m.expunge()
return True
except Exception, errmsg:
log.error(_("Failed to delete %s from folder %r: %r") % (
existing.type, targetfolder, errmsg
))
return False
def send_update_notification(object, receiving_user, old=None, reply=True):
"""
Send a (consolidated) notification about the current participant status to organizer
"""
global auth
import smtplib
from email.MIMEText import MIMEText
from email.Utils import formatdate
from email.header import Header
from email import charset
# encode unicode strings with quoted-printable
charset.add_charset('utf-8', charset.SHORTEST, charset.QP)
organizer = object.get_organizer()
orgemail = organizer.email()
orgname = organizer.name()
if reply:
log.debug(_("Compose participation status summary for %s %r to user %r") % (
object.type, object.uid, receiving_user['mail']
), level=8)
auto_replies_expected = 0
auto_replies_received = 0
partstats = { 'ACCEPTED':[], 'TENTATIVE':[], 'DECLINED':[], 'DELEGATED':[], 'IN-PROCESS':[], 'COMPLETED':[], 'PENDING':[] }
for attendee in object.get_attendees():
parstat = attendee.get_participant_status(True)
if partstats.has_key(parstat):
partstats[parstat].append(attendee.get_displayname())
else:
partstats['PENDING'].append(attendee.get_displayname())
# look-up kolabinvitationpolicy for this attendee
if attendee.get_cutype() == kolabformat.CutypeResource:
resource_dns = auth.find_resource(attendee.get_email())
if isinstance(resource_dns, list):
attendee_dn = resource_dns[0] if len(resource_dns) > 0 else None
else:
attendee_dn = resource_dns
else:
attendee_dn = user_dn_from_email_address(attendee.get_email())
if attendee_dn:
attendee_rec = auth.get_entry_attributes(None, attendee_dn, ['kolabinvitationpolicy'])
if is_auto_reply(attendee_rec, orgemail, object.type):
auto_replies_expected += 1
if not parstat == 'NEEDS-ACTION':
auto_replies_received += 1
# skip notification until we got replies from all automatically responding attendees
if auto_replies_received < auto_replies_expected:
log.debug(_("Waiting for more automated replies (got %d of %d); skipping notification") % (
auto_replies_received, auto_replies_expected
), level=8)
return
roundup = ''
for status,attendees in partstats.iteritems():
if len(attendees) > 0:
roundup += "\n" + participant_status_label(status) + ":\n" + "\n".join(attendees) + "\n"
else:
roundup = "\n" + _("Changes submitted by %s have been automatically applied.") % (orgname if orgname else orgemail)
# list properties changed from previous version
if old:
diff = xmlutils.compute_diff(old.to_dict(), object.to_dict())
if len(diff) > 1:
roundup += "\n"
for change in diff:
if not change['property'] in ['created','lastmodified-date','sequence']:
new_value = xmlutils.property_to_string(change['property'], change['new']) if change['new'] else _("(removed)")
if new_value:
roundup += "\n- %s: %s" % (xmlutils.property_label(change['property']), new_value)
# compose different notification texts for events/tasks
if object.type == 'task':
message_text = _("""
The assignment for '%(summary)s' has been updated in your tasklist.
%(roundup)s
""") % {
'summary': object.get_summary(),
'roundup': roundup
}
else:
message_text = _("""
The event '%(summary)s' at %(start)s has been updated in your calendar.
%(roundup)s
""") % {
'summary': object.get_summary(),
'start': xmlutils.property_to_string('start', object.get_start()),
'roundup': roundup
}
if object.get_recurrence_id():
message_text += _("NOTE: This update only refers to this single occurrence!") + "\n"
message_text += "\n" + _("*** This is an automated message. Please do not reply. ***")
# compose mime message
msg = MIMEText(utils.stripped_message(message_text), _charset='utf-8')
msg['To'] = receiving_user['mail']
msg['Date'] = formatdate(localtime=True)
msg['Subject'] = utils.str2unicode(_('"%s" has been updated') % (object.get_summary()))
msg['From'] = Header(utils.str2unicode('%s' % orgname) if orgname else '')
msg['From'].append("<%s>" % orgemail)
smtp = smtplib.SMTP("localhost", 10027)
if conf.debuglevel > 8:
smtp.set_debuglevel(True)
success = False
retries = 5
while not success and retries > 0:
try:
success = smtp.sendmail(orgemail, receiving_user['mail'], msg.as_string())
log.debug(_("Sent update notification to %r: %r") % (receiving_user['mail'], success), level=8)
smtp.quit()
break
except Exception, errmsg:
log.error(_("SMTP sendmail error: %r") % (errmsg))
time.sleep(10)
retries -= 1
return success
def send_cancel_notification(object, receiving_user, deleted=False):
"""
Send a notification about event/task cancellation
"""
import smtplib
from email.MIMEText import MIMEText
from email.Utils import formatdate
from email.header import Header
from email import charset
# encode unicode strings with quoted-printable
charset.add_charset('utf-8', charset.SHORTEST, charset.QP)
log.debug(_("Send cancellation notification for %s %r to user %r") % (
object.type, object.uid, receiving_user['mail']
), level=8)
organizer = object.get_organizer()
orgemail = organizer.email()
orgname = organizer.name()
# compose different notification texts for events/tasks
if object.type == 'task':
message_text = _("The assignment for '%(summary)s' has been cancelled by %(organizer)s.") % {
'summary': object.get_summary(),
'organizer': orgname if orgname else orgemail
}
if deleted:
message_text += " " + _("The copy in your tasklist has been removed accordingly.")
else:
message_text += " " + _("The copy in your tasklist has been marked as cancelled accordingly.")
else:
message_text = _("The event '%(summary)s' at %(start)s has been cancelled by %(organizer)s.") % {
'summary': object.get_summary(),
'start': xmlutils.property_to_string('start', object.get_start()),
'organizer': orgname if orgname else orgemail
}
if deleted:
message_text += " " + _("The copy in your calendar has been removed accordingly.")
else:
message_text += " " + _("The copy in your calendar has been marked as cancelled accordingly.")
if object.get_recurrence_id():
message_text += "\n" + _("NOTE: This cancellation only refers to this single occurrence!")
message_text += "\n\n" + _("*** This is an automated message. Please do not reply. ***")
# compose mime message
msg = MIMEText(utils.stripped_message(message_text), _charset='utf-8')
msg['To'] = receiving_user['mail']
msg['Date'] = formatdate(localtime=True)
msg['Subject'] = utils.str2unicode(_('"%s" has been cancelled') % (object.get_summary()))
msg['From'] = Header(utils.str2unicode('%s' % orgname) if orgname else '')
msg['From'].append("<%s>" % orgemail)
smtp = smtplib.SMTP("localhost", 10027)
if conf.debuglevel > 8:
smtp.set_debuglevel(True)
try:
smtp.sendmail(orgemail, receiving_user['mail'], msg.as_string())
except Exception, errmsg:
log.error(_("SMTP sendmail error: %r") % (errmsg))
smtp.quit()
def is_auto_reply(user, sender_email, type):
    accept_available = False
    accept_conflicts = False

    for policy in get_matching_invitation_policies(user, sender_email, object_type_conditons.get(type, COND_TYPE_EVENT)):
        if policy & (ACT_ACCEPT | ACT_REJECT | ACT_DELEGATE):
            if check_policy_condition(policy, True):
                accept_available = True
            if check_policy_condition(policy, False):
                accept_conflicts = True

        # we have both cases covered by a policy
        if accept_available and accept_conflicts:
            return True

        # manual action reached
        if policy & (ACT_MANUAL | ACT_SAVE_TO_FOLDER):
            return False

    return False
def check_policy_condition(policy, available):
    condition_fulfilled = True
    if policy & (COND_IF_AVAILABLE | COND_IF_CONFLICT):
        condition_fulfilled = available
    if policy & COND_IF_CONFLICT:
        condition_fulfilled = not condition_fulfilled
    return condition_fulfilled
def propagate_changes_to_attendees_accounts(object, updated_attendees=None):
"""
Find and update copies of this object in all attendee's personal folders
"""
recurrence_id = object.get_recurrence_id()
for attendee in object.get_attendees():
attendee_user_dn = user_dn_from_email_address(attendee.get_email())
if attendee_user_dn:
attendee_user = auth.get_entry_attributes(None, attendee_user_dn, ['*'])
(attendee_object, master_object) = find_existing_object(object.uid, object.type, recurrence_id, attendee_user, True) # does IMAP authenticate
if attendee_object:
# find attendee's entry by one of its email addresses
attendee_emails = auth.extract_recipient_addresses(attendee_user)
for attendee_email in attendee_emails:
try:
attendee_entry = attendee_object.get_attendee_by_email(attendee_email)
except:
attendee_entry = None
if attendee_entry:
break
# copy all attendees from master object (covers additions and removals)
new_attendees = []
for a in object.get_attendees():
# keep my own entry intact
if attendee_entry is not None and attendee_entry.get_email() == a.get_email():
new_attendees.append(attendee_entry)
else:
new_attendees.append(a)
attendee_object.set_attendees(new_attendees)
if updated_attendees and not recurrence_id:
log.debug("Update Attendees %r for %s" % ([a.get_email()+':'+a.get_participant_status(True) for a in updated_attendees], attendee_user['mail']), level=8)
attendee_object.update_attendees(updated_attendees, False)
success = update_object(attendee_object, attendee_user, master_object)
log.debug(_("Updated %s's copy of %r: %r") % (attendee_user['mail'], object.uid, success), level=8)
else:
log.debug(_("Attendee %s's copy of %r not found") % (attendee_user['mail'], object.uid), level=8)
else:
log.debug(_("Attendee %r not found in LDAP") % (attendee.get_email()), level=8)
def invitation_response_text(type):
    footer = "\n\n" + _("*** This is an automated message. Please do not reply. ***")
    if type == 'task':
        return _("%(name)s has %(status)s your assignment for %(summary)s.") + footer
    else:
        return _("%(name)s has %(status)s your invitation for %(summary)s.") + footer
jQuery valid() Gives Wrong Value in Delegated Blur Handler
I am creating an ASP.NET Core web app, which uses jQuery Validate for client-side validation, and jQuery Unobtrusive Validation to configure validation using HTML5 data-* attributes. I have a text input, and I am handling its blur event via event delegation. The code looks something like this:

$(document).ready(() => {
    $('#my-container').on('blur', '.my-input-class', (event) => {
        var isValid = $(event.target).valid();
        // Do stuff with isValid's boolean value
    });
});
Unfortunately, if I enter an invalid value in the input then tab out, then isValid is still set to true. But if I click the input and click off again, then this second blur event correctly sets isValid to false.

I think somehow my blur handler is happening before the jQuery Validate blur handler, presumably because I'm using event delegation rather than directly handling input.onblur(). Event delegation is necessary though because the input is generated dynamically, after validate() has already been called. I've seen some posts suggest calling validator.element(), but because ASP.NET Core uses the unobtrusive validation library, I never have access to the validator object returned by validate(). So...

TL;DR

How do I ensure that valid() returns the correct value in a delegated event handler when using the unobtrusive validation library?
EDIT

While trimming up my page's HTML to post, I got an idea for what the issue might be. My input is actually using remote validation with the data-val-remote attribute. Perhaps my blur handler is just being called before the remote validation has sent a response, so the plugin thinks the input is still valid?
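One way to test this hypothesis, assuming the form has already been initialized by the unobtrusive plugin, relies on the fact that calling validate() again on the same form returns the existing validator instance, whose pendingRequest property counts in-flight remote validations:

$('#my-container').on('blur', '.my-input-class', (event) => {
    // validate() returns the existing validator for an already-initialized
    // form, so this does not re-initialize validation
    var validator = $(event.target).closest('form').validate();
    // non-zero while remote (data-val-remote) requests are still in flight
    console.log('pending remote requests:', validator.pendingRequest);
});

This is only a diagnostic sketch, but seeing a non-zero count inside the blur handler would confirm the race.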
Answer
It looks like the issue was indeed with mixing remote validation with a blur handler. I have since started using a more event-driven approach to validation.
$.validator.setDefaults({
    highlight: (element, errorClass, validClass) =>
        onValidating(element, false, errorClass, validClass),
    unhighlight: (element, errorClass, validClass) =>
        onValidating(element, true, errorClass, validClass)
});

function onValidating(element, valid, errorClass, validClass) {
    var e = new CustomEvent("validating", {
        detail: {
            valid: valid,
            errorClass: errorClass,
            validClass: validClass
        },
        bubbles: true
    });
    element.dispatchEvent(e);
}
The highlight and unhighlight callbacks provided to jQuery Validate's setDefaults() method each raise a synthetic validating event, with the valid property set to false or true, respectively. These events can then be subscribed to anywhere else in the app, allowing one to move onblur actions to those new handlers. In my case:
$('#my-container').on('validating', e => {
    var isValid = e.detail.valid;
    // Do stuff with isValid's boolean value
});
Because this technique uses setDefaults() and not validate(), it can be used with ASP.NET Core's unobtrusive validation plugin!
#!/usr/bin/env ruby

require 'micro-optparse'
require 'net/http'
require 'date'

options = Parser.new do |p|
  p.banner = "Command line interface to count keywords in blog.fefe.de per month.\nfefestat [options] [ keyphrase ...]"
  p.version = "fefestat v0.1"
  p.option :spark, "Output numbers only, separated by whitespace. For use with `spark'.", :default => false
  p.option :gnuplot, "Output in gnuplot data format. Plot `using 1:3:xtic(2)'", :default => false
  p.option :key_value, "Output in form of `YYYY-M count'", :default => true
end.process!

html = Net::HTTP.get('blog.fefe.de', "/?q=#{URI.escape(ARGV.join(' '))}")

# Collect the date headings (e.g. "Mon Jan 1 2018") from the result page
date_str = html.scan(/<h3>([A-Z][a-z]{2} [A-Z][a-z]{2} \d{1,2} \d{4})<\/h3>/)
date_str.flatten!

dates = Array.new
date_str.each do |date|
  d = Date.parse date
  dates.push "#{d.year}-#{d.month}"
end

# Count hits per year-month
count = Hash.new(0)
dates.each do |date|
  count.store(date, count[date] + 1)
end

if options[:spark]
  count.each do |k, v|
    puts v
  end
  Process.exit(1)
end

if options[:gnuplot]
  count.each_with_index do |(key, value), i|
    puts "#{i}\t#{key}\t#{value}"
  end
  Process.exit(1)
end

if options[:key_value]
  count.each do |k, v|
    puts "#{k} #{v}"
  end
  Process.exit(1)
end
Play A Video In Python With OpenCV Module
Mar 19, 2022 | Coding, Python
Introduction of the Project
Today, we will learn how to Play a Video in Python with OpenCV Module. OpenCV stands for Open Source Computer Vision: a library that helps in performing various tasks related to image processing and computer vision. Here, we demonstrate code that uses the OpenCV module to play a video (and, a little further below, to pause and resume it). OpenCV processes video frame by frame, so the core of the code is capturing each frame and displaying it in a window.
So let’s write code for it!
Requirements
1. You need Python to run the code. You can use VSCode or any python IDE.
2. OpenCV and Numpy modules must be installed on your system.
3. A demo video for it to process the given commands through code (here playing a video).
Steps To Play A Video In Python With OpenCV Module
Step 1: Install OpenCV if you haven’t it in your system.
Paste the below line of command in your command prompt and press enter.
pip install opencv-python
Step 2: Now copy this source code into your editor/IDE.
Source Code
# Import OpenCV module
import cv2

# Capture the video from its path
capture = cv2.VideoCapture('videos/sampleVideo.mp4')

while True:
    # Read each frame of the video
    ret_val, frame = capture.read()

    # Stop the loop when the video ends and no frame is returned
    if not ret_val:
        break

    # Resize the output video frame
    video = cv2.resize(frame, (600, 700))

    # Display the frames of the video
    cv2.imshow('Playing video using OpenCV in python', video)

    # Play the video; exit early when the Esc key (code 27) is pressed
    if cv2.waitKey(1) == 27:
        break

# Release the capture and destroy all of the HighGUI windows
capture.release()
cv2.destroyAllWindows()
Explanation Of The Code
In the beginning, we imported the OpenCV module.
1. First, we capture the video from its path.
2. After this, inside a loop, we read each frame of the video and resize it using the resize() function, breaking out of the loop when no more frames are returned.
3. Then, we display each video frame using the imshow() function until the Esc key is pressed.
4. Finally, we release the capture and destroy the video window.
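Since the introduction also mentions pausing, here is a minimal sketch of how the loop above could be extended to toggle pause with the spacebar (key code 32); the window name and frame size are assumed to match the earlier code:

paused = False
while True:
    if not paused:
        # Only read and draw new frames while playing
        ret_val, frame = capture.read()
        if not ret_val:
            break
        video = cv2.resize(frame, (600, 700))
        cv2.imshow('Playing video using OpenCV in python', video)
    key = cv2.waitKey(1)
    if key == 27:
        # Esc key: quit
        break
    elif key == 32:
        # Spacebar: toggle pause/play
        paused = not paused
capture.release()
cv2.destroyAllWindows()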
Output
The below picture shows the video played using the OpenCV module in python:
Play A Video In Python With OpenCV Module
Things to Remember
• Install the OpenCV module prior to pasting the code.
• Write the name of the module in lowercase only.
• Set the path of the video according to your video path.
Practical Guide to Redirects in NextJS
Jasser Mark Arioste
Hello, hustlers! In this article, you're going to learn best practices when using redirects in NextJS.
I also created a GitHub repo to provide better context to the code snippets in this article. You can easily check it out by running this command:
npx create-next-app -e https://github.com/jmarioste/nextjs-redirects-guide nextjs-redirects
What are redirects?
Redirects allow you to redirect an incoming request path to a different destination path.
Where to use redirects in NextJS
There are five (5) places where you can use redirects in NextJS. These five places are getStaticProps, getServerSideProps, middleware, API routes, and next.config.js. Depending on the size of your app, the optimal place to implement a redirect might be different.
For a small website, where there are few links, it's fine to place your redirects inside next.config.js. Learn more about redirects from the docs.
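For reference, a minimal next.config.js redirect (the /old-blog path here is just an illustration) looks like this:

// next.config.js
module.exports = {
  async redirects() {
    return [
      {
        source: '/old-blog/:slug',
        destination: '/blog/:slug',
        permanent: true, // sends a 308 status code
      },
    ]
  },
}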
For a large eCommerce app or news site, the redirects are usually managed in the CMS. In that case, it's better to catch all the redirects inside middleware.ts; we'll explore how to implement this later. (For example, eCommerce websites often add a 301 redirect when they shorten a product URL.)
Redirecting inside getServerSideProps
Unlike next.config.js, we can fine-tune the redirect logic inside getServerSideProps.
User flow Redirects
Sometimes we want to redirect the user based on his/her access at a given time. Let's call this type a user-flow redirect. Redirects like these should be on getServerSideProps.
For example, GitHub has a "sudo mode" that requires extra verification with an OTP code for sensitive actions. When you change app permissions for your GitHub account, for instance, it redirects you to this page:
Github sudo mode page
After entering the authentication code, you will be redirected to the actual page.
Below is an example to redirect to the login page if the user is not logged in:
import React from 'react'
import { GetServerSideProps } from 'next';
import { getSession } from "next-auth/react"

const RestrictedPage = () => {
  return (
    <div>
      This page is restricted and requires login.
    </div>
  );
};

export default RestrictedPage;

export const getServerSideProps: GetServerSideProps = async (
  ctx
) => {
  const redirectURL = encodeURIComponent(ctx.req.url!);
  const session = await getSession({ req: ctx.req })
  if (!session) {
    return {
      redirect: {
        destination: `/login?redirectUrl=${redirectURL}`,
        permanent: false
      }
    }
  }
  return {
    props: {
    },
  };
};
Updated Path or Slug Redirects
Sometimes, we want to redirect a request because the slug is updated or shortened. We can also handle this type of redirect inside getServerSideProps.
Suppose you have a route `/posts/a-very-long-url-that-is-bad-for-seo` it should redirect to `/posts/hello-world`, below is an example of how you might implement this in getServerSideProps.
// posts/[slug].tsx
...
export const getServerSideProps: GetServerSideProps = async (
  ctx
) => {
  const slug = ctx.params?.slug as string;
  const path = ctx.req.url as string;
  // check redirect
  const redirect = getRedirectByPath(path)
  if (redirect) {
    return {
      redirect: {
        destination: redirect.destination,
        // explicitly set the statusCode (301 or 302);
        // Next.js accepts either statusCode or permanent, not both
        statusCode: redirect.statusCode
      }
    }
  }
  const post = getPostBySlug(slug) ?? null;
  if (!post) {
    return {
      notFound: true,
    };
  }
  return {
    props: {
      post,
    },
  };
};
Explanation:
Line #6-8: First, we get the URL path, then check if there's a redirect for the path using getRedirectByPath. getRedirectByPath is a function that checks our CMS for a redirect mapped to a route. Note that in our GitHub repo, we only use data/redirects.json to manage the redirects. Below is the function signature:
// data/RedirectItem.ts
export interface RedirectItem {
  statusCode: 301 | 302
  source: string;
  destination: string;
}

// data/getRedirectByPath.ts
export default function getRedirectByPath(sourcePath: string): RedirectItem
Lines #10-18: If we find a RedirectItem, we return a redirect result built from it.
Note that doing this is fine if you have a small number of routes. Once your app gets bigger, It's better to handle them inside middleware.ts. Let me tell you why.
Suppose you have the following redirects manage inside your CMS:
#   Source                                           Destination                           Status Code
1   /posts/this-is-a-very-long-url                   /posts/hello-world                    301
2   /tutorials/this-tutortial-has-a-very-long-slug   /tutorials/tutorial-with-short-slug   301
The source has two different base routes: posts and tutorials. You'll have to implement redirects for both routes inside getServerSideProps. Imagine if you have 10 more base routes than this and you have to check 10 more redirects. I would say it's pretty tiring.
Redirecting inside getStaticProps
GetStaticProps is a little bit different from getServerSideProps: we cannot access the req object since the pages are generated at build time. Otherwise, placing a redirect inside getStaticProps is pretty similar to getServerSideProps.
Suppose you have a route `/tutorials/[slug].tsx` that is using ISR:
// tutorials/[slug].tsx
...
export const getStaticPaths: GetStaticPaths = async () => {
  return {
    paths: [],
    fallback: "blocking"
  }
}

export const getStaticProps: GetStaticProps = async (ctx) => {
  const slug = ctx.params?.slug as string;
  const tutorial = getTutorialBySlug(slug) ?? null;
  const path = `/tutorials/${slug}`; // notice that we cannot use req.url here
  const redirect = getRedirectByPath(path)
  if (redirect) {
    return {
      redirect: {
        destination: redirect.destination,
        // again, either statusCode or permanent - not both
        statusCode: redirect.statusCode
      }
    }
  }
  if (!tutorial) {
    return {
      notFound: true,
    };
  }
  return {
    props: {
      tutorial: tutorial,
    },
    revalidate: 300
  };
};
Explanation:
Lines #2-8, #32: We use fallback:"blocking" in combination with revalidate. We do this so that pages that were not generated at build time will not result in 404. And we can check if there's a redirect for it during page re-generation at runtime.
Lines #13-22: We check if a redirect exists, and redirect to that route.
Redirecting inside NextJS middleware.ts
If your website has a large number of pages and the redirects are managed by the CMS or some backend, it makes sense to place the redirect handling inside middleware.ts. First, create a file middleware.ts (if you are using TypeScript) or middleware.js (if you are using JavaScript) in the project's root directory.
Next, add a config that matches all the routes except API and static files:
// middleware.ts
export const config = {
  matcher: [
    /*
     * Match all request paths except for the ones starting with:
     * - api (API routes)
     * - _next/static (static files)
     * - favicon.ico (favicon file)
     */
    '/((?!api|_next/static|favicon.ico).*)',
  ],
}
Next, let's add the handler that checks if there's a redirect for the specific path:
// middleware.ts
...
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'
import { RedirectItem } from 'data/RedirectItem';

export async function middleware(request: NextRequest) {
  const source = request.nextUrl.pathname ?? '';
  const origin = request.nextUrl.origin;
  try {
    const response = await fetch(`${origin}/api/redirect-by-path`, {
      method: 'POST',
      body: JSON.stringify({
        source: source
      })
    });
    const data = await response.json() as { redirect: RedirectItem };
    const redirect = data.redirect;
    if (redirect) {
      return NextResponse.redirect(`${origin}${redirect.destination}`, {
        status: redirect.statusCode,
      })
    }
  } catch (error) {
    console.log(error)
  }
  return NextResponse.next()
}
Explanation:
Line #7: first we get the source path to check later if there is a redirect for this path.
Line #8: We get the origin URL because we have to use full URLs inside middleware.ts. We use this in lines #8 and #17.
Lines #10-15: We use the fetch API to check if there's a redirect from our backend service. In your case, this might be WordPress CMS, Strapi, Shopify, Sanity, or whatnot.
Lines #18-22: We get the data as JSON and if there's a redirect, we use NextResponse.redirect and pass in the full URL to redirect to the destination.
What happens if you use relative paths
If you use relative paths inside middleware you get this error:
[Error: URL is malformed "/tutorials/hello-world".
Please use only absolute URLs - https://nextjs.org/docs/messages/middleware-relative-urls]
This only occurs on production so remember to avoid relative URLs inside middleware.ts.
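One convenient way to build an absolute URL inside middleware is to construct it from the incoming request, so you don't have to concatenate the origin yourself:

// request.url is always an absolute URL inside middleware
return NextResponse.redirect(new URL('/tutorials/hello-world', request.url))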
NextJS 301 Redirects
By default, NextJS uses 307 for temporary and 308 for permanent redirects. If you want the status code to be 301 or 302, you can do:
export const getStaticProps: GetStaticProps = async (ctx) => {
  ...
  return {
    redirect: {
      destination: redirect.destination,
      statusCode: 301
    }
  }
  ...
};
Demo and Code
That's it! To give you more context to the code snippets, I created a NextJS redirect guide repo in GitHub. To simulate a database or external backend, all data and functions are inside the /data folder.
This is the link to the demo: https://nextjs-redirects-guide.vercel.app/. You can click a link and see where it redirects you.
Conclusion
We learned the different places to implement redirects in NextJS, and how to choose the right approach depending on the situation.
If you like this tutorial, please leave a like or share this article. For future tutorials like this, please subscribe to our newsletter or follow me on Twitter.
Statistical significance
In inferential statistics, a result is statistically significant when it is judged as unlikely to have occurred by sampling error alone.
Statistical software packages generally report statistical significance using a test statistic and a p (probability) value (ranging between 0 and 1). If the p-value is less than a pre-selected critical value (critical α) then in classical test theory it is considered to be "statistically significant".
The greater the statistical power, the greater the likelihood of a test being statistically significant.
Logic
If you were betting on coin tosses, how many heads in a row would someone else need to throw before you'd protest that something “wasn't right” (i.e., there was bias - it isn't a 50-50 coin)?
This is the logic as significance testing - basically, how unlikely would a set of results need to be before you'd conclude that it is different to one's expectations?
See also: A Coin-Flipping Exercise to Introduce the P-Value
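To make the logic concrete: for a fair coin, the probability of k heads in a row is (1/2)^k, so

P(5 heads in a row) = (1/2)^5 = 1/32 ≈ .031

With a conventional critical α of .05, five heads in a row would already be judged statistically significant evidence against a fair (50-50) coin, because a run that long would occur by chance less than 5% of the time.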
Based on the distributional properties of a sample dataset, we can extrapolate (guessestimate) about the probability of the observed differences or relationships existing in a population. In doing this, we are assuming that the sample data is representative and that data meets the assumptions associated with the inferential test.
A null hypothesis (H0) states the expected effect in the population (often no effect). A p-value is computed from sample data: it is the probability of obtaining results at least as extreme as those observed if H0 were true. The researcher tolerates some false positives (Type I error; critical α) in order to make a decision about H0.
History
Significance testing evolved during the 20th century and became an important scientific methodology.
Karl Pearson laid the foundation for significance testing as early as 1901 (Glaser, 1999).
During the 1920s-1930s, Sir Ronald Fisher (1925) developed significance testing for agricultural data to help determine agricultural effectiveness, e.g., whether plants grew better using fertilizer A vs. B. The method was used to test whether variation in agricultural output was due to chance or not.
Agricultural research designs couldn't be fully experimental because variables such as weather and soil quality couldn't be fully controlled, so it was necessary to determine whether variations in the DV were due to chance or to the IV(s).
Significance testing spread to other fields, including social sciences. The spread was aided by the development of computers and statistical method training.
Criticisms
The use of significance testing was critiqued as early as 1930. Cohen, in particular, provided a substantial critique during the 1980s and 1990s of the widespread use of Null Hypothesis Significance Testing (NHST), including its over-use and mis-use. This led to a critical mass of awareness, and changes were made to publication guidelines and teaching during the 2000s to avoid over-reliance on significance testing and to encourage the use of alternative and adjunct techniques, including consideration of effect sizes, confidence intervals, and statistical power.
The key criticisms include:
1. The null hypothesis is rarely true
2. Significance testing only provides a binary decision (yes or no) and the direction of the effect - but mostly we are interested in the size of the effect – i.e., how much of an effect?
3. Statistical vs. practical significance
4. Statistical significance is a function of ES, N, and critical α - e.g., statistical significance can be obtained even with very small population differences if N, ES, and/or critical α are large enough
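To illustrate point 4: for a correlation coefficient, t = r · √(N - 2) / √(1 - r²). With N = 10,000, even a trivial correlation of r = .02 gives t ≈ 2.0, which is statistically significant at α = .05 (two-tailed) despite the relationship explaining only 0.04% of the variance.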
Examples of the criticisms include[1]:
• "For example, Frank Yates (1951), a contemporary of Fisher, observed that the use of the null hypothesis significance test: 'has caused scientific research workers to pay undue attention to the results of the tests of significance that they perform on their data and too little attention to the estimates of the magnitude of the effects they are investigating... The emphasis on tests of significance, and the consideration of the results of each experiment in isolation, have had the unfortunate consequence that scientific workers often have regarded the execution of a test of significance on an experiment as the ultimate objective' (pp. 32-33)" (Kirk, 2001, p. 213)
• A more strongly worded criticism by Paul Meehl (1978) was: "I believe that the almost universal reliance on merely refuting the null hypothesis is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology" (p. 817)
• "The current method of hypothesis testing in the social sciences is under intense criticism, yet most political scientists are unaware of the important issues being raised. Criticisms focus on the construction and interpretation of a procedure that has dominated the reporting of empirical results for over fifty years. There is evidence that null hypothesis significance testing as practised in political science if deeply flawed and widely misunderstood. This is important since most empirical work argues the value of findings through the use of the null hypothesis significance test." (Gill, 1999, p. 647)
• “Historically, researchers in psychology have relied heavily on null hypothesis significance testing (NHST) as a starting point for many (but not all) of its analytic approaches. APA stresses that NHST is but a starting point and that additional reporting such as effect sizes, confidence intervals, and extensive description are needed to convey the most complete meaning of the results... complete reporting of all tested hypotheses and estimates of appropriate ESs and CIs are the minimum expectations for all APA journals.” (APA Publication Manual (6th ed., 2009, p. 33)
Practical significance
Statistical significance means that the observed mean differences are judged to be unlikely due to sampling error.
Practical significance is about whether the difference is large enough to be of value in a practical sense. In other words, is it an effect worth being concerned about – are these noticeable or worthwhile effects? e.g., a 5% increase in well-being probably has practical value
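A common index for judging the practical significance of a mean difference is Cohen's d = (M1 - M2) / SDpooled, with conventional benchmarks of 0.2 (small), 0.5 (medium), and 0.8 (large). For example, two groups with means of 105 and 100 and a pooled SD of 10 give d = (105 - 100) / 10 = 0.5, a medium-sized (and usually noticeable) effect regardless of whether p falls just above or below .05.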
Recommendations
1. Use traditional Fisherian logic methodology (inferential testing)
2. Use alternative and complementary techniques (ESs and CIs)
3. Emphasise practical significance
4. Recognise merits and shortcomings of each approach
Summary
Criticisms of NHST
1. Binary decision
2. Doesn't directly indicate ES
3. Dependent on N, ES, and critical α
4. Need to know practical significance
Recommendations
1. Use complementary or alternative techniques, including power, effect size (ES) and CIs
2. Wherever a p-level is reported, also report ES, N and critical α (a hypothetical example of such reporting follows)
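To illustrate recommendation 2, a hypothetical results sentence (the numbers are invented for illustration only) reporting effect size and a confidence interval alongside the p-level might read: "The treatment group scored higher than the control group, t(38) = 2.45, p = .019 (two-tailed, α = .05, N = 40), d = 0.78, 95% CI [0.13, 1.42]" - rather than reporting "p < .05" alone.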
References
1. Gill, J. (1999). The insignificance of null hypothesis significance testing. Political Research Quarterly, 52(3), 647-674.
2. Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56(5), 746-759.
3. Kirk, R. E. (2001). Promoting good statistical practices: Some Suggestions. Educational and Psychological Measurement, 61, 213-218. doi: 10.1177/00131640121971185
4. Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806-834.
Bradley Gore Blog - TypeScript posts (https://blog.bradleygore.com/)
NativeScript Repeater: Swipe to Edit / Delete
https://blog.bradleygore.com/2016/05/23/nativescript-repeater-swipe-to-edit-delete/ (May 23, 2016)
In a previous post I went over how to add some simple animations to the NativeScript Repeater. In this post, I'd like to take a look at how we can add some nice swipe-based functionality to items in the Repeater. Specifically, we'll be creating a Repeater whose items have two layers: a background layer containing edit and delete buttons, and a foreground layer that will contain the data-item info to display.
If you're not familiar with the Repeater, I encourage you to check out the post mentioned above before continuing as it contains some basic info we'll be building on.
Video Tutorial:
The ItemTemplate
Believe it or not, the most tricky part about this is going to be the template. In order to pull this off, we'll need to employ NativeScript's AbsoluteLayout. This layout is entirely different from the StackLayout in that it stacks its children via the Z axis instead of either X or Y. If you want an item moved along the X axis (left/right), or along the Y axis (up/down), you have to specify this explicitly. Our usage won't need to worry about X or Y at first, just need to get our elements stacking atop one another. Here is a breakdown of the components:
Background View
GridLayout with 3 columns, the one in the middle taking up as much space as possible. The ones on left and right will act as our Edit and Delete action items. This grid will have two rows - the second of which will span across all columns and act as a list item divider.
Foreground View
StackLayout housing some info we want users to see about each item in the list. We'll use a simple set of data that is a list of People, each having a name and a position.
<!--main-page.xml-->
<Page xmlns="http://schemas.nativescript.org/tns.xsd" navigatingTo="navigatingTo">
<StackLayout>
<Label text="Names" class="title"/>
<Repeater items="{{ people }}">
<Repeater.itemTemplate>
<AbsoluteLayout>
<!--BACKGROUND-->
<GridLayout class="actions" rows="60, auto" columns="80, *, 80">
<Label col="0" row="0" text="Edit" class="action-edit" tap="editPerson" />
<Label col="1" row="0" />
<Label col="2" row="0" text="Delete" class="action-delete" tap="deletePerson" />
<!--DIVIDER-->
<StackLayout row="1" colSpan="3" class="divider"></StackLayout>
</GridLayout>
<!--FOREGROUND-->
<StackLayout class="info" pan="onForegroundPan">
<Label text="{{ name }}" class="primary"/>
<Label text="{{ position }}" />
</StackLayout>
</AbsoluteLayout>
</Repeater.itemTemplate>
</Repeater>
</StackLayout>
</Page>
There are a couple of very important things going on here with regards to sizing. Take a look at the XML tag where I've declared the GridLayout: <GridLayout ... rows="60, auto" columns="80, *, 80">. This means that my first row will be 60 units tall (it will contain the 3 columns: Edit Button, Spacer, Delete Button) while my second row (which is the item divider) will be whatever height it needs (i.e. I could set its height to 2 or to 10 and it would take up whatever is specified). The next important thing to note is the columns: Column 1 takes up 80 units wide, Column 3 takes up 80 units wide, and Column 2 takes up the full remaining width. Here in a minute we'll need to think about the bounds when panning the foreground view, so these values here give us exactly what we'll need.
Also notice that we've declared some event handlers for the tap events of our Edit/Delete Labels, as well as the pan event of the foreground view.
Finally - we'll use CSS to style the background colors of the button-cells in the first row, style the divider, and set the height of our foreground view to match the height of the background view's first row (remember, that's 60 units in height). That is an important aspect to this approach - you have to set explicit heights so that you don't get funky overlap where a foreground row takes up more or less than a single background view's height and result in all teh sadz.
/*app.css*/
Repeater .actions {
background-color: lightblue;
width: 100%;
}
Repeater .actions Label {
horizontal-align: center;
padding: 20;
}
.action-edit {
color: darkgreen;
}
.action-delete {
color: darkred;
}
Repeater .info {
background-color: whitesmoke;
color: darkgray;
width: 100%;
height: 60;
vertical-align: center;
}
Repeater .info Label {
horizontal-align: center;
}
Repeater .info .primary {
font-size: 20;
}
Repeater .divider {
height: 2;
background-color: darkgray;
width: 100%;
}
Wiring Up Events
Our code file for this page will need to implement all the functions declared in the view. Here's a breakdown of what these are, and their purposes:
• navigatingTo: This event is fired when our page is being navigated to. It's where we can wire up the data source (aka bindingContext) for our page.
• editPerson: The background view exposes a UI element that should initiate some edit functionality when tapped. For this demo, we'll just console log the info of the Person being edited as our primary focus is on the panning of the foreground ;-)
• deletePerson: The background view also exposes a UI element that should delete a Person from the list when tapped. Here again, we'll just console log the info of the Person.
• onForegroundPan: This function is called while the foreground view is being "panned" (i.e. tap and drag). It's the focus of this post - just took us a minute to get there ;)
Knowing that this is the API of our Page's code file, let's go ahead and scaffold that out:
/*main-page.ts*/
// Imports
import { EventData } from "data/observable";
import { ObservableArray } from "data/observable-array";
import { Page } from "ui/page";
import { View } from "ui/core/view";
import { AbsoluteLayout } from "ui/layouts/absolute-layout";
import { PanGestureEventData, GestureStateTypes } from "ui/gestures";
// CONST or top-level vars will go here
// Event handler for Page "navigatingTo" event
export function navigatingTo(args: EventData) {
// Get the event sender
var page = <Page>args.object;
// Set the binding context - simple list of people
page.bindingContext = {
people: new ObservableArray([
{ name: 'Billy The Kid', position: 'Outlaw' },
{ name: 'Wyatt Earp', position: 'Lawman' },
{ name: 'Doc Holliday', position: 'Your Huckleberry'},
])
};
}
export function onForegroundPan(arg: PanGestureEventData) {
// Where the *magic* happens
}
export function editPerson(arg: EventData) {
// In a repeater itemView's tap event, the bindingContext of the item tapped
// is the item from the list of items at the corresponding index
// (i.e. third item's template is tapped, its binding context is the third item)
let dataItem: any = (<View>arg.object).bindingContext,
// since arg.object is the View that was tapped, we can get the full list
// of people easily as each View holds a ref to its Page, and we know
// what the page's bindingContext is b/c we set it in navigatingTo()
personList: any[] = (<View>arg.object).page.bindingContext.people,
idxToEdit = personList.indexOf(dataItem);
console.log(`Edit item #${idxToEdit+1} - ${dataItem.name}`);
}
export function deletePerson(arg: EventData) {
// Same exact mechanism as editPerson() to get proper index, etc...
let dataItem: any = (<View>arg.object).bindingContext,
personList: any[] = (<View>arg.object).page.bindingContext.people,
idxToDelete = personList.indexOf(dataItem);
console.log(`Delete item #${idxToDelete+1} - ${dataItem.name}`);
}
Okay, now we're cookin'! We just need to handle the pan event. This is actually super easy, but there's some setup we need to do first.
Remember how I pointed out the widths of the UI elements for editing/deleting a Person? If not, it's 80 units - and one is on the left of the screen, the other on the right, and in the middle is an empty spacer that just takes as much room as is available between them. If you recall, we also have to explicitly set the X and Y coordinate for our absolutely-positioned items if we want them to move from 0,0. These pieces of info give us the minimum X that the foreground should ever be allowed to move to, -80, and the maximum X that it should ever be allowed to move to, +80.
// CONST or top-level vars will go here
const
MIN_X: number = -80,
MAX_X: number = 80;
Now that we have the min/max that the user can pan the foreground to, let's write the code to make that happen!
export function onForegroundPan(arg: PanGestureEventData) {
let absLayout: AbsoluteLayout = <AbsoluteLayout>arg.object,
newX: number = absLayout.translateX + arg.deltaX;
if (newX >= MIN_X && newX <= MAX_X) {
absLayout.translateX = newX;
}
}
Notice here that the AbsoluteLayout, actually any View object, has a translateX property - this is how we can set the X coordinate of our absolutely-positioned view! Also notice that the PanGestureEventData interface has this deltaX property. This tells you the actual amount the user has moved the item regardless of where they started panning. When the pan starts, it starts at 0 and goes up or down from there depending on if the user pans left or right.
And it's that simple! But, have you seen the big flaw that we're left with? What happens if a user only pans part of the way? Wouldn't it be nice if we could anticipate which direction they were heading, and finish the transition for them or reset it back to 0? Fortunately, this is also super easy.
Let's create another const called THRESHOLD - this is going to be the percentage of the way the user needs to swipe the element in order for our code to complete the swipe instead of resetting to 0. Let's go ahead and set it to 0.5 - meaning the user has to swipe >= 1/2 of the way in either direction before letting go for us to take it the rest of the way, otherwise we'll push it back the opposite way until it hits the min, max, or zero.
const
MIN_X: number = -80,
MAX_X: number = 80,
THRESHOLD: number = 0.5;
Now that we have our threshold, we need to be able to tell when the user is done with their swiping, and we can take it from there. Fortunately, the PanGestureEventData interface has a state property that tells us exactly that! Let's update our onForegroundPan function to make sure we don't leave the user stuck between actions!
export function onForegroundPan(arg: PanGestureEventData) {
    // get the view reference and set the translateX, just as before
    let absLayout: AbsoluteLayout = <AbsoluteLayout>arg.object,
        newX: number = absLayout.translateX + arg.deltaX;
    if (newX >= MIN_X && newX <= MAX_X) {
        absLayout.translateX = newX;
    }
    // after that, check and see if the user is done panning
    // also check to make sure we aren't already at the min or max we can go
    if (arg.state === GestureStateTypes.ended && !(newX === MIN_X || newX === MAX_X)) {
        // init our destination X coordinate to 0, in case neither THRESHOLD has been hit
        let destX: number = 0;
        // if user hit or crossed the THRESHOLD either way, let's finish in that direction
        if (newX <= MIN_X * THRESHOLD) {
            destX = MIN_X;
        } else if (newX >= MAX_X * THRESHOLD) {
            destX = MAX_X;
        }
        absLayout.animate({
            translate: { x: destX, y: 0 },
            duration: 200
        });
    }
}
And boom! it's that easy with NativeScript!
Repeater Swipe Demo Gif
-Bradley
NativeScript Repeater + Animations
https://blog.bradleygore.com/2016/05/06/nativescript-repeater-animated-add-remove/ (May 6, 2016)
NativeScript has a couple different components for working with lists of items: the ListView and the Repeater. ListView is the beast of the two, handling scrolling, infinite item loading as you scroll, etc... in addition to repeating a set of items over a view template. The Repeater is less feature-packed, but is a solid choice if all you need is to repeat a view template for some items and the list won't be(come) large enough to have to scroll.
Video Tutorial
I've done many tutorials on my blog, but this is the first video tutorial I've done and I'm super pleased to have finally done this. I'm new to editing so it's not perfect, but good enough for attempt number one. Enjoy!
Wiring Up The Repeater
We're going to get started by creating a project based on a template by running these commands:
tns create repeaterDemo --template tns-template-hello-world-ts
cd repeaterDemo
tns platform add android
You can also add the iOS platform the same way as Android above.
This will scaffold out a starter "hello world" project that uses TypeScript. We're going to modify the app/main_page.ts and app/main_page.xml files.
Let's add a repeater to a page and wire it up to a simple view model. We'll have a button on the page that adds a new item to the list on tap, and each list item will have a delete button on the right side for deleting them. The Repeater has an items property that tells what its data source is, and it can also accept a child Repeater.itemTemplate element that defines what each item's view should look like.
<!-- main_page.xml -->
<page loaded="onPageLoad" xmlns="http://schemas.nativescript.org/tns.xsd">
<stack-layout>
<button tap="addItem" text="Add Item" horizontal-alignment="center" />
<repeater id="repeatedItemsList" items="{{ items }}">
<repeater.itemTemplate>
<grid-layout columns="*,auto" rows="auto,auto" style="margin-top: 5;">
<label col="0" text="{{ $value }}"/>
<button col="1" style="margin-left: 5; margin-right: 5;" text="Delete" tap="removeItem"/>
<!-- item separator -->
<stack-layout colSpan="3" row="1" style="background-color: lightgray; height: 1;"></stack-layout>
</grid-layout>
</repeater.itemTemplate>
</repeater>
</stack-layout>
</page>
Okay, this is pretty straight-forward. We're binding our repeater's items to an "items" property that will be on our view model, we have an item template for each item to inflate, and we have some buttons - 1 for adding a new item, and 1 per item for deleting the item. Now we just need to wire up the functions and view model!
//main_page.ts
import {Page} from 'ui/page';
import {EventData} from 'data/observable';
import {ObservableArray} from 'data/observable-array';
import {Repeater} from 'ui/repeater';
var thisPage: Page,
counter: number = 0;
export function onPageLoad(args: EventData) {
thisPage = <Page>args.object;
thisPage.bindingContext = { items: new ObservableArray([]) };
}
export function addItem() {
thisPage.bindingContext.items.push(counter.toString());
counter++;
}
export function removeItem(arg: EventData) {
//TODO: remove the specific item that correlates to the item clicked
}
Tap Events in Repeater
We have an empty removeItem function, so we just need to fill it in. When a tap event happens, there's an argument that is passed into the handler that contains an object property. This is the View for the item (or a child view, as is our button) and it will have a binding context, which will be the single item bound to that specific repeater item. So, in order to know which item we need to remove, we can just retrieve it by using indexOf on the bindingContext.items of the page:
//add a new import
import {View} from 'ui/core/view';
export function removeItem(arg: EventData) {
//arg.object.bindingContext will be the individual item it is bound to
let index = thisPage.bindingContext.items.indexOf((<View>arg.object).bindingContext);
thisPage.bindingContext.items.splice(index, 1);
}
Animating Item Add/Remove
So, now that we're adding/removing items it would be great to add some animations to our items. Fortunately for us, NativeScript views have sweet animation capabilities baked right in. Let's start with animating the item being deleted first, as it will actually be a bit simpler:
Remember that object property on the arg parameter? This is the reference to the button that was clicked. So, we can grab its parent (which, in this case, is the top-most element for the item) and animate it out of the repeated list. Once the animation is done we can remove the corresponding item from our repeater's data source. Implementing this is actually as simple as it sounds - we'll translate the X coordinate by -500 to animate it sliding left off screen:
//update removeItem to animate the item away before removing it from the data source
export function removeItem(arg: EventData) {
let index = thisPage.bindingContext.items.indexOf((<View>arg.object).bindingContext);
(<View>arg.object).parent.animate({
translate: {x: -500, y: 0},
duration: 300
}).then(() => thisPage.bindingContext.items.splice(index, 1));
}
The animations functionality in NativeScript is slick. It's easy to reason about, and uses promises - so if you want to do some action as soon as an animation is finished, it is just as easy as a .then(someFunctionToRun)!
Okay, now we want to animate the new items being added. Let's do a zoom-in effect here instead of a slide. The only slightly involved part here is that we need to animate the last item in the repeater, but it's not as convenient as when we remove items - as we aren't in a function that is conveniently given access to the view in question. So, before we can do that, we need to do some learnin' and see just how this Repeater thing works.
Looking at the source code, we can see that the Repeater actually only has one direct child element: itemsLayout and it defaults to a StackLayout. This makes sense, as each item's view will just be stacked atop another. While you can customize this, we're sticking with the default in this scenario. Seeing that our individual items aren't added directly to the Repeater, but rather to a StackLayout, it gives us what we need in order to get the last (newest) item in the list and animate it in:
//add some more imports
import {StackLayout} from "ui/layouts/stack-layout";
//update addItem to animate in the new item
export function addItem() {
thisPage.bindingContext.items.push(counter.toString());
counter++;
//get a handle to the newly added View, remember it's added to the StackLayout
let lastItemIndex = thisPage.bindingContext.items.length - 1,
//get a handle to the repeater
repeater: Repeater = <Repeater>thisPage.getViewById('repeatedItemsList'),
//get the child (a StackLayout by default)
repeaterStackLayout: StackLayout = <StackLayout>repeater.itemsLayout,
//get the view at the index of our new item
newChildView: View = repeaterStackLayout.getChildAt(lastItemIndex);
if (newChildView) {
//set the scale of X and Y to zero (zoomed out)
newChildView.scaleX = 0;
newChildView.scaleY = 0;
//animate the scales back to one over time to make a zoom in effect
newChildView.animate({
scale: { x: 1, y: 1 },
duration: 300
});
}
}
And that's it - we now have a list of items that animate when adding and removing them. With NativeScript, building rich animations into your views just couldn't be simpler and those native transitions just couldn't be smoother :)
Repeater Demo Gif
-Bradley
Lessons Learned from my first NativeScript Plugin
https://blog.bradleygore.com/2016/04/18/my-first-nativescript-plugin/ (April 18, 2016)
About a couple of months back, I started taking a look at NativeScript as a potential framework for my next mobile app. I'm the type of person that tends to get excited and just dive in and start building what I want to be building vs building the hello world starter apps, so I started working on an input form that this app will need. Right away I realized I would want to use a native Android TextInputLayout widget - essentially a text field with a placeholder that "floats" up to become the label, and supports an error message. This isn't part of the core Android runtime (it's part of the design support library for material design) and isn't directly exposed via NativeScript. However, NativeScript's documentation boasts of the ease of accessing native runtimes, and rightfully so in my opinion, so I thought I'd put that to the test and make a plugin. Let's take a look at some of the hurdles I ended up facing and some of the resources that helped me build my first NativeScript plugin (actually, my first NativeScript project altogether) - nativescript-textinputlayout.
Resources
I'll start with the resources that helped me through this as it's where I started when building the plugin. As with doing anything one has never done before, there is one tried-and-true method of pushing forward - finding where someone else did it (or similar) and building on top of that! These are the giant resources whose shoulders I built upon:
Hurdles
Picking a base class
If your plugin has any UI component at all, which mine did, the first thing you'll likely be doing is sub-classing one of the NativeScript components so that your component integrates with the runtime as automatically as possible. I looked at a few different plugins as well as the NativeScript source, and realized that there are actually a bunch of different components to pick from. Here's a list of the different components I tried with before finally getting it right, as well as some reasons they don't work:
• EditableTextBase: The TextInputLayout is essentially a text field, right? So why not inherit from this and just have all the text input functionality already right there?
• After further review of the Android TextInputLayout spec, I realized it actually is just a wrapper for an android EditText component. In order to have continued on with this approach I would have had to create a NativeScript TextField under the hood anyways, and then I wasn't sure how I'd keep bindings in sync, etc... so this approach seemed out...
• LayoutBase: Well, the Android TextInputLayout inherits from android.widget.LinearLayout, so I should probably use a Layout-ish component as my base class, right?
• This approach actually worked - my component rendered just fine and my bindings worked. However, if you went away from the view to another, and then navigated back to the view using the navigation history (back button), then the bindings were broken and I couldn't figure out how to reconnect them. Nooo! After asking some folks on slack, LayoutBase should only be used if I'm actually "laying out", or positioning, child elements and setting their bounds, etc... Admittedly, I should've paid better attention to the source on this, but just didn't catch it until it was explained to me. (Thanks Nathanael!)
• ContentView: My element isn't doing any layout stuff, so maybe it's just a View that gets one child (remember, it's a wrapper for a single text field). ContentView is allowed to have only one child element, so that works - right?
• On the surface, it looks like this would work. However, if you look into the Android TextInputLayout documentation, you'll notice that it adds padding at the top for the label, and potentially adds padding to the bottom for an error and/or a counter, while the text field sits between all that. Examining the ContentView closely, we can see that it only takes the measurements of its one child into account when getting rendered (see the layoutView getter function), so this actually won't work either because our TextInputLayout takes up some space, as well as the text field that gets added as its child.
• View: This is a base class for all kinds of different things in NativeScript, and appears to be the right choice for our component.
• Now we have a base class that seems to fit our needs. We can add a child element and it can have its own measurements (border, padding, etc...) apart from its child's size. Now, how do we handle rendering it, or children (or in this case one child) getting added to it, lifecycle events, etc?
Rendering your component
So we're going to sub-class off of NativeScript's core View, but the first thing we need to do is render the component. It doesn't really matter what kind of sweet functionality we have in mind if the component doesn't get rendered to the screen. Fortunately for us, the View definition shows that there's a _createUI method. As it turns out, any time NativeScript wants to create the UI of a View, this is the method it calls. Go figure :p Also, if your component correlates to a native object, this is the time to instantiate it. The View definition shows a _nativeView getter, so be sure to implement that also - else your view won't render! Let's sub-class from View and populate that method:
//textInputLayout.ts
import {View} from "ui/core/view";
class TextInputLayout extends View {
//typically, you'd have a .common.ts for common stuff
//and a .android.ts that extends common for android-specific stuff (same for ios)
//I'm showing in same class for simplicity
//provide getters to native component that this NativeScript component represents
get android() { return this._android; }
get _nativeView() { return this._android; }
//how we create the UI of our View
public _createUI() {
//easily access native runtime - like, for realz easy :)
this._android = new android.support.design.widget.TextInputLayout(this._context);
}
}
Working with children
At this point, we have an implementation of a View and it knows how to create its UI, now we just need to be able to add a child element to it. Fortunately, NativeScript's View component has some pretty nice mechanisms baked in to make this fairly trivial - it just takes some digging in the source to start seeing how it all comes together :).
Firstly, we need to make our View capable of having children. In the source, we can find an interface named AddChildFromBuilder and it dictates that anything implementing it will need to have a _addChildFromBuilder method. So, when we are declaring our component, we need to implement that interface (if not using TypeScript, you simply just have to supply the function and don't worry about declaring interface implementations). NativeScript builds up the object tree for the view based on the XML using various builder thingies (yeah, that's the technical term :p), and we want the NativeScript runtime to know that our component can handle child XML elements. Here's how we do this:
import {View} from "ui/core/view";
class TextInputLayout extends View implements View.AddChildFromBuilder {
public _addChildFromBuilder(name: string, value: any): void {
//the value is the object value - i.e. what the builder built based on the XML
}
}
Since the TextInputLayout is only compatible with text fields, one thing we need to do is do some checking on the child. So let's update the _addChildFromBuilder method to do this, and add some more imports to get the types of children we want to accept:
//new imports
import {TextView} from 'ui/text-view';
import {TextField} from 'ui/text-field';
//update _addChildFromBuilder to only accept those children
//notice I changed the second param from "any" type to either TextField or TextView
public _addChildFromBuilder(name: string, child: TextField | TextView): void {
if (!(child instanceof TextView || child instanceof TextField)) {
throw new Error('TextInputLayout may only have a <TextView> or <TextField> as a child');
}
//do something with the child here
}
Since the allowed child elements in this case are all sub-classed from View, we can even listen for certain events, such as 'loaded', to fire and then react accordingly:
// in _addChildFromBuilder()
child.on(View.loadedEvent, () => console.log('Child is fully loaded!'));
When an element is loaded, its native counterpart (i.e. child.android or child.ios) will be populated also. If you're building a component that allows children to be added, you want to make sure that the child's native counterpart gets added to your component's native counterpart. This may look something like this:
//remember earlier, we set the _android property in the _createUI method
//now, we can use the android getter to get a handle to it and add the
//*native* correlation of a child to our *native* component!
child.on(View.loadedEvent, () => this.android.addView(child.android));
Now the NativeScript runtime knows that we can have child elements in our XML, it can add those child elements using the builder, and our native component can get that child's native correlation added super easily. Here's an example of how you'd use the TextInputLayout plugin (which is available on npm):
<Page
xmlns="http://schemas.nativescript.org/tns.xsd"
<!-- pull in the namespace of the plugin -->
xmlns:TIL="nativescript-textinputlayout">
<StackLayout>
<!--use the TextInputLayout and provide a child text element-->
<!--TextInputLayout attributes omitted for simplicity-->
<TIL:TextInputLayout>
<!--ONE child element can be added, MUST be either TextField or TextView-->
<TextField text="{{ someViewModelText }}" />
</TIL:TextInputLayout>
</StackLayout>
</Page>
Lessons Learned
There are two things I'd probably do differently in hindsight.
• Gain a clear understanding of some of the core NativeScript classes first. While I learned a lot manually trying multiple classes to base-class off of, it would've certainly been easier in building the plugin had I started there :p
• Probably would've gone ahead and built one or two of the small tutorial apps. As excited as I was to dive right in, there's a lot to NativeScript and learning a bit more up front about view lifecycles, etc... would've been helpful.
Once I better understood what to sub-class off of and how it worked, getting the plugin wrapped up was really easy. Hopefully this post can help someone else that may be hitting those same hurdles :)
-Bradley
A NativeScript Font Icon Converter
https://blog.bradleygore.com/2016/03/28/font-icons-in-nativescript/ (March 28, 2016)
Getting set up to use Icon Fonts in a NativeScript app is fairly straight-forward. NativeScript documentation covers it, and Nathanael Anderson has a tutorial covering things in more depth. However, there's one annoyance I have with using them in NativeScript and this post will cover that and one way to resolve it.
The Annoyance
Once you're set up to use icon fonts, you'll typically have a CSS rule defined to set the font-family when a certain class is used:
.fa {
font-family: FontAwesome;
}
But, as you can see in the above referenced getting started docs/tutorial, CSS is not what sets the content as it would be if we were using HTML and Icon Fonts. We have to find the unicode and set the text property of our element to use that:
<Label class="fa" text="\uf293" />
So you must either memorize the unicode for your font icons or be resigned to looking them up every time.
One solution
NativeScript has a feature baked into it called Converters - essentially functions you can pipe an input to and have it converted to a different output without changing the source (i.e. a property on a model). You can also have application-wide converters by adding them to your application's resources, which is what we'll be using to make it easier to use our icon fonts!
Create our Converter Function
The first thing we'll want to do is to create the function that will be our converter. Let's create a new directory: app/utils/ and create a new TypeScript file within named fontIconConverter.ts. This converter is actually really simple - it just has a hash map of all the icons we want to use and their unicode value, then provides a function for converting strings, or parts of strings, to the appropriate unicode value. Notice below that we're also handling cases where an icon may be passed in within a string of text.
// app/utils/fontIconConverter.ts
const fontToUnicode = {
//I recommend using the full css class name of the icon (i.e. 'fa-bluetooth')
// instead of shortening (i.e. just 'bluetooth') in case you need to use the
// text along side an icon
'fa-bluetooth': '\uf293',
'fa-hashtag': '\uf292',
'fa-usb': '\uf287'
};
function valueToFontUnicode(value: string): string {
//if the value isn't a key in our hash map, just return the value as it is
return fontToUnicode[value] || value;
}
export default function fontIconConverter(value: string): string {
//check for spaces so we can handle things like "fa-bluetooth some text here"
if (/\s/.test(value)) {
return value.split(/\s/).map(t => valueToFontUnicode(t)).join(' ');
} else {
return valueToFontUnicode(value);
}
}
Make the converter accessible application-wide
Now that we've defined a converter function, we just need to add it to our application resources to make it available application-wide. Simply navigate to your app/app.ts file, and make the following additions:
// app/app.ts
import * as application from 'application';
import fontIconConverter from './utils/fontIconConverter';

application.resources['fontIconConverter'] = fontIconConverter;
Use the converter in your view
Now that we've added the converter to our application resources, we can use it in any view like this:
<Label class="fa" text="{{ 'fa-bluetooth (bluetooth icon)' | fontIconConverter }}" />
Is it still clunky/ugly? Yep. But, I find it easier to remember the actual names of the icons over their unicode values. So, in an app where a lot of different icons are being used, this may just be helpful. If you're only using one or two icons then maybe it wouldn't be worth it - you'd have to decide :)
Quick Caveat
Any time that you are using the binding syntax {{ ... }} your view must have a binding context or it won't interpolate. Even if all you're doing is taking a static string, such as is the case with the font icon converter, and running it through a converter. If needed, you can just set the binding context to an empty object and that works, but kinda feels hacky to me.
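For example, here's a minimal sketch of that workaround (the page file name and load handler here are placeholders, not from the original post):

// some-page.ts
import { EventData } from 'data/observable';
import { Page } from 'ui/page';

export function onPageLoad(args: EventData) {
    let page = <Page>args.object;
    // an empty object is enough for {{ ... | fontIconConverter }} bindings to interpolate
    page.bindingContext = {};
}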
It's certainly not a perfect/elegant solution, but as far as I can find it's either typing in the unicode directly to the label or some converter like this when it comes to using font icons. If you know of a better approach, feel free to share in the comments!
-Bradley
Custom Components in Nativescript
https://blog.bradleygore.com/2016/03/12/custom-nativescript-components/ (March 12, 2016)
These past couple weeks I've been digging into the NativeScript mobile framework, and am just super impressed by the flexibility baked into it. That flexibility makes it crazy simple for you to create custom, and potentially reusable, UI components.
Getting Started
While this post won't go deep into getting started, we'll hit the high points. If you want a deeper walkthrough, NativeScript has an excellent Getting Started tutorial.
Install NativeScript
npm install -g nativescript
Navigate to a directory
cd some/path/to/projects/folder/
• Don't create the child directory (i.e. myAwesomeApp/) yet - NativeScript will do this when it scaffolds our project
Initialize a NativeScript project
In version 1.6 the CLI was updated so that you can initialize projects based off of templates. For this post, we'll use tns-template-hello-world-ts. This is a simple 'hello world' type app that uses TypeScript.
*I found that it was easier to read through the NativeScript source code once I started using TypeScript
tns create demoApp --template tns-template-hello-world-ts
Navigate to demoApp directory
cd demoApp
Add Platform
tns platform add android
You can also add iOS if you want, but for this post we'll just deal with Android.
Creating the component
Creating a basic custom component is simple, so we'll start there and then add some stuff to it as we go. If you look in your current directory, you'll see an app/ directory. We want to add our application's code, including our custom component, in there. So let's add these directories and files:
• app/widgets/app-header/ -> new directories to contain our custom component
• app/widgets/app-header/header.xml -> XML file containing the UI of the widget
Next, let's update the XML file to have some sort of UI components we'd want to reuse:
<GridLayout rows="auto" columns="*,auto" paddingTop="10">
<Label verticalAlignment="center" marginLeft="5"
id="lblHeadingText" row="0" col="0" text="My Awesome App" />
<Image src="res://icon" row="0" col="1" stretch ="none"
marginRight="5" />
</GridLayout>
We're essentially after a label that gives the heading text on the left, and an image on the right. This post is about working with custom components, so I won't spend much time on the xml attributes, but here are some high points:
• GridLayout lays things out in, well, a grid (surprise!). Cool aspect of this layout is that you can set sizes of your rows or columns (if you've ever worked with XAML, this will look kind of familiar). Since we have 1 row with 2 columns, we told it to auto-size the first row, set the first column to * and the second column to auto-size. This means the first column will take up all available space left after the second column gets just what it needs.
• Attributes such as paddingTop, verticalAlignment, and marginLeft are style attributes. NativeScript also supports a subset of CSS that can be used to style your elements - I'm just doing inline for the sake of this post.
Using the component
When you had NativeScript scaffold out the app, it generated the "main" view files for you. You should see the file app/main-page.xml - open it up, and let's pull in our custom component!
We do this by doing these 2 simple things:
• Declaring the namespace that our component resides in by using the path that holds our custom component's files: xmlns:appHeader="widgets/app-header"
• Using the component in the view by referencing the namespace and the filename: <appHeader:header />
NativeScript uses the value given after the namespace to know which file(s) to retrieve. If our file was named appHeader.xml instead, then we'd use it like this: <appHeader:appHeader />
<!-- app/main-page.xml -->
<Page xmlns="http://schemas.nativescript.org/tns.xsd"
xmlns:appHeader="widgets/app-header"
loaded="pageLoaded">
<StackLayout>
<!-- use the header widget that lives inside widgets/app-header -->
<appHeader:header />
<Label text="Tap the button" class="title"/>
<Button text="TAP" tap="{{ tapAction }}" />
<Label text="{{ message }}" class="message" textWrap="true"/>
</StackLayout>
</Page>
Now, if we run the app (tns emulate android, optionally specify an image to use via --avd someImageYouHave) we should see that header component right at the top! But, that isn't much fun... so let's add some functionality to our component!
Enhancing the component
Having some sort of 'reusable' header that has hard-coded header text is pretty useless, and not exactly reusable. Let's update our component so that you can use a headingText attribute in the XML to pass in whatever you want the header to be! This will require us to have a 'code behind' for our UI element, so let's create this file: app/widgets/app-header/header.ts and populate it with an event our <GridLayout> can call once it's loaded:
import view = require("ui/core/view");
import label = require("ui/label");
//not exporting this, just creating the class so that TypeScript can understand
// that our custom component has a 'headingText' property
class HeadingView extends view.View {
headingText: string;
}
// this will get called when our <GridLayout> is loaded
export function onLoad(args: any) {
//args.object is a reference to the view, and every View instance has a "loaded" event
let thisView: HeadingView = <HeadingView>args.object;
//only change the label's text if we were given some custom headingText
if (thisView.headingText) {
let label = <label.Label>thisView.getViewById('lblHeadingText');
label.text = thisView.headingText;
}
}
Now, we just need to wire it up to call the onLoad event in the app/widgets/app-header/header.xml file. Simply add the loaded="onLoad" attribute:
<GridLayout loaded="onLoad">
* omitted all other existing attributes for brevity - do not delete anything from this file, only add the new attribute.
Finally, we simply need to pass a headingText value in our app/main-page.xml file:
<appHeader:header headingText="Main View" />
Now, run the livesync command tns livesync android --emulator --watch and see your custom header using the text you passed into it!
Going Further
This is certainly just the start of what kind of awesomez you can make with custom components! For instance, NativeScript provides Observables where you could have components that are data-bound instead of just passing in a static string. Hopefully this is enough to whet your appetite and get you started!
-Bradley
Looker Blog : Data Matters
Looker and Vertica : ORC Reader
Erin Franz, Data Analyst
Oct 30, 2015
Vertica's HDFS connector provides a seamless experience for accessing data in HP Vertica and data housed in HDFS, by exposing both through Vertica's querying interface. This means users can take advantage of Vertica's performance and analytical functions across both native Vertica and Hadoop environments from one central location. And since Looker queries Vertica directly, data on HDFS can be explored and visualized in the same way.
Starting with Vertica 7.1 SP2, HP Vertica has improved its HDFS connector with enhanced ORC file processing capability, including column pruning and predicate pushdown. This makes querying data in files on HDFS (or stored locally) behave much more like querying HP Vertica's own native columnar format, providing significant performance gains over regular text files. In this tutorial, we'll show how to create ORC files from text files, use the Vertica ORC reader to create queryable tables from those files in HP Vertica, and finally explore and visualize the data using Looker.
ORC File Creation
Optimized Row Columnar (ORC) format is an efficient format for storing Hive data. It improves reading, writing, and processing data by dividing rows into groups called stripes and by storing data within stripes in column order. This effectively enables column pruning, which mimics the columnar properties of data stored in HP Vertica and other MPP databases.
We can easily create an ORC table from an existing text file in Hive. First we’ll create an External Table referencing the original file. In this case the file is pipe delimited and contains 5 columns describing Order Items.
DROP TABLE IF EXISTS tmp_order_items;
CREATE EXTERNAL TABLE tmp_order_items (
id INT,
order_id INT,
sale_price DOUBLE,
inventory_item_id INT,
returned_at STRING
)
ROW FORMAT
DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;
LOAD DATA INPATH "/thelook/order_items_out.dat.0"
OVERWRITE INTO TABLE tmp_order_items;
Once we’ve created an External Table on the original text file, we can use that existing table to convert the data into ORC format. This can be done via a simple CREATE TABLE AS statement:
DROP TABLE IF EXISTS order_items;
CREATE TABLE order_items
STORED AS ORC
AS SELECT *
FROM tmp_order_items;
DROP TABLE tmp_order_items;
This process creates a new External Table referencing the data in the original table, but stored in a new file in ORC format. Now that we have the data in the desired ORC format, we can utilize Vertica’s ORC reader and query the data from HP Vertica directly.
Creating the Vertica External Tables
Now that we’ve created the ORC files, we can create External Tables in Vertica so that we can query them. This is similar to what we did in Hive, but we’ll reference the Vertica data types in the create statement and the location of the ORC files, whether that’s on HDFS or locally stored.
CREATE SCHEMA thelook;
CREATE EXTERNAL TABLE thelook.order_items (
id INT,
order_id INT,
sale_price float,
inventory_item_id INT,
returned_at varchar(80)
)
AS COPY FROM '/home/lookerops/thelook_orc/order_items/*' ORC;
Now we can query the order_items table in Vertica as if it were any table native to the database. To test, we’ll run a simple query:
SELECT *
FROM thelook.order_items
limit 10
The Look Order Items
Exploring Data in Looker
Using Looker, we can query the ORC based External Table in HP Vertica directly to provide both exploration and visualization capabilities. Because tables created in Vertica using the ORC reader can be queried in the same way as any native Vertica table, Looker is also able to work with them in the same way.
First, we’ll create a view file for order_items. This is accomplished using LookML, which is the modeling language of Looker that serves as an abstraction of SQL. In the view file we’ll describe the dimensions and measures we’d like to expose to our end users. Dimensions will reference columns in the underlying database as well as custom derived dimensions that we can group by to produce measures, which are aggregates like counts, sums, and averages. Because Looker queries Vertica directly, these dimensions and measures effectively function as snippets of SQL used to build the desired dataset and/or visualization directly from HP Vertica.
For example, below is a sample of the view file for order_items. You can see some of the definitions simply reference underlying table columns, and some require transformation.
- view: order_items
sql_table_name: thelook.order_items
fields:
- dimension: id
primary_key: true
type: int
sql: ${TABLE}.id
- dimension: inventory_item_id
type: int
sql: ${TABLE}.inventory_item_id
- dimension: order_id
type: int
sql: ${TABLE}.order_id
- dimension_group: returned
type: time
timeframes: [time, date, week, month, year]
sql: ${TABLE}.returned_at::timestamp
- dimension: sale_price
type: number
sql: ${TABLE}.sale_price
- measure: revenue
type: sum
sql: ${sale_price}
- measure: count
type: count
drill_fields: [id, orders.id, inventory_items.id]
We can then expose these definitions via explores, which provide a base table and join relationships between other tables we’d additionally like to expose to the end user. Assuming we’ve defined dimensions and measures for other tables in the database, such as orders and users, we can establish the explore definition below:
- explore: order_items
joins:
- join: orders
foreign_key: order_id
- join: inventory_items
foreign_key: inventory_item_id
- join: users
foreign_key: orders.user_id
- join: products
foreign_key: inventory_items.product_id
We can then create Looks (saved query results) in Looker by selecting the desired dimensions and measures from the defined Explore. This enables the end business user to build reports, visualizations, and dashboards directly from Vertica and the underlying ORC tables without having to write Vertica SQL or HiveQL.
Business Pulse Dashboard
Variables
A variable is a portion of memory used to store information. Here is an example of how to declare and assign a variable:
var nomeVariabile;
nomeVariabile = "contenuto";

or

var nomeVariabile = "contenuto";
Some rules:
• variable names are case sensitive, so uppercase and lowercase letters are treated as distinct;
• the variable name must not be a reserved keyword, must not start with a number, cannot contain spaces, and cannot contain special characters (see the examples below);
• if the value is a string, it must be enclosed in single or double quotes;
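A few added examples (the identifiers are invented for illustration) of names that follow or break these rules:

// Valid declarations
var userName = "Ada";      // letters are fine
var total2 = 10;           // digits are allowed after the first character

// Invalid declarations - each of these causes a SyntaxError
// var 2total = 10;        // starts with a number
// var user name = "Ada";  // contains a space
// var var = 5;            // "var" is a reserved keyword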
Variables can hold different types of values:
• a text string;
• an integer or decimal numeric value;
• a boolean value;
• the NULL value;
• a vector or array;
Numeric variables can use several assignment operators:
• a = b; assigns the value of b to the variable a;
• a += b; is equivalent to a = a + b; if the variables are strings this performs concatenation rather than addition (see the example after this list);
• a -= b; is equivalent to a = a - b;
• a *= b; is equivalent to a = a * b;
• a /= b; is equivalent to a = a / b;
• a %= b; is equivalent to a = a % b;
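A short added example showing how += adds numbers but concatenates strings:

var n = 5;
n += 2;      // n is now 7 (numeric addition)
var s = "5";
s += 2;      // s is now "52" (string concatenation)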
Variables can be:
• global: can be used in all functions;
• local: can be used only inside the function where they are declared (see the example below);
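A minimal added sketch of global versus local scope:

var globalVar = "visible everywhere";    // global: declared outside any function

function demo() {
    var localVar = "visible only here";  // local: exists only inside this function
    console.log(globalVar);              // works: globals are visible inside functions
    console.log(localVar);               // works
}

demo();
console.log(globalVar);                  // works
// console.log(localVar);               // ReferenceError: localVar is not defined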
If we need to declare a value that never changes, we can define a constant as follows:
const MIACOSTANTE = 123;
class ParagraphsFieldSettingsTest
Test the ParagraphsFieldSettings Process Plugin.
@group paragraphs
@coversDefaultClass \Drupal\paragraphs\Plugin\migrate\process\ParagraphsFieldSettings
Hierarchy
• class \Drupal\Tests\paragraphs\Unit\migrate\ParagraphsFieldSettingsTest extends \Drupal\Tests\migrate\Unit\process\MigrateProcessTestCase
Expanded class hierarchy of ParagraphsFieldSettingsTest
File
paragraphs/tests/src/Unit/migrate/ParagraphsFieldSettingsTest.php, line 14
Namespace
Drupal\Tests\paragraphs\Unit\migrate
View source
class ParagraphsFieldSettingsTest extends MigrateProcessTestCase {

  /**
   * {@inheritdoc}
   */
  protected function setUp() : void {
    $this->plugin = new ParagraphsFieldSettings([], 'paragraphs_field_settings', []);
    parent::setUp();
  }

  /**
   * Test setting target_type for paragraphs fields.
   */
  public function testParagraphsFieldSettings() {
    $this->row->expects($this->any())
      ->method('getSourceProperty')
      ->with('type')
      ->willReturn('paragraphs');
    $value = $this->plugin->transform([], $this->migrateExecutable, $this->row, 'settings');
    $this->assertEquals(['target_type' => 'paragraph'], $value);
  }

  /**
   * Test leaving target_type empty for non-paragraphs fields.
   */
  public function testNonParagraphFieldSettings() {
    $this->row->expects($this->any())
      ->method('getSourceProperty')
      ->with('type')
      ->willReturn('something_else');
    $value = $this->plugin->transform([], $this->migrateExecutable, $this->row, 'settings');
    $this->assertEmpty($value);
  }

  /**
   * Test leaving target_type alone for other field types that may have set it.
   */
  public function testTaxonomyParagraphFieldSettings() {
    $this->row->expects($this->any())
      ->method('getSourceProperty')
      ->with('type')
      ->willReturn('taxonomy_term');
    $value = $this->plugin->transform(['target_type' => 'some_preset_vaue'], $this->migrateExecutable, $this->row, 'settings');
    $this->assertEquals(['target_type' => 'some_preset_vaue'], $value);
  }

}
Members

Name | Modifiers | Type | Description
ParagraphsFieldSettingsTest::setUp | protected | function |
ParagraphsFieldSettingsTest::testNonParagraphFieldSettings | public | function | Test leaving target_type empty for non-paragraphs fields.
ParagraphsFieldSettingsTest::testParagraphsFieldSettings | public | function | Test setting target_type for paragraphs fields.
ParagraphsFieldSettingsTest::testTaxonomyParagraphFieldSettings | public | function | Test leaving target_type alone for other field types that may have set it.
|
__label__pos
| 0.729411 |
Skip to main content
Upgrading from Share Modal v2
This is a brief summary of how to move from the (EVM only) Access Control Conditions format used in the old Lit Share Modal to the new Unified Access Control Condition format used in the new v3 build. More information on Unified Access Control Conditions can be found here
There are two changes to be aware of: first, the addition of a conditionType property within each individual condition, which defines what kind of AuthSig is required to resolve it; and second, a change in the syntax of the object passed to the saveSigningCondition and getSignedToken methods in the litNodeClient.
New conditionType property
Within the individual conditions, a new conditionType property has been added. The share modal currently supports Solana and EVM chains, which are differentiated by the values 'solRpc' and 'evmBasic' respectively.
Old (EVM only) Access Control Conditions format for checking a wallet with address 0x50e2dac5e78B5905CB09495547452cEE64426db2.
const accessControlConditions = [
{
contractAddress: '',
standardContractType: '',
chain: 'ethereum',
method: '',
parameters: [
':userAddress',
],
returnValueTest: {
comparator: '=',
value: '0x50e2dac5e78B5905CB09495547452cEE64426db2'
}
}
]
New Unified Access Control Conditions format for the same condition as above. Note the addition of the conditionType property.
const unifiedAccessControlConditions = [
{
conditionType: 'evmBasic',
contractAddress: '',
standardContractType: '',
chain: 'ethereum',
method: '',
parameters: [
':userAddress',
],
returnValueTest: {
comparator: '=',
value: '0x50e2dac5e78B5905CB09495547452cEE64426db2'
}
}
]
The new format can also be used to denote Solana conditions. Note the change of the conditionType to 'solRpc'. Here is an example of checking for a wallet with address 88PoAjLoSqrTjH2cdRWq4JEezhSdDBw3g7Qa6qKQurxA.
const unifiedAccessControlConditions = [
{
conditionType: 'solRpc',
method: "",
params: [
":userAddress"
],
chain: 'solana',
returnValueTest: {
key: "",
comparator: "=",
value: "88PoAjLoSqrTjH2cdRWq4JEezhSdDBw3g7Qa6qKQurxA",
},
  }
]
Conditions from different chains can be combined using AND/OR operators. This example checks for ownership of an EVM wallet of address 0x50e2dac5e78B5905CB09495547452cEE64426db2 OR ownership of a Solana wallet with address 6XmeyeYtSd31Eby2syaRkpXKY2GMMd3n3MEwTM5B7riD.
const unifiedAccessControlConditions = [
{
conditionType: 'evmBasic',
contractAddress: '',
standardContractType: '',
chain: 'ethereum',
method: '',
parameters: [
':userAddress',
],
returnValueTest: {
comparator: '=',
value: '0x50e2dac5e78B5905CB09495547452cEE64426db2'
}
  },
{ operator: 'or' },
{
conditionType: 'solRpc',
    method: '',
    params: [
      ':userAddress'
    ],
    chain: 'solana',
    returnValueTest: {
      key: '',
comparator: '=',
value: '6XmeyeYtSd31Eby2syaRkpXKY2GMMd3n3MEwTM5B7riD'
}
}
]
saveSigningCondition syntax change
Old (EVM only) Access Control Condition format for saving conditions:
var ethAuthSig = await LitJsSdk.checkAndSignAuthMessage({
chain: "ethereum",
});
await litNodeClient.saveSigningCondition({
accessControlConditions: accessControlConditions,
authSig: ethAuthSig,
resourceId,
});
New Unified Access Control Conditions format. The new format passes in a different AuthSig for each condition type under a specific property name. Note too, the change from accessControlConditions to unifiedAccessControlConditions. Here is an example of what it would look like to save conditions that used both Solana and EVM condition types.
Note: if saving only EVM conditions, only an 'ethereum' AuthSig is required. Likewise, if only Solana conditions are saved, only a 'solana' AuthSig is required.
var solAuthSig = await LitJsSdk.checkAndSignAuthMessage({
chain: "solana",
});
var ethAuthSig = await LitJsSdk.checkAndSignAuthMessage({
chain: "ethereum",
});
await litNodeClient.saveSigningCondition({
unifiedAccessControlConditions,
authSig: {
solana: solAuthSig,
ethereum: ethAuthSig, // note that the key here is "ethereum" for any and all EVM chains. If you're using Polygon, for example, you should still have "ethereum" here.
},
resourceId,
});
getSignedToken syntax change
The getSignedToken method changes in the same way as saveSigningCondition.
var solAuthSig = await LitJsSdk.checkAndSignAuthMessage({
chain: "solana",
});
var ethAuthSig = await LitJsSdk.checkAndSignAuthMessage({
chain: "ethereum",
});
await litNodeClient.getSignedToken({
unifiedAccessControlConditions,
authSig: {
solana: solAuthSig,
ethereum: ethAuthSig,
},
resourceId,
});
More information on Unified Access Control Conditions can be found here
|
__label__pos
| 0.844678 |
Ticket #6457: trac_6457-ideal_intersection.patch
File trac_6457-ideal_intersection.patch, 1.8 KB (added by davidloeffler, 10 years ago)
patch against 4.1.alpha2
• sage/rings/number_field/number_field_ideal.py
#6457: intersection of ideals in a number field
diff -r 3ebf91bde442 sage/rings/number_field/number_field_ideal.py
--- a/sage/rings/number_field/number_field_ideal.py
+++ b/sage/rings/number_field/number_field_ideal.py
@@ -639,6 +639,42 @@
         self.__integral_split = (self*denominator, denominator)
         return self.__integral_split
 
+    def intersection(self, other):
+        r"""
+        Return the intersection of self and other.
+
+        EXAMPLE::
+
+            sage: K.<a> = QuadraticField(-11)
+            sage: p = K.ideal((a + 1)/2); q = K.ideal((a + 3)/2)
+            sage: p.intersection(q) == q.intersection(p) == K.ideal(a-2)
+            True
+
+        An example with non-principal ideals::
+
+            sage: L.<a> = NumberField(x^3 - 7)
+            sage: p = L.ideal(a^2 + a + 1, 2)
+            sage: q = L.ideal(a+1)
+            sage: p.intersection(q) == L.ideal(8, 2*a + 2)
+            True
+
+        A relative example::
+
+            sage: L.<a,b> = NumberField([x^2 + 11, x^2 - 5])
+            sage: A = L.ideal([15, (-3/2*b + 7/2)*a - 8])
+            sage: B = L.ideal([6, (-1/2*b + 1)*a - b - 5/2])
+            sage: A.intersection(B) == L.ideal(-1/2*a - 3/2*b - 1)
+            True
+        """
+        other = self.number_field().ideal(other)
+        mod = self.free_module().intersection(other.free_module())
+        L = self.number_field()
+        if L.is_absolute():
+            elts = [L(x.list()) for x in mod.gens()]
+        else:
+            elts = [sum([x[i] * L.absolute_generator()**i for i in xrange(L.absolute_degree())]) for x in mod.gens()]
+        return L.ideal(elts)
+
     def is_integral(self):
         """
         Return True if this ideal is integral.
I'm looking for fast Python implementations of the gradient descent optimization algorithm. I have a convex problem with no constraints, so for now I'm using the BFGS algorithm implemented in SciPy (minimize).
Is there anything faster / more scalable on multi-core systems?
Parallel gradient descent has been implemented in this repository in Python. It should have a familiar interface, since it's being developed for implementation as a scikit-learn feature.
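If you just want the core idea without another dependency, here's a minimal sketch (my own illustration, not code from that repository) of data-parallel batch gradient descent for least squares, where each worker process computes the gradient over its own chunk of rows:
import numpy as np
from multiprocessing import Pool

def chunk_gradient(args):
    X, y, w = args
    # gradient of 0.5 * ||Xw - y||^2 restricted to this chunk of rows
    return X.T @ (X @ w - y)

def parallel_gd(X, y, lr=1e-3, iters=200, n_workers=4):
    w = np.zeros(X.shape[1])
    X_chunks = np.array_split(X, n_workers)
    y_chunks = np.array_split(y, n_workers)
    with Pool(n_workers) as pool:  # remember the usual __main__ guard on Windows
        for _ in range(iters):
            grads = pool.map(chunk_gradient,
                             [(Xc, yc, w) for Xc, yc in zip(X_chunks, y_chunks)])
            w -= lr * sum(grads) / len(X)  # summed chunk gradients -> mean gradient
    return w
Note that for a gradient this cheap, process overhead can easily outweigh the parallel speedup - the win comes with large data or expensive per-row work.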
Category: How do I do X?
Updated
2023-08-03
This solution is summarized from an archived support forum post. This information may have changed. If you notice an error, please let us know in Discord.
Implementing Server-side Pagination using PostgreSQL in Appsmith (Part 4 - Filtering)
Issue
As I was building my application, I realized that I needed to add server-side filtering to make it more user-friendly. To do this, I needed to provide a way for my users to select specific fields from the table and provide a filter value for those fields. I achieved this by using a select widget and an input widget, and updating the query to use their values. With this update, users are now able to filter by title and body without any issues.
Resolution
In this final part of the series, we learned how to implement server-side filtering in our application. We added a select widget to allow users to select specific fields and an input widget to provide a filter value for that field. Then, we updated our query to use both widgets and filter the data accordingly.
To implement server-side filtering using PostgreSQL, we disabled the default filtering functionality in the table widget and used the following query:
SELECT * FROM posts WHERE {{W_tableFilter.selectedOptionValue ?? 'title'}} ILIKE '{{'%' + (W_tableFilter.selectedOptionValue ? W_tableFilterInput.text : Table1.searchText) + '%'}}' LIMIT {{Table1.pageSize}} OFFSET {{(Table1.pageNo - 1) * Table1.pageSize}};
In this query, W_tableFilter.selectedOptionValue refers to the currently selected field of the select widget and W_tableFilterInput.text refers to the input of the input widget. By using these widgets together, we can filter data on the server-side and provide more efficient and accurate results.
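For instance, if a user selects the title field, types hello into the filter input, and the table is showing 10 rows on page 1, the bindings above would resolve to something like:
SELECT * FROM posts WHERE title ILIKE '%hello%' LIMIT 10 OFFSET 0;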
Overall, implementing server-side filtering in Appsmith using PostgreSQL is a simple process that can greatly improve the performance and functionality of our applications.
Type Safe Linear Algebra in Scala
Thanks to Scala, lately I've been appreciating type safety more and more when programming. It was an adjustment coming from Python, R, and C, but the performance benefits and the fact that I can be pretty sure that when my code compiles, it will run properly mean that I can deploy code with much higher confidence.
However, there’s one area of my development life where type safety hasn’t done much for me – specifically numerical linear algebra and, by consequence, machine learning. In this post I’ll explain what that problem is, and propose a solution to backport type safety onto linear algebra operations in Scala, in a non-intrusive way.
The Problem
Anyone who has taken a basic linear algebra class or played around with numerical code knows about dimension alignment - in python it looks like this:
>>> np.random.rand(2,2) * np.random.rand(3,1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: operands could not be broadcast together with shapes (2,2) (3,1)
In Scala, using the awesome breeze library, it looks like this:
scala> import breeze.linalg._
import breeze.linalg._
scala> DenseMatrix.rand(2,2) * DenseMatrix.rand(3,1)
java.lang.IllegalArgumentException: requirement failed: Dimension mismatch!
at scala.Predef$.require(Predef.scala:233)
at breeze.linalg.operators.DenseMatrixMultiplyStuff$implOpMulMatrix_DMD_DMD_eq_DMD$.apply(DenseMatrixOps.scala:45)
at breeze.linalg.operators.DenseMatrixMultiplyStuff$implOpMulMatrix_DMD_DMD_eq_DMD$.apply(DenseMatrixOps.scala:40)
at breeze.linalg.NumericOps$class.$times(NumericOps.scala:261)
at breeze.linalg.DenseMatrix.$times(DenseMatrix.scala:54)
...
That is - if you want to multiply two matrices, their dimensions have to match up in the right way. An (n x d) matrix can only be multiplied on the left by a matrix that’s (something x n) or on the right by a matrix that’s (d x something).
There’s something to notice about the errors above. First, they’re data dependent. Multiplying a (3 x 2) by a (2 x 1) matrix wouldn’t have such disastrous effects, but change the inner dimension, and suddenly you have problems. Second, they’re runtime errors. Meaning that, especially in the case of Scala, the code will compile just fine, and we will only encounter this error at runtime. Isn’t this what the compiler is supposed to figure out for us?
Matrix-matrix multiplication is at the very heart of advanced analytics, machine learning, and high performance scientific computing - so it comes up in non-trivial ways at the center of some very complicated algorithms. I can't tell you the number of bugs I've hit because I forgot to transpose or because I assumed that the data was coming in in one shape and in fact it came in in another, and I believe this to be a common experience among programmers of algorithms like this. Heck - even the theoreticians will tell you that half the work in checking their proofs for correctness is making sure that the dimensions line up. (I kid, but only a little.)
A Solution
So how do we avoid this mess and get the type system to check our dimensions for us? If you came to this post hoping to read about Algebraic Data Types and Monads, then I’m sorry, this is not the post you were hoping for. Instead, I’ll propose a simple solution that does the trick without resorting to type system black magic.
The basic observation here is twofold: 1. Usually people work with a relatively small number of dimensions. That is, I might have “N” data points with “D” features in “K” classes - while each of these numbers might itself be big, there are only 3 of them to keep track of, and I kind of know that my data is going to be (N x D) and my model is going to be (D x K), for example. 2. By forcing the user to provide just a little more information to the type system, we can get type safety for linear algebra in a sensible way.
So, now for the code - first, let’s define a Matrix type that contains two type parameters - A and B, which has some basic operations:
import breeze.linalg._
class Matrix[A,B](val mat: DenseMatrix[Double]) {
def *[C](other: Matrix[B,C]): Matrix[A,C] = new Matrix[A,C](mat*other.mat)
def t: Matrix[B,A] = new Matrix[B,A](mat.t)
def +(other: Matrix[A,B]): Matrix[A,B] = new Matrix[A,B](mat + other.mat)
def :*(other: Matrix[A,B]): Matrix[A,B] = new Matrix[A,B](mat :* other.mat)
def *(scalar: Double): Matrix[A,B] = new Matrix[A,B](mat * scalar)
}
Additionally, I’ll create some helper functions - one to read data in from file and the other to invert a square matrix:
object MatrixUtils {
def readcsv[A,B](filename: String) = new Matrix[A,B](csvread(new java.io.File(filename)))
def inverse[A](x: Matrix[A,A]): Matrix[A,A] = new Matrix[A,A](inv(x.mat))
def ident[D](d: Int): Matrix[D,D] = new Matrix[D,D](DenseMatrix.eye(d))
}
So let’s see it in action:
import MatrixUtils._
class N
class D
class K
val x = new Matrix[N,D](DenseMatrix.rand(100,10))
val y = new Matrix[N,K](DenseMatrix.rand(100,2))
val z1 = x * x //Does not compile!
val z2 = x.t * y //Compiles! Returns a Matrix[D,K]
val z3 = x.t * x //Compiles! Returns a Matrix[D,D]
val z4 = x * x.t //Compiles! Returns a Matrix[N,N]
What have we done here? We've first defined some classes to represent our dimensions (which are abstract) - then we've created some matrices and assigned labels to these dimensions. We could just as easily have read x or y from file - provided we knew their intended shapes.
Finally, we tried some basic linear algebra (matrix multiplication!) on this stuff.
So what?
Well, here’s the punchline - we can now implement something reasonably complicated - say, solving an L2-regularized linear system using the normal equations - using the above classes, be sure that my code is actually going to run if I feed it data of the right shape, and as a bonus have the compiler confirm for me that my method actually has the right dimensions.
Suppose I want to find the solution to the following problem
\[ \min_{X} \; \|A X - B\|_2^2 + \lambda \|X\|_2^2 \]
A and B are fixed matrices (say "data" and "labels" in the case of machine learning). One way to do this is to take the derivative of the above (convex) expression and set it to 0. This results in the fairly complicated expression:
\[ X = (A^T A + \lambda I)^{-1} A^T B \]
Or, written with my handy Matrix library:
import MatrixUtils._
def solve[X,Y,Z](a: Matrix[X,Y], b: Matrix[X,Z], lambda: Double) = {
inverse((a.t * a) + ident[Y](a.mat.cols)*lambda) * a.t * b
}
And what does the type signature of solve look like?
solve: [X, Y, Z](a: Matrix[X,Y], b: Matrix[X,Z], lambda: Double)Matrix[Y,Z]
The compiler has figured out that the result of my solve procedure is an (Y x Z) matrix - which in the specific case of my data is (D x K). If you’re familiar with linear regression, this should seem right!
And to actually use it:
val z = solve(x, y, 1e2)
val predictions = x * z
//Meanwhile, this won't compile:
val z2 = solve(x.t, y, 1e2)
And that’s it. I can be sure that z has the right shape, because the compiler tells me so, and I can be sure that if I had screwed up the dimensions somewhere, I’ll be told at compile time, rather than 30 minutes in to a 2-hour, 100 node job on a cluster.
Conclusion
In this post, I’ve described a problem which, I think, plagues a lot of people who do numerically intensive computing, and proposed a simple solution that relies on the type system to help cope with this problem. Of course, this isn’t going to solve all problems - for example, if the solution to some problem is square, and you forget to transpose it, the compiler can’t catch that for you.
I haven’t yet built this idea into a real system, but I’d be interested in hearing if this idea has already been implemented in scientific or mathematical computing systems, or if not, why people think this is a bad idea.
Find me on Twitter and make fun of me if you have comments!
posted on 2015-05-28
Currency Arbitrage in Python
Yesterday, there was a post on Hacker News about solving a currency arbitrage problem in Prolog. The problem was originally posted by the folks over at Priceonomics. Spoiler alert - I solve their puzzle in this post.
I’ve actually solved this puzzle before, on a final in my undergraduate algorithms class. I remember being proud of myself for coming up with the solution, and for remembering a trick from high school precalculus that allowed me to get there. (Side note: I now see this trick used basically all the time.) Of course, this was for an algorithms class, so I never actually implemented it.
The solution is this: structure the currency network as a directed graph, with exchange rates on the edges. Then, find a cycle where multiplying the weights together gives you a number > 1. Two things to note - first, this is like solving a longest path problem. Second, it’s not quite that because longest paths are about additive weight and we’re looking for multiplicative weight.
The trick I mentioned above involves taking the log of the exchange rates and finding an increasing cycle. We have to remember that the sum of the logs of some terms is the same as the log of the product of those terms. To suit the algorithm we use, we take the negative of the log of the weights and look for a negative-weight cycle. It turns out that the Bellman-Ford algorithm will do just this.
Originally, I had plans to redo the author’s Prolog solution with the Bellman-Ford algorithm. While I’m a big fan of declarative programming and the first order logic, I chickened out and decided to redo things in good old python.
The goal here was to write in concise, idiomatic python with enough structure to be readable, and use common python libraries as much as possible. So, I use urllib2 and json to grab and parse data from Priceonomics, and NetworkX to structure the data as a graph and run the Bellman-Ford algorithm.
Of course, it turns out that by default NetworkX raises an exception when a negative cycle is detected, so I had to modify it to return the results of the algorithm when a cycle is detected.
Once I did that, though - I wound up with 50 lines of pretty simple python. Most of the work is actually done in interpreting the results of the Bellman-Ford algorithm.
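For reference, here's a minimal sketch of the core trick with a hand-rolled Bellman-Ford (illustrative only - the rates below are made up, and my actual solution used NetworkX):
from math import log

rates = {
    ('USD', 'EUR'): 0.92, ('EUR', 'JPY'): 162.0, ('JPY', 'USD'): 0.0068,
    ('USD', 'JPY'): 148.0, ('EUR', 'USD'): 1.08,
}
nodes = {c for pair in rates for c in pair}
# edge weight is -log(rate): a cycle whose rates multiply to > 1
# becomes a negative-weight cycle
dist = {c: 0.0 for c in nodes}   # zero everywhere acts like a virtual source
for _ in range(len(nodes) - 1):  # standard Bellman-Ford relaxation
    for (u, v), r in rates.items():
        dist[v] = min(dist[v], dist[u] - log(r))
# one more pass: any edge that still relaxes lies on a negative cycle
for (u, v), r in rates.items():
    if dist[u] - log(r) < dist[v]:
        print("arbitrage cycle through", u, "->", v)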
posted on 2013-06-08
Blogging
To keep up appearances, I’ve decided to start blogging. I’m using Jekyll-Bootstrap and Github Pages for now (that’s right, full hipster). All opinions on this blog are my own and do not necessarily reflect those of my employer.
posted on 2013-01-02
Ext JS 4: The Class Definition Pipeline
Last time, we looked at some of the features of the new class system in Ext JS 4, and explored some of the code that makes it work. Today we’re going to dig a little deeper and look at the class definition pipeline – the framework responsible for creating every class in Ext JS 4.
As I mentioned last time, every class in Ext JS 4 is an instance of Ext.Class. When an Ext.Class is constructed, it hands itself off to a pipeline populated by small, focused processors, each of which handles one part of the class definition process. We ship a number of these processors out of the box – there are processors for handling mixins, setting up configuration functions and handling class extension.
The pipeline is probably best explained with a picture. Think of your class starting its definition journey at the bottom left, working its way up the preprocessors on the left hand side and then down the postprocessors on the right, until finally it reaches the end, where it signals its readiness to a callback function:
Processors
The distinction between preprocessors and postprocessors is that a class is considered ‘ready’ (e.g. can be instantiated) after the preprocessors have all been executed. Postprocessors typically perform functions like aliasing the class name to an xtype or back to a legacy class name – things that don’t affect the class’ behavior.
Each processor runs asynchronously, calling back to the Ext.Class constructor when it is ready – this is what enables us to extend classes that don’t exist on the page yet. The first preprocessor is the Loader, which checks to see if all of the new Class’ dependencies are available. If they are not, the Loader can dynamically load those dependencies before calling back to Ext.Class and allowing the next preprocessor to run. We’ll take another look at the Loader in another post.
After running the Loader, the new Class is set up to inherit from the declared superclass by the Extend preprocessor. The Mixins preprocessor takes care of copying all of the functions from each of our mixins, and the Config preprocessor handles the creation of the 4 config functions we saw last time (e.g. getTitle, setTitle, resetTitle, applyTitle – check out yesterday’s post to see how the Configs processor helps out).
Finally, the Statics preprocessor looks for any static functions that we set up on our new class and makes them available statically on the class. The processors that are run are completely customizable, and it’s easy to add custom processors at any point. Let’s take a look at that Statics preprocessor as an example:
//Each processor is passed three arguments - the class under construction,
//the configuration for that class and a callback function to call when the processor has finished
Ext.Class.registerPreprocessor('statics', function(cls, data, callback) {
if (Ext.isObject(data.statics)) {
var statics = data.statics,
name;
//here we just copy each static function onto the new Class
for (name in statics) {
if (statics.hasOwnProperty(name)) {
cls[name] = statics[name];
}
}
}
delete data.statics;
//Once the processor's work is done, we just call the callback function to kick off the next processor
if (callback) {
callback.call(this, cls, data);
}
});
//Changing the order that the preprocessors are called in is easy too - this is the default
Ext.Class.setDefaultPreprocessors(['extend', 'mixins', 'config', 'statics']);
What happens above is pretty straightforward. We’re registering a preprocessor called ‘statics’ with Ext.Class. The function we provide is called whenever the ‘statics’ preprocessor is invoked, and is passed the new Ext.Class instance, the configuration for that class, and a callback to call when the preprocessor has finished its work.
The actual work that this preprocessor does is trivial – it just looks to see if we declared a ‘statics’ property in our class configuration and if so copies it onto the new class. For example, let’s say we want to create a static getNextId function on a class:
Ext.define('MyClass', {
statics: {
idSeed: 1000,
getNextId: function() {
return this.idSeed++;
}
}
});
Because of the Statics preprocessor, we can now call the function statically on the Class (e.g. without creating an instance of MyClass):
MyClass.getNextId(); //1000
MyClass.getNextId(); //1001
MyClass.getNextId(); //1002
... etc
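Postprocessors hook in the same way. As a sketch - assuming registerPostprocessor mirrors the registerPreprocessor signature shown above, and using a made-up 'legacyName' property - a postprocessor that exposes a finished class under an old name might look like this:
//Hypothetical example: 'legacyName' is not a real Ext JS config, and we assume
//registerPostprocessor takes the same (cls, data, callback) arguments as above
Ext.Class.registerPostprocessor('legacyName', function(cls, data, callback) {
    if (data.legacyName) {
        //walk (or create) the namespace and point the old name at the new class
        var parts = data.legacyName.split('.'),
            last = parts.pop(),
            scope = window,
            i;
        for (i = 0; i < parts.length; i++) {
            scope = scope[parts[i]] = scope[parts[i]] || {};
        }
        scope[last] = cls;
    }
    if (callback) {
        callback.call(this, cls, data);
    }
});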
Finally, let’s come back to that callback at the bottom of the picture above. If we supply one, a callback function is run after all of the processors have run. At this point the new class is completely ready for use in your application. Here we create an instance of MyClass using the callback function, guaranteeing that the dependency on Ext.Window has been honored:
Ext.define('MyClass', {
extend: 'Ext.Window'
}, function() {
//this callback is called when MyClass is ready for use
var cls = new MyClass();
cls.setTitle('Everything is ready');
cls.show();
});
That’s it for today. Next time we’ll look at some of the new features in the part of Ext JS 4 that is closest to my heart – the data package.
27 Responses to Ext JS 4: The Class Definition Pipeline
1. Scott says:
Sorry for being picky: shouldn’t the first call to MyClass.getNextId() return 1001 instead of 1000?
2. Ed Spencer says:
@Scott – try it! You’ll get 1000. Now replace idSeed++ with ++idSeed and it’ll start with 1001. These are known as pre and post increment – http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference is a good reference
3. Stanislav says:
It would be very interesting to see implementation of “Loader” in new Class system of ExtJs, or to read about it one of your articles
4. Les says:
Ed, are there any args or return value in the Ext.define or Ext.require callback?
5. Ed Spencer says:
@Les no arguments given to the require callback (we may add some later). The define callback is given a single argument – a reference to the newly-defined class (e.g. MyClass in the last example above).
The Ext.define callback is also called in the scope of the newly-defined class (e.g. this === MyClass in the final example above)
6. Dmitriy Pashkevich says:
But don’t all these preprocessors slow things down? The loading and initialization routines now seem to be split into much more stages involving a lot of copying properties between objects, calling callbacks… Although it’s clear that the new system is more flexible and has less requirements for the end developer.
7. Les says:
Ed, can Ext.require load a resource other than an Ext class, e.g. a JavaScript or an html file or an Ext template? Is there a way to define a “templated” class that would depend on an external template?
8. Ed Spencer says:
@Dmitriy there are a few more function invocations but the enhanced flexibility is well worth it
@Les Only classes at the moment, but the separate Loader can load any file. I’ll post something about that soon, either here or on the sencha.com blog
9. Les says:
Ed, how a component class would dynamically load plugins?
10. Les says:
>>> Ed, how a component class would dynamically load plugins?
I believe I can answer my own question:) Class is ready before the plugins that it can use are loaded. This because plugins are needed per instance of the class. So, I’d need to require the plugins before an instance of the class is created.
11. Rafael says:
Ed,
Will the object´s superClass have an alias in Ext4? I don´t know if I´m just too lazy but something like “this.super.initComponent();” would be so much better than “Ext.Panel.superclass.initComponent.call(this)”
12. Ed Spencer says:
@Rafael yea, just call this.parent(). It’s the same as doing My.NewClass.superclass.prototype.theMethodName() – much better
13. Blacktiger says:
Can a preprocessor or a postprocessor be used to automatically modify the created object?
14. Ed Spencer says:
@Blacktiger which object are you talking about? The processors can modify the Ext.Class instance which represents the new class. If you want to have some code that automatically modifies every created instance of that class, you can do it in the class’ constructor
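For example, a minimal sketch (MyStampedClass and createdAt are made up for illustration):
Ext.define('MyStampedClass', {
    constructor: function(config) {
        //runs for every new instance - stamp each object as it's created
        this.createdAt = new Date();
        this.callParent(arguments);
    }
});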
15. Blacktiger says:
Sorry, dumb question. I thought about it afterward and realized you can do most anything either by mucking with the class or adding a constructor.
16. Pingback: extjstutorial.org
17. halcwb says:
This is very nice! Just a thought, would building an 'Aspect' preprocessor be something worthwhile to consider? This would allow a sort of aspect-oriented programming capability to be added to classes.
18. Ed Spencer says:
@halcwb give it a go! It’s really easy to write your own processors and hook them in
19. yoorek says:
last example gives me ‘TypeError: Expecting a function in instanceof check, but got [object Object]’ , why? :
Ext.define(‘MyClass’, {
extend: ‘Ext.Window’
}, function() {
//this callback is called when MyClass is ready for use
var cls = new MyClass();
cls.setTitle(‘Everything is ready’);
cls.show();
});
20. Yiyu Jia says:
I do not find a default preprocessor in Ext JS 4.0.2 source code (I looked in Class.js).
Also, I found the way Ext JS 4 processing preprocessors and postprocessors is strange. I think there are bugs in it. http://www.sencha.com/forum/showthread.php?143668-when-configuration-data-for-Ext.ClassManager.create-overwrite-postprocessor&p=636782#post636782 .
21. Yiyu Jia says:
just a quick update. I saw the “loader” preprocessor is registered in loader.js
22. xun says:
Honestly I have no idea why Ext made such drastic changes that broke everything.
this.parent.methodname() will not work.
Yes, this.callParent(arguments) will work, however you will have to keep your method name the same.
What should be the alternative?
23. Great post Ed!
I would like to add a piece of info that can save some headaches for people reading this topic, because the documentation regarding this matter is lacking some info. Remember that in ExtJS v4.1.3 (haven't tried others) the preprocessors (Ext.Class.registerPreprocessor) need to:
**have the name of an existing config; let's say for a store class, "proxy" and "model" are valid names to use, therefore Ext.Class.registerPreprocessor("proxy", function() { //do stuff });
**take a function with 4 arguments, not 3 as stated in the doc. Therefore one should use the function as: Ext.Class.registerPreprocessor("proxy", function(cls, data, ***hooks***, callback) { //do stuff });
Hooks is the 3rd arg and callback will be the 4th.
Hope this helps someone that lands in this page for the same purpose as me.
Cheers!
Tiago Teixeira
24. rijkvanwel says:
Picture’s broken.
25. edspencer says:
@rijkvanwel unfortunately I lost a bunch of images when I moved my blog a few months back. If I find another copy I’ll restore it
• test799 says:
Here’s the image: http://web.archive.org/web/20130821165451im_/https://edspencer.net/wp-content/uploads/2011/01/Processors.png
I do have a question for you though. I have always been confused by the config preprocesser in Ext Js. In fact I never use it. It seems to work fine in simple examples not extending anything, but if you want to use it on an Ext class it behaves oddly. For instance, if you wanted it to match Sencha Touch by wrapping the fields property, it would not play nice and your model won’t have any fields. Should it never be used on non-user declared properties? It is often required in Sencha Touch. I do hope it gets changed to at least be consistent between the frameworks in 5.x.
• edspencer says:
Good thinking 🙂 Updated the post with that image
Agreed, that doesn’t make any sense. I wrote it up here mainly so I could figure out how it works! But yea, none of it should exist (in fact I believe they’re heading back to an Ext.extend-like single function for performance and other reasons).
By complete chance I’m actually writing a new framework at my new company, where I’ve learned that having a single architect design the whole thing results in a far more consistent whole. We live and learn I guess
merged
author paulson
Sat Aug 04 00:19:23 2018 +0100 (11 months ago)
changeset 68720 1e1818612124
parent 68718 ce18a3924864
parent 68719 8aedca31957d
child 68721 53ad5c01be3f
merged
1.1 --- a/src/HOL/List.thy Fri Aug 03 21:38:54 2018 +0200
1.2 +++ b/src/HOL/List.thy Sat Aug 04 00:19:23 2018 +0100
1.3 @@ -1242,7 +1242,7 @@
1.4
1.5 lemma rev_induct [case_names Nil snoc]:
1.6 "[| P []; !!x xs. P xs ==> P (xs @ [x]) |] ==> P xs"
1.7 -apply(simplesubst rev_rev_ident[symmetric])
1.8 +apply(subst rev_rev_ident[symmetric])
1.9 apply(rule_tac list = "rev xs" in list.induct, simp_all)
1.10 done
1.11
1.12 @@ -1489,8 +1489,7 @@
1.13 case Nil thus ?case by simp
1.14 next
1.15 case (Cons x xs) thus ?case
1.16 - apply (auto split:if_split_asm)
1.17 - using length_filter_le[of P xs] by arith
1.18 + using Suc_le_eq by fastforce
1.19 qed
1.20
1.21 lemma length_filter_conv_card:
1.22 @@ -1558,17 +1557,17 @@
1.23 lemma filter_eq_ConsD:
1.24 "filter P ys = x#xs \<Longrightarrow>
1.25 \<exists>us vs. ys = us @ x # vs \<and> (\<forall>u\<in>set us. \<not> P u) \<and> P x \<and> xs = filter P vs"
1.26 -by(rule Cons_eq_filterD) simp
1.27 + by(rule Cons_eq_filterD) simp
1.28
1.29 lemma filter_eq_Cons_iff:
1.30 "(filter P ys = x#xs) =
1.31 (\<exists>us vs. ys = us @ x # vs \<and> (\<forall>u\<in>set us. \<not> P u) \<and> P x \<and> xs = filter P vs)"
1.32 -by(auto dest:filter_eq_ConsD)
1.33 + by(auto dest:filter_eq_ConsD)
1.34
1.35 lemma Cons_eq_filter_iff:
1.36 "(x#xs = filter P ys) =
1.37 (\<exists>us vs. ys = us @ x # vs \<and> (\<forall>u\<in>set us. \<not> P u) \<and> P x \<and> xs = filter P vs)"
1.38 -by(auto dest:Cons_eq_filterD)
1.39 + by(auto dest:Cons_eq_filterD)
1.40
1.41 lemma inj_on_filter_key_eq:
1.42 assumes "inj_on f (insert y (set xs))"
1.43 @@ -1583,16 +1582,16 @@
1.44 subsubsection \<open>List partitioning\<close>
1.45
1.46 primrec partition :: "('a \<Rightarrow> bool) \<Rightarrow>'a list \<Rightarrow> 'a list \<times> 'a list" where
1.47 -"partition P [] = ([], [])" |
1.48 -"partition P (x # xs) =
1.49 + "partition P [] = ([], [])" |
1.50 + "partition P (x # xs) =
1.51 (let (yes, no) = partition P xs
1.52 in if P x then (x # yes, no) else (yes, x # no))"
1.53
1.54 lemma partition_filter1: "fst (partition P xs) = filter P xs"
1.55 -by (induct xs) (auto simp add: Let_def split_def)
1.56 + by (induct xs) (auto simp add: Let_def split_def)
1.57
1.58 lemma partition_filter2: "snd (partition P xs) = filter (Not \<circ> P) xs"
1.59 -by (induct xs) (auto simp add: Let_def split_def)
1.60 + by (induct xs) (auto simp add: Let_def split_def)
1.61
1.62 lemma partition_P:
1.63 assumes "partition P xs = (yes, no)"
1.64 @@ -1614,8 +1613,8 @@
1.65
1.66 lemma partition_filter_conv[simp]:
1.67 "partition f xs = (filter f xs,filter (Not \<circ> f) xs)"
1.68 -unfolding partition_filter2[symmetric]
1.69 -unfolding partition_filter1[symmetric] by simp
1.70 + unfolding partition_filter2[symmetric]
1.71 + unfolding partition_filter1[symmetric] by simp
1.72
1.73 declare partition.simps[simp del]
1.74
1.75 @@ -1623,28 +1622,28 @@
1.76 subsubsection \<open>@{const concat}\<close>
1.77
1.78 lemma concat_append [simp]: "concat (xs @ ys) = concat xs @ concat ys"
1.79 -by (induct xs) auto
1.80 + by (induct xs) auto
1.81
1.82 lemma concat_eq_Nil_conv [simp]: "(concat xss = []) = (\<forall>xs \<in> set xss. xs = [])"
1.83 -by (induct xss) auto
1.84 + by (induct xss) auto
1.85
1.86 lemma Nil_eq_concat_conv [simp]: "([] = concat xss) = (\<forall>xs \<in> set xss. xs = [])"
1.87 -by (induct xss) auto
1.88 + by (induct xss) auto
1.89
1.90 lemma set_concat [simp]: "set (concat xs) = (UN x:set xs. set x)"
1.91 -by (induct xs) auto
1.92 + by (induct xs) auto
1.93
1.94 lemma concat_map_singleton[simp]: "concat(map (%x. [f x]) xs) = map f xs"
1.95 -by (induct xs) auto
1.96 + by (induct xs) auto
1.97
1.98 lemma map_concat: "map f (concat xs) = concat (map (map f) xs)"
1.99 -by (induct xs) auto
1.100 + by (induct xs) auto
1.101
1.102 lemma filter_concat: "filter p (concat xs) = concat (map (filter p) xs)"
1.103 -by (induct xs) auto
1.104 + by (induct xs) auto
1.105
1.106 lemma rev_concat: "rev (concat xs) = concat (map rev (rev xs))"
1.107 -by (induct xs) auto
1.108 + by (induct xs) auto
1.109
1.110 lemma concat_eq_concat_iff: "\<forall>(x, y) \<in> set (zip xs ys). length x = length y ==> length xs = length ys ==> (concat xs = concat ys) = (xs = ys)"
1.111 proof (induct xs arbitrary: ys)
1.112 @@ -1653,21 +1652,21 @@
1.113 qed (auto)
1.114
1.115 lemma concat_injective: "concat xs = concat ys ==> length xs = length ys ==> \<forall>(x, y) \<in> set (zip xs ys). length x = length y ==> xs = ys"
1.116 -by (simp add: concat_eq_concat_iff)
1.117 + by (simp add: concat_eq_concat_iff)
1.118
1.119
1.120 subsubsection \<open>@{const nth}\<close>
1.121
1.122 lemma nth_Cons_0 [simp, code]: "(x # xs)!0 = x"
1.123 -by auto
1.124 + by auto
1.125
1.126 lemma nth_Cons_Suc [simp, code]: "(x # xs)!(Suc n) = xs!n"
1.127 -by auto
1.128 + by auto
1.129
1.130 declare nth.simps [simp del]
1.131
1.132 lemma nth_Cons_pos[simp]: "0 < n \<Longrightarrow> (x#xs) ! n = xs ! (n - 1)"
1.133 -by(auto simp: Nat.gr0_conv_Suc)
1.134 + by(auto simp: Nat.gr0_conv_Suc)
1.135
1.136 lemma nth_append:
1.137 "(xs @ ys)!n = (if n < length xs then xs!n else ys!(n - length xs))"
1.138 @@ -1678,10 +1677,10 @@
1.139 qed simp
1.140
1.141 lemma nth_append_length [simp]: "(xs @ x # ys) ! length xs = x"
1.142 -by (induct xs) auto
1.143 + by (induct xs) auto
1.144
1.145 lemma nth_append_length_plus[simp]: "(xs @ ys) ! (length xs + n) = ys ! n"
1.146 -by (induct xs) auto
1.147 + by (induct xs) auto
1.148
1.149 lemma nth_map [simp]: "n < length xs ==> (map f xs)!n = f(xs!n)"
1.150 proof (induct xs arbitrary: n)
1.151 @@ -1691,10 +1690,10 @@
1.152 qed simp
1.153
1.154 lemma nth_tl: "n < length (tl xs) \<Longrightarrow> tl xs ! n = xs ! Suc n"
1.155 -by (induction xs) auto
1.156 + by (induction xs) auto
1.157
1.158 lemma hd_conv_nth: "xs \<noteq> [] \<Longrightarrow> hd xs = xs!0"
1.159 -by(cases xs) simp_all
1.160 + by(cases xs) simp_all
1.161
1.162
1.163 lemma list_eq_iff_nth_eq:
1.164 @@ -1724,7 +1723,7 @@
1.165 qed simp
1.166
1.167 lemma in_set_conv_nth: "(x \<in> set xs) = (\<exists>i < length xs. xs!i = x)"
1.168 -by(auto simp:set_conv_nth)
1.169 + by(auto simp:set_conv_nth)
1.170
1.171 lemma nth_equal_first_eq:
1.172 assumes "x \<notin> set xs"
1.173 @@ -1756,18 +1755,18 @@
1.174 qed
1.175
1.176 lemma list_ball_nth: "\<lbrakk>n < length xs; \<forall>x \<in> set xs. P x\<rbrakk> \<Longrightarrow> P(xs!n)"
1.177 -by (auto simp add: set_conv_nth)
1.178 + by (auto simp add: set_conv_nth)
1.179
1.180 lemma nth_mem [simp]: "n < length xs \<Longrightarrow> xs!n \<in> set xs"
1.181 -by (auto simp add: set_conv_nth)
1.182 + by (auto simp add: set_conv_nth)
1.183
1.184 lemma all_nth_imp_all_set:
1.185 "\<lbrakk>\<forall>i < length xs. P(xs!i); x \<in> set xs\<rbrakk> \<Longrightarrow> P x"
1.186 -by (auto simp add: set_conv_nth)
1.187 + by (auto simp add: set_conv_nth)
1.188
1.189 lemma all_set_conv_all_nth:
1.190 "(\<forall>x \<in> set xs. P x) = (\<forall>i. i < length xs \<longrightarrow> P (xs ! i))"
1.191 -by (auto simp add: set_conv_nth)
1.192 + by (auto simp add: set_conv_nth)
1.193
1.194 lemma rev_nth:
1.195 "n < size xs \<Longrightarrow> rev xs ! n = xs ! (length xs - Suc n)"
1.196 @@ -1811,20 +1810,20 @@
1.197 subsubsection \<open>@{const list_update}\<close>
1.198
1.199 lemma length_list_update [simp]: "length(xs[i:=x]) = length xs"
1.200 -by (induct xs arbitrary: i) (auto split: nat.split)
1.201 + by (induct xs arbitrary: i) (auto split: nat.split)
1.202
1.203 lemma nth_list_update:
1.204 -"i < length xs==> (xs[i:=x])!j = (if i = j then x else xs!j)"
1.205 -by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split)
1.206 + "i < length xs==> (xs[i:=x])!j = (if i = j then x else xs!j)"
1.207 + by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split)
1.208
1.209 lemma nth_list_update_eq [simp]: "i < length xs ==> (xs[i:=x])!i = x"
1.210 -by (simp add: nth_list_update)
1.211 + by (simp add: nth_list_update)
1.212
1.213 lemma nth_list_update_neq [simp]: "i \<noteq> j ==> xs[i:=x]!j = xs!j"
1.214 -by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split)
1.215 + by (induct xs arbitrary: i j) (auto simp add: nth_Cons split: nat.split)
1.216
1.217 lemma list_update_id[simp]: "xs[i := xs!i] = xs"
1.218 -by (induct xs arbitrary: i) (simp_all split:nat.splits)
1.219 + by (induct xs arbitrary: i) (simp_all split:nat.splits)
1.220
1.221 lemma list_update_beyond[simp]: "length xs \<le> i \<Longrightarrow> xs[i:=x] = xs"
1.222 proof (induct xs arbitrary: i)
1.223 @@ -1834,44 +1833,44 @@
1.224 qed simp
1.225
1.226 lemma list_update_nonempty[simp]: "xs[k:=x] = [] \<longleftrightarrow> xs=[]"
1.227 -by (simp only: length_0_conv[symmetric] length_list_update)
1.228 + by (simp only: length_0_conv[symmetric] length_list_update)
1.229
1.230 lemma list_update_same_conv:
1.231 "i < length xs ==> (xs[i := x] = xs) = (xs!i = x)"
1.232 -by (induct xs arbitrary: i) (auto split: nat.split)
1.233 + by (induct xs arbitrary: i) (auto split: nat.split)
1.234
1.235 lemma list_update_append1:
1.236 "i < size xs \<Longrightarrow> (xs @ ys)[i:=x] = xs[i:=x] @ ys"
1.237 -by (induct xs arbitrary: i)(auto split:nat.split)
1.238 + by (induct xs arbitrary: i)(auto split:nat.split)
1.239
1.240 lemma list_update_append:
1.241 "(xs @ ys) [n:= x] =
1.242 (if n < length xs then xs[n:= x] @ ys else xs @ (ys [n-length xs:= x]))"
1.243 -by (induct xs arbitrary: n) (auto split:nat.splits)
1.244 + by (induct xs arbitrary: n) (auto split:nat.splits)
1.245
1.246 lemma list_update_length [simp]:
1.247 "(xs @ x # ys)[length xs := y] = (xs @ y # ys)"
1.248 -by (induct xs, auto)
1.249 + by (induct xs, auto)
1.250
1.251 lemma map_update: "map f (xs[k:= y]) = (map f xs)[k := f y]"
1.252 -by(induct xs arbitrary: k)(auto split:nat.splits)
1.253 + by(induct xs arbitrary: k)(auto split:nat.splits)
1.254
1.255 lemma rev_update:
1.256 "k < length xs \<Longrightarrow> rev (xs[k:= y]) = (rev xs)[length xs - k - 1 := y]"
1.257 -by (induct xs arbitrary: k) (auto simp: list_update_append split:nat.splits)
1.258 + by (induct xs arbitrary: k) (auto simp: list_update_append split:nat.splits)
1.259
1.260 lemma update_zip:
1.261 "(zip xs ys)[i:=xy] = zip (xs[i:=fst xy]) (ys[i:=snd xy])"
1.262 -by (induct ys arbitrary: i xy xs) (auto, case_tac xs, auto split: nat.split)
1.263 + by (induct ys arbitrary: i xy xs) (auto, case_tac xs, auto split: nat.split)
1.264
1.265 lemma set_update_subset_insert: "set(xs[i:=x]) \<le> insert x (set xs)"
1.266 -by (induct xs arbitrary: i) (auto split: nat.split)
1.267 + by (induct xs arbitrary: i) (auto split: nat.split)
1.268
1.269 lemma set_update_subsetI: "\<lbrakk>set xs \<subseteq> A; x \<in> A\<rbrakk> \<Longrightarrow> set(xs[i := x]) \<subseteq> A"
1.270 -by (blast dest!: set_update_subset_insert [THEN subsetD])
1.271 + by (blast dest!: set_update_subset_insert [THEN subsetD])
1.272
1.273 lemma set_update_memI: "n < length xs \<Longrightarrow> x \<in> set (xs[n := x])"
1.274 -by (induct xs arbitrary: n) (auto split:nat.splits)
1.275 + by (induct xs arbitrary: n) (auto split:nat.splits)
1.276
1.277 lemma list_update_overwrite[simp]:
1.278 "xs [i := x, i := y] = xs [i := y]"
1.279 @@ -1885,67 +1884,67 @@
1.280 "[][i := y] = []"
1.281 "(x # xs)[0 := y] = y # xs"
1.282 "(x # xs)[Suc i := y] = x # xs[i := y]"
1.283 -by simp_all
1.284 + by simp_all
1.285
1.286
1.287 subsubsection \<open>@{const last} and @{const butlast}\<close>
1.288
1.289 lemma last_snoc [simp]: "last (xs @ [x]) = x"
1.290 -by (induct xs) auto
1.291 + by (induct xs) auto
1.292
1.293 lemma butlast_snoc [simp]: "butlast (xs @ [x]) = xs"
1.294 -by (induct xs) auto
1.295 + by (induct xs) auto
1.296
1.297 lemma last_ConsL: "xs = [] \<Longrightarrow> last(x#xs) = x"
1.298 -by simp
1.299 + by simp
1.300
1.301 lemma last_ConsR: "xs \<noteq> [] \<Longrightarrow> last(x#xs) = last xs"
1.302 -by simp
1.303 + by simp
1.304
1.305 lemma last_append: "last(xs @ ys) = (if ys = [] then last xs else last ys)"
1.306 -by (induct xs) (auto)
1.307 + by (induct xs) (auto)
1.308
1.309 lemma last_appendL[simp]: "ys = [] \<Longrightarrow> last(xs @ ys) = last xs"
1.310 -by(simp add:last_append)
1.311 + by(simp add:last_append)
1.312
1.313 lemma last_appendR[simp]: "ys \<noteq> [] \<Longrightarrow> last(xs @ ys) = last ys"
1.314 -by(simp add:last_append)
1.315 + by(simp add:last_append)
1.316
1.317 lemma last_tl: "xs = [] \<or> tl xs \<noteq> [] \<Longrightarrow>last (tl xs) = last xs"
1.318 -by (induct xs) simp_all
1.319 + by (induct xs) simp_all
1.320
1.321 lemma butlast_tl: "butlast (tl xs) = tl (butlast xs)"
1.322 -by (induct xs) simp_all
1.323 + by (induct xs) simp_all
1.324
1.325 lemma hd_rev: "xs \<noteq> [] \<Longrightarrow> hd(rev xs) = last xs"
1.326 -by(rule rev_exhaust[of xs]) simp_all
1.327 + by(rule rev_exhaust[of xs]) simp_all
1.328
1.329 lemma last_rev: "xs \<noteq> [] \<Longrightarrow> last(rev xs) = hd xs"
1.330 -by(cases xs) simp_all
1.331 + by(cases xs) simp_all
1.332
1.333 lemma last_in_set[simp]: "as \<noteq> [] \<Longrightarrow> last as \<in> set as"
1.334 -by (induct as) auto
1.335 + by (induct as) auto
1.336
1.337 lemma length_butlast [simp]: "length (butlast xs) = length xs - 1"
1.338 -by (induct xs rule: rev_induct) auto
1.339 + by (induct xs rule: rev_induct) auto
1.340
1.341 lemma butlast_append:
1.342 "butlast (xs @ ys) = (if ys = [] then butlast xs else xs @ butlast ys)"
1.343 -by (induct xs arbitrary: ys) auto
1.344 + by (induct xs arbitrary: ys) auto
1.345
1.346 lemma append_butlast_last_id [simp]:
1.347 "xs \<noteq> [] \<Longrightarrow> butlast xs @ [last xs] = xs"
1.348 -by (induct xs) auto
1.349 + by (induct xs) auto
1.350
1.351 lemma in_set_butlastD: "x \<in> set (butlast xs) \<Longrightarrow> x \<in> set xs"
1.352 -by (induct xs) (auto split: if_split_asm)
1.353 + by (induct xs) (auto split: if_split_asm)
1.354
1.355 lemma in_set_butlast_appendI:
1.356 "x \<in> set (butlast xs) \<or> x \<in> set (butlast ys) \<Longrightarrow> x \<in> set (butlast (xs @ ys))"
1.357 -by (auto dest: in_set_butlastD simp add: butlast_append)
1.358 + by (auto dest: in_set_butlastD simp add: butlast_append)
1.359
1.360 lemma last_drop[simp]: "n < length xs \<Longrightarrow> last (drop n xs) = last xs"
1.361 -by (induct xs arbitrary: n)(auto split:nat.split)
1.362 + by (induct xs arbitrary: n)(auto split:nat.split)
1.363
1.364 lemma nth_butlast:
1.365 assumes "n < length (butlast xs)" shows "butlast xs ! n = xs ! n"
1.366 @@ -1957,82 +1956,82 @@
1.367 qed simp
1.368
1.369 lemma last_conv_nth: "xs\<noteq>[] \<Longrightarrow> last xs = xs!(length xs - 1)"
1.370 -by(induct xs)(auto simp:neq_Nil_conv)
1.371 + by(induct xs)(auto simp:neq_Nil_conv)
1.372
1.373 lemma butlast_conv_take: "butlast xs = take (length xs - 1) xs"
1.374 -by (induction xs rule: induct_list012) simp_all
1.375 + by (induction xs rule: induct_list012) simp_all
1.376
1.377 lemma last_list_update:
1.378 "xs \<noteq> [] \<Longrightarrow> last(xs[k:=x]) = (if k = size xs - 1 then x else last xs)"
1.379 -by (auto simp: last_conv_nth)
1.380 + by (auto simp: last_conv_nth)
1.381
1.382 lemma butlast_list_update:
1.383 "butlast(xs[k:=x]) =
1.384 (if k = size xs - 1 then butlast xs else (butlast xs)[k:=x])"
1.385 -by(cases xs rule:rev_cases)(auto simp: list_update_append split: nat.splits)
1.386 + by(cases xs rule:rev_cases)(auto simp: list_update_append split: nat.splits)
1.387
1.388 lemma last_map: "xs \<noteq> [] \<Longrightarrow> last (map f xs) = f (last xs)"
1.389 -by (cases xs rule: rev_cases) simp_all
1.390 + by (cases xs rule: rev_cases) simp_all
1.391
1.392 lemma map_butlast: "map f (butlast xs) = butlast (map f xs)"
1.393 -by (induct xs) simp_all
1.394 + by (induct xs) simp_all
1.395
1.396 lemma snoc_eq_iff_butlast:
1.397 "xs @ [x] = ys \<longleftrightarrow> (ys \<noteq> [] \<and> butlast ys = xs \<and> last ys = x)"
1.398 -by fastforce
1.399 + by fastforce
1.400
1.401 corollary longest_common_suffix:
1.402 "\<exists>ss xs' ys'. xs = xs' @ ss \<and> ys = ys' @ ss
1.403 \<and> (xs' = [] \<or> ys' = [] \<or> last xs' \<noteq> last ys')"
1.404 -using longest_common_prefix[of "rev xs" "rev ys"]
1.405 -unfolding rev_swap rev_append by (metis last_rev rev_is_Nil_conv)
1.406 + using longest_common_prefix[of "rev xs" "rev ys"]
1.407 + unfolding rev_swap rev_append by (metis last_rev rev_is_Nil_conv)
1.408
1.409
1.410 subsubsection \<open>@{const take} and @{const drop}\<close>
1.411
1.412 lemma take_0: "take 0 xs = []"
1.413 -by (induct xs) auto
1.414 + by (induct xs) auto
1.415
1.416 lemma drop_0: "drop 0 xs = xs"
1.417 -by (induct xs) auto
1.418 + by (induct xs) auto
1.419
1.420 lemma take0[simp]: "take 0 = (\<lambda>xs. [])"
1.421 -by(rule ext) (rule take_0)
1.422 + by(rule ext) (rule take_0)
1.423
1.424 lemma drop0[simp]: "drop 0 = (\<lambda>x. x)"
1.425 -by(rule ext) (rule drop_0)
1.426 + by(rule ext) (rule drop_0)
1.427
1.428 lemma take_Suc_Cons [simp]: "take (Suc n) (x # xs) = x # take n xs"
1.429 -by simp
1.430 + by simp
1.431
1.432 lemma drop_Suc_Cons [simp]: "drop (Suc n) (x # xs) = drop n xs"
1.433 -by simp
1.434 + by simp
1.435
1.436 declare take_Cons [simp del] and drop_Cons [simp del]
1.437
1.438 lemma take_Suc: "xs \<noteq> [] \<Longrightarrow> take (Suc n) xs = hd xs # take n (tl xs)"
1.439 -by(clarsimp simp add:neq_Nil_conv)
1.440 + by(clarsimp simp add:neq_Nil_conv)
1.441
1.442 lemma drop_Suc: "drop (Suc n) xs = drop n (tl xs)"
1.443 -by(cases xs, simp_all)
1.444 + by(cases xs, simp_all)
1.445
1.446 lemma hd_take[simp]: "j > 0 \<Longrightarrow> hd (take j xs) = hd xs"
1.447 -by (metis gr0_conv_Suc list.sel(1) take.simps(1) take_Suc)
1.448 + by (metis gr0_conv_Suc list.sel(1) take.simps(1) take_Suc)
1.449
1.450 lemma take_tl: "take n (tl xs) = tl (take (Suc n) xs)"
1.451 -by (induct xs arbitrary: n) simp_all
1.452 + by (induct xs arbitrary: n) simp_all
1.453
1.454 lemma drop_tl: "drop n (tl xs) = tl(drop n xs)"
1.455 -by(induct xs arbitrary: n, simp_all add:drop_Cons drop_Suc split:nat.split)
1.456 + by(induct xs arbitrary: n, simp_all add:drop_Cons drop_Suc split:nat.split)
1.457
1.458 lemma tl_take: "tl (take n xs) = take (n - 1) (tl xs)"
1.459 -by (cases n, simp, cases xs, auto)
1.460 + by (cases n, simp, cases xs, auto)
1.461
1.462 lemma tl_drop: "tl (drop n xs) = drop n (tl xs)"
1.463 -by (simp only: drop_tl)
1.464 + by (simp only: drop_tl)
1.465
1.466 lemma nth_via_drop: "drop n xs = y#ys \<Longrightarrow> xs!n = y"
1.467 -by (induct xs arbitrary: n, simp)(auto simp: drop_Cons nth_Cons split: nat.splits)
1.468 + by (induct xs arbitrary: n, simp)(auto simp: drop_Cons nth_Cons split: nat.splits)
1.469
1.470 lemma take_Suc_conv_app_nth:
1.471 "i < length xs \<Longrightarrow> take (Suc i) xs = take i xs @ [xs!i]"
1.472 @@ -2049,24 +2048,24 @@
1.473 qed simp
1.474
1.475 lemma length_take [simp]: "length (take n xs) = min (length xs) n"
1.476 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.477 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.478
1.479 lemma length_drop [simp]: "length (drop n xs) = (length xs - n)"
1.480 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.481 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.482
1.483 lemma take_all [simp]: "length xs \<le> n ==> take n xs = xs"
1.484 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.485 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.486
1.487 lemma drop_all [simp]: "length xs \<le> n ==> drop n xs = []"
1.488 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.489 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.490
1.491 lemma take_append [simp]:
1.492 "take n (xs @ ys) = (take n xs @ take (n - length xs) ys)"
1.493 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.494 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.495
1.496 lemma drop_append [simp]:
1.497 "drop n (xs @ ys) = drop n xs @ drop (n - length xs) ys"
1.498 -by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.499 + by (induct n arbitrary: xs) (auto, case_tac xs, auto)
1.500
1.501 lemma take_take [simp]: "take n (take m xs) = take (min n m) xs"
1.502 proof (induct m arbitrary: xs n)
1.503 @@ -2087,7 +2086,7 @@
1.504 qed auto
1.505
1.506 lemma drop_take: "drop n (take m xs) = take (m-n) (drop n xs)"
1.507 -by(induct xs arbitrary: m n)(auto simp: take_Cons drop_Cons split: nat.split)
1.508 + by(induct xs arbitrary: m n)(auto simp: take_Cons drop_Cons split: nat.split)
1.509
1.510 lemma append_take_drop_id [simp]: "take n xs @ drop n xs = xs"
1.511 proof (induct n arbitrary: xs)
1.512 @@ -2096,10 +2095,10 @@
1.513 qed auto
1.514
1.515 lemma take_eq_Nil[simp]: "(take n xs = []) = (n = 0 \<or> xs = [])"
1.516 -by(induct xs arbitrary: n)(auto simp: take_Cons split:nat.split)
1.517 + by(induct xs arbitrary: n)(auto simp: take_Cons split:nat.split)
1.518
1.519 lemma drop_eq_Nil[simp]: "(drop n xs = []) = (length xs \<le> n)"
1.520 -by (induct xs arbitrary: n) (auto simp: drop_Cons split:nat.split)
1.521 + by (induct xs arbitrary: n) (auto simp: drop_Cons split:nat.split)
1.522
1.523 lemma take_map: "take n (map f xs) = map f (take n xs)"
1.524 proof (induct n arbitrary: xs)
1.525 @@ -2146,19 +2145,19 @@
1.526
1.527 lemma butlast_take:
1.528 "n \<le> length xs ==> butlast (take n xs) = take (n - 1) xs"
1.529 -by (simp add: butlast_conv_take min.absorb1 min.absorb2)
1.530 + by (simp add: butlast_conv_take min.absorb1 min.absorb2)
1.531
1.532 lemma butlast_drop: "butlast (drop n xs) = drop n (butlast xs)"
1.533 -by (simp add: butlast_conv_take drop_take ac_simps)
1.534 + by (simp add: butlast_conv_take drop_take ac_simps)
1.535
1.536 lemma take_butlast: "n < length xs ==> take n (butlast xs) = take n xs"
1.537 -by (simp add: butlast_conv_take min.absorb1)
1.538 + by (simp add: butlast_conv_take min.absorb1)
1.539
1.540 lemma drop_butlast: "drop n (butlast xs) = butlast (drop n xs)"
1.541 -by (simp add: butlast_conv_take drop_take ac_simps)
1.542 + by (simp add: butlast_conv_take drop_take ac_simps)
1.543
1.544 lemma hd_drop_conv_nth: "n < length xs \<Longrightarrow> hd(drop n xs) = xs!n"
1.545 -by(simp add: hd_conv_nth)
1.546 + by(simp add: hd_conv_nth)
1.547
1.548 lemma set_take_subset_set_take:
1.549 "m \<le> n \<Longrightarrow> set(take m xs) \<le> set(take n xs)"
1.550 @@ -2168,10 +2167,10 @@
1.551 qed simp
1.552
1.553 lemma set_take_subset: "set(take n xs) \<subseteq> set xs"
1.554 -by(induct xs arbitrary: n)(auto simp:take_Cons split:nat.split)
1.555 + by(induct xs arbitrary: n)(auto simp:take_Cons split:nat.split)
1.556
1.557 lemma set_drop_subset: "set(drop n xs) \<subseteq> set xs"
1.558 -by(induct xs arbitrary: n)(auto simp:drop_Cons split:nat.split)
1.559 + by(induct xs arbitrary: n)(auto simp:drop_Cons split:nat.split)
1.560
1.561 lemma set_drop_subset_set_drop:
1.562 "m \<ge> n \<Longrightarrow> set(drop m xs) \<le> set(drop n xs)"
1.563 @@ -2182,10 +2181,10 @@
1.564 qed simp
1.565
1.566 lemma in_set_takeD: "x \<in> set(take n xs) \<Longrightarrow> x \<in> set xs"
1.567 -using set_take_subset by fast
1.568 + using set_take_subset by fast
1.569
1.570 lemma in_set_dropD: "x \<in> set(drop n xs) \<Longrightarrow> x \<in> set xs"
1.571 -using set_drop_subset by fast
1.572 + using set_drop_subset by fast
1.573
1.574 lemma append_eq_conv_conj:
1.575 "(xs @ ys = zs) = (xs = take (length xs) zs \<and> ys = drop (length xs) zs)"
1.576 @@ -2226,10 +2225,10 @@
1.577 qed
1.578
1.579 lemma take_update_cancel[simp]: "n \<le> m \<Longrightarrow> take n (xs[m := y]) = take n xs"
1.580 -by(simp add: list_eq_iff_nth_eq)
1.581 + by(simp add: list_eq_iff_nth_eq)
1.582
1.583 lemma drop_update_cancel[simp]: "n < m \<Longrightarrow> drop m (xs[n := x]) = drop m xs"
1.584 -by(simp add: list_eq_iff_nth_eq)
1.585 + by(simp add: list_eq_iff_nth_eq)
1.586
1.587 lemma upd_conv_take_nth_drop:
1.588 "i < length xs \<Longrightarrow> xs[i:=a] = take i xs @ a # drop (Suc i) xs"
1.589 @@ -2258,27 +2257,27 @@
1.590 qed auto
1.591
1.592 lemma nth_image: "l \<le> size xs \<Longrightarrow> nth xs ` {0..<l} = set(take l xs)"
1.593 -by(auto simp: set_conv_nth image_def) (metis Suc_le_eq nth_take order_trans)
1.594 + by(auto simp: set_conv_nth image_def) (metis Suc_le_eq nth_take order_trans)
1.595
1.596
1.597 subsubsection \<open>@{const takeWhile} and @{const dropWhile}\<close>
1.598
1.599 lemma length_takeWhile_le: "length (takeWhile P xs) \<le> length xs"
1.600 -by (induct xs) auto
1.601 + by (induct xs) auto
1.602
1.603 lemma takeWhile_dropWhile_id [simp]: "takeWhile P xs @ dropWhile P xs = xs"
1.604 -by (induct xs) auto
1.605 + by (induct xs) auto
1.606
1.607 lemma takeWhile_append1 [simp]:
1.608 "\<lbrakk>x \<in> set xs; \<not>P(x)\<rbrakk> \<Longrightarrow> takeWhile P (xs @ ys) = takeWhile P xs"
1.609 -by (induct xs) auto
1.610 + by (induct xs) auto
1.611
1.612 lemma takeWhile_append2 [simp]:
1.613 "(\<And>x. x \<in> set xs \<Longrightarrow> P x) \<Longrightarrow> takeWhile P (xs @ ys) = xs @ takeWhile P ys"
1.614 -by (induct xs) auto
1.615 + by (induct xs) auto
1.616
1.617 lemma takeWhile_tail: "\<not> P x \<Longrightarrow> takeWhile P (xs @ (x#l)) = takeWhile P xs"
1.618 -by (induct xs) auto
1.619 + by (induct xs) auto
1.620
1.621 lemma takeWhile_nth: "j < length (takeWhile P xs) \<Longrightarrow> takeWhile P xs ! j = xs ! j"
1.622 by (metis nth_append takeWhile_dropWhile_id)
1.623 @@ -2288,62 +2287,62 @@
1.624 by (metis add.commute nth_append_length_plus takeWhile_dropWhile_id)
1.625
1.626 lemma length_dropWhile_le: "length (dropWhile P xs) \<le> length xs"
1.627 -by (induct xs) auto
1.628 + by (induct xs) auto
1.629
1.630 lemma dropWhile_append1 [simp]:
1.631 "\<lbrakk>x \<in> set xs; \<not>P(x)\<rbrakk> \<Longrightarrow> dropWhile P (xs @ ys) = (dropWhile P xs)@ys"
1.632 -by (induct xs) auto
1.633 + by (induct xs) auto
1.634
1.635 lemma dropWhile_append2 [simp]:
1.636 "(\<And>x. x \<in> set xs \<Longrightarrow> P(x)) ==> dropWhile P (xs @ ys) = dropWhile P ys"
1.637 -by (induct xs) auto
1.638 + by (induct xs) auto
1.639
1.640 lemma dropWhile_append3:
1.641 "\<not> P y \<Longrightarrow>dropWhile P (xs @ y # ys) = dropWhile P xs @ y # ys"
1.642 -by (induct xs) auto
1.643 + by (induct xs) auto
1.644
1.645 lemma dropWhile_last:
1.646 "x \<in> set xs \<Longrightarrow> \<not> P x \<Longrightarrow> last (dropWhile P xs) = last xs"
1.647 -by (auto simp add: dropWhile_append3 in_set_conv_decomp)
1.648 + by (auto simp add: dropWhile_append3 in_set_conv_decomp)
1.649
1.650 lemma set_dropWhileD: "x \<in> set (dropWhile P xs) \<Longrightarrow> x \<in> set xs"
1.651 -by (induct xs) (auto split: if_split_asm)
1.652 + by (induct xs) (auto split: if_split_asm)
1.653
1.654 lemma set_takeWhileD: "x \<in> set (takeWhile P xs) \<Longrightarrow> x \<in> set xs \<and> P x"
1.655 -by (induct xs) (auto split: if_split_asm)
1.656 + by (induct xs) (auto split: if_split_asm)
1.657
1.658 lemma takeWhile_eq_all_conv[simp]:
1.659 "(takeWhile P xs = xs) = (\<forall>x \<in> set xs. P x)"
1.660 -by(induct xs, auto)
1.661 + by(induct xs, auto)
1.662
1.663 lemma dropWhile_eq_Nil_conv[simp]:
1.664 "(dropWhile P xs = []) = (\<forall>x \<in> set xs. P x)"
1.665 -by(induct xs, auto)
1.666 + by(induct xs, auto)
1.667
1.668 lemma dropWhile_eq_Cons_conv:
1.669 "(dropWhile P xs = y#ys) = (xs = takeWhile P xs @ y # ys \<and> \<not> P y)"
1.670 -by(induct xs, auto)
1.671 + by(induct xs, auto)
1.672
1.673 lemma distinct_takeWhile[simp]: "distinct xs ==> distinct (takeWhile P xs)"
1.674 -by (induct xs) (auto dest: set_takeWhileD)
1.675 + by (induct xs) (auto dest: set_takeWhileD)
1.676
1.677 lemma distinct_dropWhile[simp]: "distinct xs ==> distinct (dropWhile P xs)"
1.678 -by (induct xs) auto
1.679 + by (induct xs) auto
1.680
1.681 lemma takeWhile_map: "takeWhile P (map f xs) = map f (takeWhile (P \<circ> f) xs)"
1.682 -by (induct xs) auto
1.683 + by (induct xs) auto
1.684
1.685 lemma dropWhile_map: "dropWhile P (map f xs) = map f (dropWhile (P \<circ> f) xs)"
1.686 -by (induct xs) auto
1.687 + by (induct xs) auto
1.688
1.689 lemma takeWhile_eq_take: "takeWhile P xs = take (length (takeWhile P xs)) xs"
1.690 -by (induct xs) auto
1.691 + by (induct xs) auto
1.692
1.693 lemma dropWhile_eq_drop: "dropWhile P xs = drop (length (takeWhile P xs)) xs"
1.694 -by (induct xs) auto
1.695 + by (induct xs) auto
1.696
1.697 lemma hd_dropWhile: "dropWhile P xs \<noteq> [] \<Longrightarrow> \<not> P (hd (dropWhile P xs))"
1.698 -by (induct xs) auto
1.699 + by (induct xs) auto
1.700
1.701 lemma takeWhile_eq_filter:
1.702 assumes "\<And> x. x \<in> set (dropWhile P xs) \<Longrightarrow> \<not> P x"
1.703 @@ -2384,12 +2383,12 @@
1.704 thus "\<not> P (xs ! n')" using Cons by auto
1.705 qed
1.706 ultimately show ?thesis by simp
1.707 - qed
1.708 + qed
1.709 qed
1.710
1.711 lemma nth_length_takeWhile:
1.712 "length (takeWhile P xs) < length xs \<Longrightarrow> \<not> P (xs ! length (takeWhile P xs))"
1.713 -by (induct xs) auto
1.714 + by (induct xs) auto
1.715
1.716 lemma length_takeWhile_less_P_nth:
1.717 assumes all: "\<And> i. i < j \<Longrightarrow> P (xs ! i)" and "j \<le> length xs"
1.718 @@ -2402,7 +2401,7 @@
1.719
1.720 lemma takeWhile_neq_rev: "\<lbrakk>distinct xs; x \<in> set xs\<rbrakk> \<Longrightarrow>
1.721 takeWhile (\<lambda>y. y \<noteq> x) (rev xs) = rev (tl (dropWhile (\<lambda>y. y \<noteq> x) xs))"
1.722 -by(induct xs) (auto simp: takeWhile_tail[where l="[]"])
1.723 + by(induct xs) (auto simp: takeWhile_tail[where l="[]"])
1.724
1.725 lemma dropWhile_neq_rev: "\<lbrakk>distinct xs; x \<in> set xs\<rbrakk> \<Longrightarrow>
1.726 dropWhile (\<lambda>y. y \<noteq> x) (rev xs) = x # rev (takeWhile (\<lambda>y. y \<noteq> x) xs)"
1.727 @@ -2414,34 +2413,34 @@
1.728
1.729 lemma takeWhile_not_last:
1.730 "distinct xs \<Longrightarrow> takeWhile (\<lambda>y. y \<noteq> last xs) xs = butlast xs"
1.731 -by(induction xs rule: induct_list012) auto
1.732 + by(induction xs rule: induct_list012) auto
1.733
1.734 lemma takeWhile_cong [fundef_cong]:
1.735 "\<lbrakk>l = k; \<And>x. x \<in> set l \<Longrightarrow> P x = Q x\<rbrakk>
1.736 \<Longrightarrow> takeWhile P l = takeWhile Q k"
1.737 -by (induct k arbitrary: l) (simp_all)
1.738 + by (induct k arbitrary: l) (simp_all)
1.739
1.740 lemma dropWhile_cong [fundef_cong]:
1.741 "\<lbrakk>l = k; \<And>x. x \<in> set l \<Longrightarrow> P x = Q x\<rbrakk>
1.742 \<Longrightarrow> dropWhile P l = dropWhile Q k"
1.743 -by (induct k arbitrary: l, simp_all)
1.744 + by (induct k arbitrary: l, simp_all)
1.745
1.746 lemma takeWhile_idem [simp]:
1.747 "takeWhile P (takeWhile P xs) = takeWhile P xs"
1.748 -by (induct xs) auto
1.749 + by (induct xs) auto
1.750
1.751 lemma dropWhile_idem [simp]:
1.752 "dropWhile P (dropWhile P xs) = dropWhile P xs"
1.753 -by (induct xs) auto
1.754 + by (induct xs) auto
1.755
1.756
1.757 subsubsection \<open>@{const zip}\<close>
1.758
1.759 lemma zip_Nil [simp]: "zip [] ys = []"
1.760 -by (induct ys) auto
1.761 + by (induct ys) auto
1.762
1.763 lemma zip_Cons_Cons [simp]: "zip (x # xs) (y # ys) = (x, y) # zip xs ys"
1.764 -by simp
1.765 + by simp
1.766
1.767 declare zip_Cons [simp del]
1.768
1.769 @@ -2449,15 +2448,15 @@
1.770 "zip [] ys = []"
1.771 "zip xs [] = []"
1.772 "zip (x # xs) (y # ys) = (x, y) # zip xs ys"
1.773 -by (fact zip_Nil zip.simps(1) zip_Cons_Cons)+
1.774 + by (fact zip_Nil zip.simps(1) zip_Cons_Cons)+
1.775
1.776 lemma zip_Cons1:
1.777 "zip (x#xs) ys = (case ys of [] \<Rightarrow> [] | y#ys \<Rightarrow> (x,y)#zip xs ys)"
1.778 -by(auto split:list.split)
1.779 + by(auto split:list.split)
1.780
1.781 lemma length_zip [simp]:
1.782 "length (zip xs ys) = min (length xs) (length ys)"
1.783 -by (induct xs ys rule:list_induct2') auto
1.784 + by (induct xs ys rule:list_induct2') auto
1.785
1.786 lemma zip_obtain_same_length:
1.787 assumes "\<And>zs ws n. length zs = length ws \<Longrightarrow> n = min (length xs) (length ys)
1.788 @@ -2479,21 +2478,21 @@
1.789 lemma zip_append1:
1.790 "zip (xs @ ys) zs =
1.791 zip xs (take (length xs) zs) @ zip ys (drop (length xs) zs)"
1.792 -by (induct xs zs rule:list_induct2') auto
1.793 + by (induct xs zs rule:list_induct2') auto
1.794
1.795 lemma zip_append2:
1.796 "zip xs (ys @ zs) =
1.797 zip (take (length ys) xs) ys @ zip (drop (length ys) xs) zs"
1.798 -by (induct xs ys rule:list_induct2') auto
1.799 + by (induct xs ys rule:list_induct2') auto
1.800
1.801 lemma zip_append [simp]:
1.802 "[| length xs = length us |] ==>
1.803 zip (xs@ys) (us@vs) = zip xs us @ zip ys vs"
1.804 -by (simp add: zip_append1)
1.805 + by (simp add: zip_append1)
1.806
1.807 lemma zip_rev:
1.808 "length xs = length ys ==> zip (rev xs) (rev ys) = rev (zip xs ys)"
1.809 -by (induct rule:list_induct2, simp_all)
1.810 + by (induct rule:list_induct2, simp_all)
1.811
1.812 lemma zip_map_map:
1.813 "zip (map f xs) (map g ys) = map (\<lambda> (x, y). (f x, g y)) (zip xs ys)"
1.814 @@ -2508,23 +2507,23 @@
1.815
1.816 lemma zip_map1:
1.817 "zip (map f xs) ys = map (\<lambda>(x, y). (f x, y)) (zip xs ys)"
1.818 -using zip_map_map[of f xs "\<lambda>x. x" ys] by simp
1.819 + using zip_map_map[of f xs "\<lambda>x. x" ys] by simp
1.820
1.821 lemma zip_map2:
1.822 "zip xs (map f ys) = map (\<lambda>(x, y). (x, f y)) (zip xs ys)"
1.823 -using zip_map_map[of "\<lambda>x. x" xs f ys] by simp
1.824 + using zip_map_map[of "\<lambda>x. x" xs f ys] by simp
1.825
1.826 lemma map_zip_map:
1.827 "map f (zip (map g xs) ys) = map (%(x,y). f(g x, y)) (zip xs ys)"
1.828 -by (auto simp: zip_map1)
1.829 + by (auto simp: zip_map1)
1.830
1.831 lemma map_zip_map2:
1.832 "map f (zip xs (map g ys)) = map (%(x,y). f(x, g y)) (zip xs ys)"
1.833 -by (auto simp: zip_map2)
1.834 + by (auto simp: zip_map2)
1.835
1.836 text\<open>Courtesy of Andreas Lochbihler:\<close>
1.837 lemma zip_same_conv_map: "zip xs xs = map (\<lambda>x. (x, x)) xs"
1.838 -by(induct xs) auto
1.839 + by(induct xs) auto
1.840
1.841 lemma nth_zip [simp]:
1.842 "[| i < length xs; i < length ys|] ==> (zip xs ys)!i = (xs!i, ys!i)"
1.843 @@ -2536,10 +2535,10 @@
1.844
1.845 lemma set_zip:
1.846 "set (zip xs ys) = {(xs!i, ys!i) | i. i < min (length xs) (length ys)}"
1.847 -by(simp add: set_conv_nth cong: rev_conj_cong)
1.848 + by(simp add: set_conv_nth cong: rev_conj_cong)
1.849
1.850 lemma zip_same: "((a,b) \<in> set (zip xs xs)) = (a \<in> set xs \<and> a = b)"
1.851 -by(induct xs) auto
1.852 + by(induct xs) auto
1.853
1.854 lemma zip_update: "zip (xs[i:=x]) (ys[i:=y]) = (zip xs ys)[i:=(x,y)]"
1.855 by (simp add: update_zip)
1.856 @@ -2553,7 +2552,7 @@
1.857 qed auto
1.858
1.859 lemma zip_replicate1: "zip (replicate n x) ys = map (Pair x) (take n ys)"
1.860 -by(induction ys arbitrary: n)(case_tac [2] n, simp_all)
1.861 + by(induction ys arbitrary: n)(case_tac [2] n, simp_all)
1.862
1.863 lemma take_zip: "take n (zip xs ys) = zip (take n xs) (take n ys)"
1.864 proof (induct n arbitrary: xs ys)
1.865 @@ -2580,26 +2579,26 @@
1.866 qed simp
1.867
1.868 lemma set_zip_leftD: "(x,y)\<in> set (zip xs ys) \<Longrightarrow> x \<in> set xs"
1.869 -by (induct xs ys rule:list_induct2') auto
1.870 + by (induct xs ys rule:list_induct2') auto
1.871
1.872 lemma set_zip_rightD: "(x,y)\<in> set (zip xs ys) \<Longrightarrow> y \<in> set ys"
1.873 -by (induct xs ys rule:list_induct2') auto
1.874 + by (induct xs ys rule:list_induct2') auto
1.875
1.876 lemma in_set_zipE:
1.877 "(x,y) \<in> set(zip xs ys) \<Longrightarrow> (\<lbrakk> x \<in> set xs; y \<in> set ys \<rbrakk> \<Longrightarrow> R) \<Longrightarrow> R"
1.878 -by(blast dest: set_zip_leftD set_zip_rightD)
1.879 + by(blast dest: set_zip_leftD set_zip_rightD)
1.880
1.881 lemma zip_map_fst_snd: "zip (map fst zs) (map snd zs) = zs"
1.882 -by (induct zs) simp_all
1.883 + by (induct zs) simp_all
1.884
1.885 lemma zip_eq_conv:
1.886 "length xs = length ys \<Longrightarrow> zip xs ys = zs \<longleftrightarrow> map fst zs = xs \<and> map snd zs = ys"
1.887 -by (auto simp add: zip_map_fst_snd)
1.888 + by (auto simp add: zip_map_fst_snd)
1.889
1.890 lemma in_set_zip:
1.891 "p \<in> set (zip xs ys) \<longleftrightarrow> (\<exists>n. xs ! n = fst p \<and> ys ! n = snd p
1.892 \<and> n < length xs \<and> n < length ys)"
1.893 -by (cases p) (auto simp add: set_zip)
1.894 + by (cases p) (auto simp add: set_zip)
1.895
1.896 lemma in_set_impl_in_set_zip1:
1.897 assumes "length xs = length ys"
1.898 @@ -2633,25 +2632,25 @@
1.899
1.900 lemma list_all2_lengthD [intro?]:
1.901 "list_all2 P xs ys ==> length xs = length ys"
1.902 -by (simp add: list_all2_iff)
1.903 + by (simp add: list_all2_iff)
1.904
1.905 lemma list_all2_Nil [iff, code]: "list_all2 P [] ys = (ys = [])"
1.906 -by (simp add: list_all2_iff)
1.907 + by (simp add: list_all2_iff)
1.908
1.909 lemma list_all2_Nil2 [iff, code]: "list_all2 P xs [] = (xs = [])"
1.910 -by (simp add: list_all2_iff)
1.911 + by (simp add: list_all2_iff)
1.912
1.913 lemma list_all2_Cons [iff, code]:
1.914 "list_all2 P (x # xs) (y # ys) = (P x y \<and> list_all2 P xs ys)"
1.915 -by (auto simp add: list_all2_iff)
1.916 + by (auto simp add: list_all2_iff)
1.917
1.918 lemma list_all2_Cons1:
1.919 "list_all2 P (x # xs) ys = (\<exists>z zs. ys = z # zs \<and> P x z \<and> list_all2 P xs zs)"
1.920 -by (cases ys) auto
1.921 + by (cases ys) auto
1.922
1.923 lemma list_all2_Cons2:
1.924 "list_all2 P xs (y # ys) = (\<exists>z zs. xs = z # zs \<and> P z y \<and> list_all2 P zs ys)"
1.925 -by (cases xs) auto
1.926 + by (cases xs) auto
1.927
1.928 lemma list_all2_induct
1.929 [consumes 1, case_names Nil Cons, induct set: list_all2]:
1.930 @@ -2660,16 +2659,16 @@
1.931 assumes Cons: "\<And>x xs y ys.
1.932 \<lbrakk>P x y; list_all2 P xs ys; R xs ys\<rbrakk> \<Longrightarrow> R (x # xs) (y # ys)"
1.933 shows "R xs ys"
1.934 -using P
1.935 -by (induct xs arbitrary: ys) (auto simp add: list_all2_Cons1 Nil Cons)
1.936 + using P
1.937 + by (induct xs arbitrary: ys) (auto simp add: list_all2_Cons1 Nil Cons)
1.938
1.939 lemma list_all2_rev [iff]:
1.940 "list_all2 P (rev xs) (rev ys) = list_all2 P xs ys"
1.941 -by (simp add: list_all2_iff zip_rev cong: conj_cong)
1.942 + by (simp add: list_all2_iff zip_rev cong: conj_cong)
1.943
1.944 lemma list_all2_rev1:
1.945 "list_all2 P (rev xs) ys = list_all2 P xs (rev ys)"
1.946 -by (subst list_all2_rev [symmetric]) simp
1.947 + by (subst list_all2_rev [symmetric]) simp
1.948
1.949 lemma list_all2_append1:
1.950 "list_all2 P (xs @ ys) zs =
1.951 @@ -2708,21 +2707,21 @@
1.952 lemma list_all2_append:
1.953 "length xs = length ys \<Longrightarrow>
1.954 list_all2 P (xs@us) (ys@vs) = (list_all2 P xs ys \<and> list_all2 P us vs)"
1.955 -by (induct rule:list_induct2, simp_all)
1.956 + by (induct rule:list_induct2, simp_all)
1.957
1.958 lemma list_all2_appendI [intro?, trans]:
1.959 "\<lbrakk> list_all2 P a b; list_all2 P c d \<rbrakk> \<Longrightarrow> list_all2 P (a@c) (b@d)"
1.960 -by (simp add: list_all2_append list_all2_lengthD)
1.961 + by (simp add: list_all2_append list_all2_lengthD)
1.962
1.963 lemma list_all2_conv_all_nth:
1.964 "list_all2 P xs ys =
1.965 (length xs = length ys \<and> (\<forall>i < length xs. P (xs!i) (ys!i)))"
1.966 -by (force simp add: list_all2_iff set_zip)
1.967 + by (force simp add: list_all2_iff set_zip)
1.968
1.969 lemma list_all2_trans:
1.970 assumes tr: "!!a b c. P1 a b ==> P2 b c ==> P3 a c"
1.971 shows "!!bs cs. list_all2 P1 as bs ==> list_all2 P2 bs cs ==> list_all2 P3 as cs"
1.972 - (is "!!bs cs. PROP ?Q as bs cs")
1.973 + (is "!!bs cs. PROP ?Q as bs cs")
1.974 proof (induct as)
1.975 fix x xs bs assume I1: "!!bs cs. PROP ?Q xs bs cs"
1.976 show "!!cs. PROP ?Q (x # xs) bs cs"
1.977 @@ -2735,35 +2734,35 @@
1.978
1.979 lemma list_all2_all_nthI [intro?]:
1.980 "length a = length b \<Longrightarrow> (\<And>n. n < length a \<Longrightarrow> P (a!n) (b!n)) \<Longrightarrow> list_all2 P a b"
1.981 -by (simp add: list_all2_conv_all_nth)
1.982 + by (simp add: list_all2_conv_all_nth)
1.983
1.984 lemma list_all2I:
1.985 "\<forall>x \<in> set (zip a b). case_prod P x \<Longrightarrow> length a = length b \<Longrightarrow> list_all2 P a b"
1.986 -by (simp add: list_all2_iff)
1.987 + by (simp add: list_all2_iff)
1.988
1.989 lemma list_all2_nthD:
1.990 "\<lbrakk> list_all2 P xs ys; p < size xs \<rbrakk> \<Longrightarrow> P (xs!p) (ys!p)"
1.991 -by (simp add: list_all2_conv_all_nth)
1.992 + by (simp add: list_all2_conv_all_nth)
1.993
1.994 lemma list_all2_nthD2:
1.995 "\<lbrakk>list_all2 P xs ys; p < size ys\<rbrakk> \<Longrightarrow> P (xs!p) (ys!p)"
1.996 -by (frule list_all2_lengthD) (auto intro: list_all2_nthD)
1.997 + by (frule list_all2_lengthD) (auto intro: list_all2_nthD)
1.998
1.999 lemma list_all2_map1:
1.1000 "list_all2 P (map f as) bs = list_all2 (\<lambda>x y. P (f x) y) as bs"
1.1001 -by (simp add: list_all2_conv_all_nth)
1.1002 + by (simp add: list_all2_conv_all_nth)
1.1003
1.1004 lemma list_all2_map2:
1.1005 "list_all2 P as (map f bs) = list_all2 (\<lambda>x y. P x (f y)) as bs"
1.1006 -by (auto simp add: list_all2_conv_all_nth)
1.1007 + by (auto simp add: list_all2_conv_all_nth)
1.1008
1.1009 lemma list_all2_refl [intro?]:
1.1010 "(\<And>x. P x x) \<Longrightarrow> list_all2 P xs xs"
1.1011 -by (simp add: list_all2_conv_all_nth)
1.1012 + by (simp add: list_all2_conv_all_nth)
1.1013
1.1014 lemma list_all2_update_cong:
1.1015 "\<lbrakk> list_all2 P xs ys; P x y \<rbrakk> \<Longrightarrow> list_all2 P (xs[i:=x]) (ys[i:=y])"
1.1016 -by (cases "i < length ys") (auto simp add: list_all2_conv_all_nth nth_list_update)
1.1017 + by (cases "i < length ys") (auto simp add: list_all2_conv_all_nth nth_list_update)
1.1018
1.1019 lemma list_all2_takeI [simp,intro?]:
1.1020 "list_all2 P xs ys \<Longrightarrow> list_all2 P (take n xs) (take n ys)"
1.1021 @@ -2787,46 +2786,46 @@
1.1022
1.1023 lemma list_all2_eq:
1.1024 "xs = ys \<longleftrightarrow> list_all2 (=) xs ys"
1.1025 -by (induct xs ys rule: list_induct2') auto
1.1026 + by (induct xs ys rule: list_induct2') auto
1.1027
1.1028 lemma list_eq_iff_zip_eq:
1.1029 "xs = ys \<longleftrightarrow> length xs = length ys \<and> (\<forall>(x,y) \<in> set (zip xs ys). x = y)"
1.1030 -by(auto simp add: set_zip list_all2_eq list_all2_conv_all_nth cong: conj_cong)
1.1031 + by(auto simp add: set_zip list_all2_eq list_all2_conv_all_nth cong: conj_cong)
1.1032
1.1033 lemma list_all2_same: "list_all2 P xs xs \<longleftrightarrow> (\<forall>x\<in>set xs. P x x)"
1.1034 -by(auto simp add: list_all2_conv_all_nth set_conv_nth)
1.1035 + by(auto simp add: list_all2_conv_all_nth set_conv_nth)
1.1036
1.1037 lemma zip_assoc:
1.1038 "zip xs (zip ys zs) = map (\<lambda>((x, y), z). (x, y, z)) (zip (zip xs ys) zs)"
1.1039 -by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1040 + by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1041
1.1042 lemma zip_commute: "zip xs ys = map (\<lambda>(x, y). (y, x)) (zip ys xs)"
1.1043 -by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1044 + by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1045
1.1046 lemma zip_left_commute:
1.1047 "zip xs (zip ys zs) = map (\<lambda>(y, (x, z)). (x, y, z)) (zip ys (zip xs zs))"
1.1048 -by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1049 + by(rule list_all2_all_nthI[where P="(=)", unfolded list.rel_eq]) simp_all
1.1050
1.1051 lemma zip_replicate2: "zip xs (replicate n y) = map (\<lambda>x. (x, y)) (take n xs)"
1.1052 -by(subst zip_commute)(simp add: zip_replicate1)
1.1053 + by(subst zip_commute)(simp add: zip_replicate1)
1.1054
1.1055 subsubsection \<open>@{const List.product} and @{const product_lists}\<close>
1.1056
1.1057 lemma product_concat_map:
1.1058 "List.product xs ys = concat (map (\<lambda>x. map (\<lambda>y. (x,y)) ys) xs)"
1.1059 -by(induction xs) (simp)+
1.1060 + by(induction xs) (simp)+
1.1061
1.1062 lemma set_product[simp]: "set (List.product xs ys) = set xs \<times> set ys"
1.1063 -by (induct xs) auto
1.1064 + by (induct xs) auto
1.1065
1.1066 lemma length_product [simp]:
1.1067 "length (List.product xs ys) = length xs * length ys"
1.1068 -by (induct xs) simp_all
1.1069 + by (induct xs) simp_all
1.1070
1.1071 lemma product_nth:
1.1072 assumes "n < length xs * length ys"
1.1073 shows "List.product xs ys ! n = (xs ! (n div length ys), ys ! (n mod length ys))"
1.1074 -using assms proof (induct xs arbitrary: n)
1.1075 + using assms proof (induct xs arbitrary: n)
1.1076 case Nil then show ?case by simp
1.1077 next
1.1078 case (Cons x xs n)
1.1079 @@ -2837,7 +2836,7 @@
1.1080
1.1081 lemma in_set_product_lists_length:
1.1082 "xs \<in> set (product_lists xss) \<Longrightarrow> length xs = length xss"
1.1083 -by (induct xss arbitrary: xs) auto
1.1084 + by (induct xss arbitrary: xs) auto
1.1085
1.1086 lemma product_lists_set:
1.1087 "set (product_lists xss) = {xs. list_all2 (\<lambda>x ys. x \<in> set ys) xs xss}" (is "?L = Collect ?R")
1.1088 @@ -2856,25 +2855,25 @@
1.1089 lemma fold_simps [code]: \<comment> \<open>eta-expanded variant for generated code -- enables tail-recursion optimisation in Scala\<close>
1.1090 "fold f [] s = s"
1.1091 "fold f (x # xs) s = fold f xs (f x s)"
1.1092 -by simp_all
1.1093 + by simp_all
1.1094
1.1095 lemma fold_remove1_split:
1.1096 "\<lbrakk> \<And>x y. x \<in> set xs \<Longrightarrow> y \<in> set xs \<Longrightarrow> f x \<circ> f y = f y \<circ> f x;
1.1097 x \<in> set xs \<rbrakk>
1.1098 \<Longrightarrow> fold f xs = fold f (remove1 x xs) \<circ> f x"
1.1099 -by (induct xs) (auto simp add: comp_assoc)
1.1100 + by (induct xs) (auto simp add: comp_assoc)
1.1101
1.1102 lemma fold_cong [fundef_cong]:
1.1103 "a = b \<Longrightarrow> xs = ys \<Longrightarrow> (\<And>x. x \<in> set xs \<Longrightarrow> f x = g x)
1.1104 \<Longrightarrow> fold f xs a = fold g ys b"
1.1105 -by (induct ys arbitrary: a b xs) simp_all
1.1106 + by (induct ys arbitrary: a b xs) simp_all
1.1107
1.1108 lemma fold_id: "(\<And>x. x \<in> set xs \<Longrightarrow> f x = id) \<Longrightarrow> fold f xs = id"
1.1109 -by (induct xs) simp_all
1.1110 + by (induct xs) simp_all
1.1111
1.1112 lemma fold_commute:
1.1113 "(\<And>x. x \<in> set xs \<Longrightarrow> h \<circ> g x = f x \<circ> h) \<Longrightarrow> h \<circ> fold g xs = fold f xs \<circ> h"
1.1114 -by (induct xs) (simp_all add: fun_eq_iff)
1.1115 + by (induct xs) (simp_all add: fun_eq_iff)
1.1116
1.1117 lemma fold_commute_apply:
1.1118 assumes "\<And>x. x \<in> set xs \<Longrightarrow> h \<circ> g x = f x \<circ> h"
1.1119 @@ -2887,41 +2886,41 @@
1.1120 lemma fold_invariant:
1.1121 "\<lbrakk> \<And>x. x \<in> set xs \<Longrightarrow> Q x; P s; \<And>x s. Q x \<Longrightarrow> P s \<Longrightarrow> P (f x s) \<rbrakk>
1.1122 \<Longrightarrow> P (fold f xs s)"
1.1123 -by (induct xs arbitrary: s) simp_all
1.1124 + by (induct xs arbitrary: s) simp_all
1.1125
1.1126 lemma fold_append [simp]: "fold f (xs @ ys) = fold f ys \<circ> fold f xs"
1.1127 -by (induct xs) simp_all
1.1128 + by (induct xs) simp_all
1.1129
1.1130 lemma fold_map [code_unfold]: "fold g (map f xs) = fold (g \<circ> f) xs"
1.1131 -by (induct xs) simp_all
1.1132 + by (induct xs) simp_all
1.1133
1.1134 lemma fold_filter:
1.1135 "fold f (filter P xs) = fold (\<lambda>x. if P x then f x else id) xs"
1.1136 -by (induct xs) simp_all
1.1137 + by (induct xs) simp_all
1.1138
1.1139 lemma fold_rev:
1.1140 "(\<And>x y. x \<in> set xs \<Longrightarrow> y \<in> set xs \<Longrightarrow> f y \<circ> f x = f x \<circ> f y)
1.1141 \<Longrightarrow> fold f (rev xs) = fold f xs"
1.1142 -by (induct xs) (simp_all add: fold_commute_apply fun_eq_iff)
1.1143 + by (induct xs) (simp_all add: fold_commute_apply fun_eq_iff)
1.1144
1.1145 lemma fold_Cons_rev: "fold Cons xs = append (rev xs)"
1.1146 -by (induct xs) simp_all
1.1147 + by (induct xs) simp_all
1.1148
1.1149 lemma rev_conv_fold [code]: "rev xs = fold Cons xs []"
1.1150 -by (simp add: fold_Cons_rev)
1.1151 + by (simp add: fold_Cons_rev)
1.1152
1.1153 lemma fold_append_concat_rev: "fold append xss = append (concat (rev xss))"
1.1154 -by (induct xss) simp_all
1.1155 + by (induct xss) simp_all
1.1156
1.1157 text \<open>@{const Finite_Set.fold} and @{const fold}\<close>
1.1158
1.1159 lemma (in comp_fun_commute) fold_set_fold_remdups:
1.1160 "Finite_Set.fold f y (set xs) = fold f (remdups xs) y"
1.1161 -by (rule sym, induct xs arbitrary: y) (simp_all add: fold_fun_left_comm insert_absorb)
1.1162 + by (rule sym, induct xs arbitrary: y) (simp_all add: fold_fun_left_comm insert_absorb)
1.1163
1.1164 lemma (in comp_fun_idem) fold_set_fold:
1.1165 "Finite_Set.fold f y (set xs) = fold f xs y"
1.1166 -by (rule sym, induct xs arbitrary: y) (simp_all add: fold_fun_left_comm)
1.1167 + by (rule sym, induct xs arbitrary: y) (simp_all add: fold_fun_left_comm)
1.1168
1.1169 lemma union_set_fold [code]: "set xs \<union> A = fold Set.insert xs A"
1.1170 proof -
1.1171 @@ -2932,7 +2931,7 @@
1.1172
1.1173 lemma union_coset_filter [code]:
1.1174 "List.coset xs \<union> A = List.coset (List.filter (\<lambda>x. x \<notin> A) xs)"
1.1175 -by auto
1.1176 + by auto
1.1177
1.1178 lemma minus_set_fold [code]: "A - set xs = fold Set.remove xs A"
1.1179 proof -
1.1180 @@ -2944,15 +2943,15 @@
1.1181
1.1182 lemma minus_coset_filter [code]:
1.1183 "A - List.coset xs = set (List.filter (\<lambda>x. x \<in> A) xs)"
1.1184 -by auto
1.1185 + by auto
1.1186
1.1187 lemma inter_set_filter [code]:
1.1188 "A \<inter> set xs = set (List.filter (\<lambda>x. x \<in> A) xs)"
1.1189 -by auto
1.1190 + by auto
1.1191
1.1192 lemma inter_coset_fold [code]:
1.1193 "A \<inter> List.coset xs = fold Set.remove xs A"
1.1194 -by (simp add: Diff_eq [symmetric] minus_set_fold)
1.1195 + by (simp add: Diff_eq [symmetric] minus_set_fold)
1.1196
1.1197 lemma (in semilattice_set) set_eq_fold [code]:
1.1198 "F (set (x # xs)) = fold f xs x"
1.1199 @@ -3000,70 +2999,70 @@
1.1200 text \<open>Correspondence\<close>
1.1201
1.1202 lemma foldr_conv_fold [code_abbrev]: "foldr f xs = fold f (rev xs)"
1.1203 -by (induct xs) simp_all
1.1204 + by (induct xs) simp_all
1.1205
1.1206 lemma foldl_conv_fold: "foldl f s xs = fold (\<lambda>x s. f s x) xs s"
1.1207 -by (induct xs arbitrary: s) simp_all
1.1208 + by (induct xs arbitrary: s) simp_all
1.1209
1.1210 lemma foldr_conv_foldl: \<comment> \<open>The ``Third Duality Theorem'' in Bird \& Wadler:\<close>
1.1211 "foldr f xs a = foldl (\<lambda>x y. f y x) a (rev xs)"
1.1212 -by (simp add: foldr_conv_fold foldl_conv_fold)
1.1213 + by (simp add: foldr_conv_fold foldl_conv_fold)
1.1214
1.1215 lemma foldl_conv_foldr:
1.1216 "foldl f a xs = foldr (\<lambda>x y. f y x) (rev xs) a"
1.1217 -by (simp add: foldr_conv_fold foldl_conv_fold)
1.1218 + by (simp add: foldr_conv_fold foldl_conv_fold)
1.1219
1.1220 lemma foldr_fold:
1.1221 "(\<And>x y. x \<in> set xs \<Longrightarrow> y \<in> set xs \<Longrightarrow> f y \<circ> f x = f x \<circ> f y)
1.1222 \<Longrightarrow> foldr f xs = fold f xs"
1.1223 -unfolding foldr_conv_fold by (rule fold_rev)
1.1224 + unfolding foldr_conv_fold by (rule fold_rev)
1.1225
1.1226 lemma foldr_cong [fundef_cong]:
1.1227 "a = b \<Longrightarrow> l = k \<Longrightarrow> (\<And>a x. x \<in> set l \<Longrightarrow> f x a = g x a) \<Longrightarrow> foldr f l a = foldr g k b"
1.1228 -by (auto simp add: foldr_conv_fold intro!: fold_cong)
1.1229 + by (auto simp add: foldr_conv_fold intro!: fold_cong)
1.1230
1.1231 lemma foldl_cong [fundef_cong]:
1.1232 "a = b \<Longrightarrow> l = k \<Longrightarrow> (\<And>a x. x \<in> set l \<Longrightarrow> f a x = g a x) \<Longrightarrow> foldl f a l = foldl g b k"
1.1233 -by (auto simp add: foldl_conv_fold intro!: fold_cong)
1.1234 + by (auto simp add: foldl_conv_fold intro!: fold_cong)
1.1235
1.1236 lemma foldr_append [simp]: "foldr f (xs @ ys) a = foldr f xs (foldr f ys a)"
1.1237 -by (simp add: foldr_conv_fold)
1.1238 + by (simp add: foldr_conv_fold)
1.1239
1.1240 lemma foldl_append [simp]: "foldl f a (xs @ ys) = foldl f (foldl f a xs) ys"
1.1241 -by (simp add: foldl_conv_fold)
1.1242 + by (simp add: foldl_conv_fold)
1.1243
1.1244 lemma foldr_map [code_unfold]: "foldr g (map f xs) a = foldr (g \<circ> f) xs a"
1.1245 -by (simp add: foldr_conv_fold fold_map rev_map)
1.1246 + by (simp add: foldr_conv_fold fold_map rev_map)
1.1247
1.1248 lemma foldr_filter:
1.1249 "foldr f (filter P xs) = foldr (\<lambda>x. if P x then f x else id) xs"
1.1250 -by (simp add: foldr_conv_fold rev_filter fold_filter)
1.1251 + by (simp add: foldr_conv_fold rev_filter fold_filter)
1.1252
1.1253 lemma foldl_map [code_unfold]:
1.1254 "foldl g a (map f xs) = foldl (\<lambda>a x. g a (f x)) a xs"
1.1255 -by (simp add: foldl_conv_fold fold_map comp_def)
1.1256 + by (simp add: foldl_conv_fold fold_map comp_def)
1.1257
1.1258 lemma concat_conv_foldr [code]:
1.1259 "concat xss = foldr append xss []"
1.1260 -by (simp add: fold_append_concat_rev foldr_conv_fold)
1.1261 + by (simp add: fold_append_concat_rev foldr_conv_fold)
1.1262
1.1263
1.1264 subsubsection \<open>@{const upt}\<close>
1.1265
1.1266 lemma upt_rec[code]: "[i..<j] = (if i<j then i#[Suc i..<j] else [])"
1.1267 -\<comment> \<open>simp does not terminate!\<close>
1.1268 -by (induct j) auto
1.1269 + \<comment> \<open>simp does not terminate!\<close>
1.1270 + by (induct j) auto
1.1271
1.1272 lemmas upt_rec_numeral[simp] = upt_rec[of "numeral m" "numeral n"] for m n
1.1273
1.1274 lemma upt_conv_Nil [simp]: "j \<le> i ==> [i..<j] = []"
1.1275 -by (subst upt_rec) simp
1.1276 + by (subst upt_rec) simp
1.1277
1.1278 lemma upt_eq_Nil_conv[simp]: "([i..<j] = []) = (j = 0 \<or> j \<le> i)"
1.1279 -by(induct j)simp_all
1.1280 + by(induct j)simp_all
1.1281
1.1282 lemma upt_eq_Cons_conv:
1.1283 - "([i..<j] = x#xs) = (i < j \<and> i = x \<and> [i+1..<j] = xs)"
1.1284 + "([i..<j] = x#xs) = (i < j \<and> i = x \<and> [i+1..<j] = xs)"
1.1285 proof (induct j arbitrary: x xs)
1.1286 case (Suc j)
1.1287 then show ?case
1.1288 @@ -3071,11 +3070,11 @@
1.1289 qed simp
1.1290
1.1291 lemma upt_Suc_append: "i \<le> j ==> [i..<(Suc j)] = [i..<j]@[j]"
1.1292 -\<comment> \<open>Only needed if \<open>upt_Suc\<close> is deleted from the simpset.\<close>
1.1293 -by simp
1.1294 + \<comment> \<open>Only needed if \<open>upt_Suc\<close> is deleted from the simpset.\<close>
1.1295 + by simp
1.1296
1.1297 lemma upt_conv_Cons: "i < j ==> [i..<j] = i # [Suc i..<j]"
1.1298 -by (simp add: upt_rec)
1.1299 + by (simp add: upt_rec)
1.1300
1.1301 lemma upt_conv_Cons_Cons: \<comment> \<open>no precondition\<close>
1.1302 "m # n # ns = [m..<q] \<longleftrightarrow> n # ns = [Suc m..<q]"
1.1303 @@ -3086,23 +3085,23 @@
1.1304 qed
1.1305
1.1306 lemma upt_add_eq_append: "i<=j ==> [i..<j+k] = [i..<j]@[j..<j+k]"
1.1307 -\<comment> \<open>LOOPS as a simprule, since \<open>j \<le> j\<close>.\<close>
1.1308 -by (induct k) auto
1.1309 + \<comment> \<open>LOOPS as a simprule, since \<open>j \<le> j\<close>.\<close>
1.1310 + by (induct k) auto
1.1311
1.1312 lemma length_upt [simp]: "length [i..<j] = j - i"
1.1313 -by (induct j) (auto simp add: Suc_diff_le)
1.1314 + by (induct j) (auto simp add: Suc_diff_le)
1.1315
1.1316 lemma nth_upt [simp]: "i + k < j ==> [i..<j] ! k = i + k"
1.1317 -by (induct j) (auto simp add: less_Suc_eq nth_append split: nat_diff_split)
1.1318 + by (induct j) (auto simp add: less_Suc_eq nth_append split: nat_diff_split)
1.1319
1.1320 lemma hd_upt[simp]: "i < j \<Longrightarrow> hd[i..<j] = i"
1.1321 -by(simp add:upt_conv_Cons)
1.1322 + by(simp add:upt_conv_Cons)
1.1323
1.1324 lemma tl_upt: "tl [m..<n] = [Suc m..<n]"
1.1325 -by (simp add: upt_rec)
1.1326 + by (simp add: upt_rec)
1.1327
1.1328 lemma last_upt[simp]: "i < j \<Longrightarrow> last[i..<j] = j - 1"
1.1329 -by(cases j)(auto simp: upt_Suc_append)
1.1330 + by(cases j)(auto simp: upt_Suc_append)
1.1331
1.1332 lemma take_upt [simp]: "i+m \<le> n ==> take m [i..<n] = [i..<i+m]"
1.1333 proof (induct m arbitrary: i)
1.1334 @@ -3112,26 +3111,26 @@
1.1335 qed simp
1.1336
1.1337 lemma drop_upt[simp]: "drop m [i..<j] = [i+m..<j]"
1.1338 -by(induct j) auto
1.1339 + by(induct j) auto
1.1340
1.1341 lemma map_Suc_upt: "map Suc [m..<n] = [Suc m..<Suc n]"
1.1342 -by (induct n) auto
1.1343 + by (induct n) auto
1.1344
1.1345 lemma map_add_upt: "map (\<lambda>i. i + n) [0..<m] = [n..<m + n]"
1.1346 -by (induct m) simp_all
1.1347 + by (induct m) simp_all
1.1348
1.1349 lemma nth_map_upt: "i < n-m ==> (map f [m..<n]) ! i = f(m+i)"
1.1350 proof (induct n m arbitrary: i rule: diff_induct)
1.1351 -case (3 x y)
1.1352 + case (3 x y)
1.1353 then show ?case
1.1354 by (metis add.commute length_upt less_diff_conv nth_map nth_upt)
1.1355 qed auto
1.1356
1.1357 lemma map_decr_upt: "map (\<lambda>n. n - Suc 0) [Suc m..<Suc n] = [m..<n]"
1.1358 -by (induct n) simp_all
1.1359 + by (induct n) simp_all
1.1360
1.1361 lemma map_upt_Suc: "map f [0 ..< Suc n] = f 0 # map (\<lambda>i. f (Suc i)) [0 ..< n]"
1.1362 -by (induct n arbitrary: f) auto
1.1363 + by (induct n arbitrary: f) auto
1.1364
1.1365 lemma nth_take_lemma:
1.1366 "k \<le> length xs \<Longrightarrow> k \<le> length ys \<Longrightarrow>
1.1367 @@ -3145,24 +3144,20 @@
1.1368
1.1369 lemma nth_equalityI:
1.1370 "[| length xs = length ys; \<forall>i < length xs. xs!i = ys!i |] ==> xs = ys"
1.1371 -by (frule nth_take_lemma [OF le_refl eq_imp_le]) simp_all
1.1372 + by (frule nth_take_lemma [OF le_refl eq_imp_le]) simp_all
1.1373
1.1374 lemma map_nth:
1.1375 "map (\<lambda>i. xs ! i) [0..<length xs] = xs"
1.1376 -by (rule nth_equalityI, auto)
1.1377 + by (rule nth_equalityI, auto)
1.1378
1.1379 lemma list_all2_antisym:
1.1380 "\<lbrakk> (\<And>x y. \<lbrakk>P x y; Q y x\<rbrakk> \<Longrightarrow> x = y); list_all2 P xs ys; list_all2 Q ys xs \<rbrakk>
1.1381 \<Longrightarrow> xs = ys"
1.1382 -apply (simp add: list_all2_conv_all_nth)
1.1383 -apply (rule nth_equalityI, blast, simp)
1.1384 -done
1.1385 + by (simp add: list_all2_conv_all_nth nth_equalityI)
1.1386
1.1387 lemma take_equalityI: "(\<forall>i. take i xs = take i ys) ==> xs = ys"
1.1388 \<comment> \<open>The famous take-lemma.\<close>
1.1389 -apply (drule_tac x = "max (length xs) (length ys)" in spec)
1.1390 -apply (simp add: le_max_iff_disj)
1.1391 -done
1.1392 + by (metis length_take min.commute order_refl take_all)
1.1393
1.1394 lemma take_Cons':
1.1395 "take n (x # xs) = (if n = 0 then [] else x # take (n - 1) xs)"
1.1396 @@ -3239,7 +3234,7 @@
1.1397 qed
1.1398
1.1399 lemma nth_upto: "i + int k \<le> j \<Longrightarrow> [i..j] ! k = i + int k"
1.1400 -apply(induction i j arbitrary: k rule: upto.induct)
1.1401 + apply(induction i j arbitrary: k rule: upto.induct)
1.1402 apply(subst upto_rec1)
1.1403 apply(auto simp add: nth_Cons')
1.1404 done
1.1405 @@ -3310,12 +3305,11 @@
1.1406
1.1407 lemma length_remdups_eq[iff]:
1.1408 "(length (remdups xs) = length xs) = (remdups xs = xs)"
1.1409 -apply(induct xs)
1.1410 - apply auto
1.1411 -apply(subgoal_tac "length (remdups xs) \<le> length xs")
1.1412 - apply arith
1.1413 -apply(rule length_remdups_leq)
1.1414 -done
1.1415 +proof (induct xs)
1.1416 + case (Cons a xs)
1.1417 + then show ?case
1.1418 + by simp (metis Suc_n_not_le_n impossible_Cons length_remdups_leq)
1.1419 +qed auto
1.1420
1.1421 lemma remdups_filter: "remdups(filter P xs) = filter P (remdups xs)"
1.1422 by (induct xs) auto
1.1423 @@ -3341,31 +3335,38 @@
1.1424 done
1.1425
1.1426 lemma distinct_take[simp]: "distinct xs \<Longrightarrow> distinct (take i xs)"
1.1427 -apply(induct xs arbitrary: i)
1.1428 - apply simp
1.1429 -apply (case_tac i)
1.1430 - apply simp_all
1.1431 -apply(blast dest:in_set_takeD)
1.1432 -done
1.1433 +proof (induct xs arbitrary: i)
1.1434 + case (Cons a xs)
1.1435 + then show ?case
1.1436 + by (metis Cons.prems append_take_drop_id distinct_append)
1.1437 +qed auto
1.1438
1.1439 lemma distinct_drop[simp]: "distinct xs \<Longrightarrow> distinct (drop i xs)"
1.1440 -apply(induct xs arbitrary: i)
1.1441 - apply simp
1.1442 -apply (case_tac i)
1.1443 - apply simp_all
1.1444 -done
1.1445 +proof (induct xs arbitrary: i)
1.1446 + case (Cons a xs)
1.1447 + then show ?case
1.1448 + by (metis Cons.prems append_take_drop_id distinct_append)
1.1449 +qed auto
1.1450
1.1451 lemma distinct_list_update:
1.1452 -assumes d: "distinct xs" and a: "a \<notin> set xs - {xs!i}"
1.1453 -shows "distinct (xs[i:=a])"
1.1454 + assumes d: "distinct xs" and a: "a \<notin> set xs - {xs!i}"
1.1455 + shows "distinct (xs[i:=a])"
1.1456 proof (cases "i < length xs")
1.1457 case True
1.1458 - with a have "a \<notin> set (take i xs @ xs ! i # drop (Suc i) xs) - {xs!i}"
1.1459 - apply (drule_tac id_take_nth_drop) by simp
1.1460 - with d True show ?thesis
1.1461 - apply (simp add: upd_conv_take_nth_drop)
1.1462 - apply (drule subst [OF id_take_nth_drop]) apply assumption
1.1463 - apply simp apply (cases "a = xs!i") apply simp by blast
1.1464 + with a have anot: "a \<notin> set (take i xs @ xs ! i # drop (Suc i) xs) - {xs!i}"
1.1465 + by simp (metis in_set_dropD in_set_takeD)
1.1466 + show ?thesis
1.1467 + proof (cases "a = xs!i")
1.1468 + case True
1.1469 + with d show ?thesis
1.1470 + by auto
1.1471 + next
1.1472 + case False
1.1473 + then show ?thesis
1.1474 + using d anot \<open>i < length xs\<close>
1.1475 + apply (simp add: upd_conv_take_nth_drop)
1.1476 + by (metis disjoint_insert(1) distinct_append id_take_nth_drop set_simps(2))
1.1477 + qed
1.1478 next
1.1479 case False with d show ?thesis by auto
1.1480 qed
1.1481 @@ -3380,22 +3381,14 @@
1.1482 text \<open>It is best to avoid this indexed version of distinct, but
1.1483 sometimes it is useful.\<close>
1.1484
1.1485 -lemma distinct_conv_nth:
1.1486 -"distinct xs = (\<forall>i < size xs. \<forall>j < size xs. i \<noteq> j \<longrightarrow> xs!i \<noteq> xs!j)"
1.1487 -apply (induct xs, simp, simp)
1.1488 -apply (rule iffI, clarsimp)
1.1489 - apply (case_tac i)
1.1490 -apply (case_tac j, simp)
1.1491 -apply (simp add: set_conv_nth)
1.1492 - apply (case_tac j)
1.1493 -apply (clarsimp simp add: set_conv_nth, simp)
1.1494 -apply (rule conjI)
1.1495 - apply (clarsimp simp add: set_conv_nth)
1.1496 - apply (erule_tac x = 0 in allE, simp)
1.1497 - apply (erule_tac x = "Suc i" in allE, simp, clarsimp)
1.1498 -apply (erule_tac x = "Suc i" in allE, simp)
1.1499 -apply (erule_tac x = "Suc j" in allE, simp)
1.1500 -done
1.1501 +lemma distinct_conv_nth: "distinct xs = (\<forall>i < size xs. \<forall>j < size xs. i \<noteq> j \<longrightarrow> xs!i \<noteq> xs!j)"
1.1502 +proof (induct xs)
1.1503 + case (Cons x xs)
1.1504 + show ?case
1.1505 + apply (auto simp add: Cons nth_Cons split: nat.split_asm)
1.1506 + apply (metis Suc_less_eq2 in_set_conv_nth less_not_refl zero_less_Suc)+
1.1507 + done
1.1508 +qed auto
1.1509
1.1510 lemma nth_eq_iff_index_eq:
1.1511 "\<lbrakk> distinct xs; i < length xs; j < length xs \<rbrakk> \<Longrightarrow> (xs!i = xs!j) = (i = j)"
1.1512 @@ -3420,7 +3413,7 @@
1.1513
1.1514 lemma distinct_swap[simp]: "\<lbrakk> i < size xs; j < size xs \<rbrakk> \<Longrightarrow>
1.1515 distinct(xs[i := xs!j, j := xs!i]) = distinct xs"
1.1516 -apply (simp add: distinct_conv_nth nth_list_update)
1.1517 + apply (simp add: distinct_conv_nth nth_list_update)
1.1518 apply safe
1.1519 apply metis+
1.1520 done
1.1521 @@ -3434,8 +3427,6 @@
1.1522
1.1523 lemma card_distinct: "card (set xs) = size xs ==> distinct xs"
1.1524 proof (induct xs)
1.1525 - case Nil thus ?case by simp
1.1526 -next
1.1527 case (Cons x xs)
1.1528 show ?case
1.1529 proof (cases "x \<in> set xs")
1.1530 @@ -3448,17 +3439,20 @@
1.1531 ultimately have False by simp
1.1532 thus ?thesis ..
1.1533 qed
1.1534 -qed
1.1535 +qed simp
1.1536
1.1537 lemma distinct_length_filter: "distinct xs \<Longrightarrow> length (filter P xs) = card ({x. P x} Int set xs)"
1.1538 by (induct xs) (auto)
1.1539
1.1540 lemma not_distinct_decomp: "\<not> distinct ws \<Longrightarrow> \<exists>xs ys zs y. ws = xs@[y]@ys@[y]@zs"
1.1541 -apply (induct n == "length ws" arbitrary:ws) apply simp
1.1542 -apply(case_tac ws) apply simp
1.1543 -apply (simp split:if_split_asm)
1.1544 -apply (metis Cons_eq_appendI eq_Nil_appendI split_list)
1.1545 -done
1.1546 +proof (induct n == "length ws" arbitrary:ws)
1.1547 + case (Suc n ws)
1.1548 + then show ?case
1.1549 + using length_Suc_conv [of ws n]
1.1550 + apply (auto simp: eq_commute)
1.1551 + apply (metis append_Nil in_set_conv_decomp_first)
1.1552 + by (metis append_Cons)
1.1553 +qed simp
1.1554
1.1555 lemma not_distinct_conv_prefix:
1.1556 defines "dec as xs y ys \<equiv> y \<in> set xs \<and> distinct xs \<and> as = xs @ y # ys"
1.1557 @@ -3690,7 +3684,6 @@
1.1558 then have "Suc 0 \<notin> f ` {0 ..< length (x1 # x2 # xs)}" by auto
1.1559 then show False using f_img \<open>2 \<le> length ys\<close> by auto
1.1560 qed
1.1561 -
1.1562 obtain ys' where "ys = x1 # x2 # ys'"
1.1563 using \<open>2 \<le> length ys\<close> f_nth[of 0] f_nth[of 1]
1.1564 apply (cases ys)
1.1565 @@ -3715,10 +3708,7 @@
1.1566 using Suc0_le_f_Suc f_mono by (auto simp: f'_def mono_iff_le_Suc le_diff_iff)
1.1567 next
1.1568 have "f' ` {0 ..< length (x2 # xs)} = (\<lambda>x. f x - 1) ` {0 ..< length (x1 # x2 #xs)}"
1.1569 - apply safe
1.1570 - apply (rename_tac [!] n, case_tac [!] n)
1.1571 - apply (auto simp: f'_def \<open>f 0 = 0\<close> \<open>f (Suc 0) = Suc 0\<close> intro: rev_image_eqI)
1.1572 - done
1.1573 + by (auto simp: f'_def \<open>f 0 = 0\<close> \<open>f (Suc 0) = Suc 0\<close> image_def Bex_def less_Suc_eq_0_disj)
1.1574 also have "\<dots> = (\<lambda>x. x - 1) ` f ` {0 ..< length (x1 # x2 #xs)}"
1.1575 by (auto simp: image_comp)
1.1576 also have "\<dots> = (\<lambda>x. x - 1) ` {0 ..< length ys}"
1.1577 @@ -3885,10 +3875,12 @@
1.1578
1.1579 lemma sum_count_set:
1.1580 "set xs \<subseteq> X \<Longrightarrow> finite X \<Longrightarrow> sum (count_list xs) X = length xs"
1.1581 -apply(induction xs arbitrary: X)
1.1582 - apply simp
1.1583 -apply (simp add: sum.If_cases)
1.1584 -by (metis Diff_eq sum.remove)
1.1585 +proof (induction xs arbitrary: X)
1.1586 + case (Cons x xs)
1.1587 + then show ?case
1.1588 + apply (auto simp: sum.If_cases sum.remove)
1.1589 + by (metis (no_types) Cons.IH Cons.prems(2) diff_eq sum.remove)
1.1590 +qed simp
1.1591
1.1592
1.1593 subsubsection \<open>@{const List.extract}\<close>
1.1594 @@ -3931,23 +3923,13 @@
1.1595
1.1596 lemma in_set_remove1[simp]:
1.1597 "a \<noteq> b \<Longrightarrow> a \<in> set(remove1 b xs) = (a \<in> set xs)"
1.1598 -apply (induct xs)
1.1599 - apply auto
1.1600 -done
1.1601 + by (induct xs) auto
1.1602
1.1603 lemma set_remove1_subset: "set(remove1 x xs) \<subseteq> set xs"
1.1604 -apply(induct xs)
1.1605 - apply simp
1.1606 -apply simp
1.1607 -apply blast
1.1608 -done
1.1609 + by (induct xs) auto
1.1610
1.1611 lemma set_remove1_eq [simp]: "distinct xs \<Longrightarrow> set(remove1 x xs) = set xs - {x}"
1.1612 -apply(induct xs)
1.1613 - apply simp
1.1614 -apply simp
1.1615 -apply blast
1.1616 -done
1.1617 + by (induct xs) auto
1.1618
1.1619 lemma length_remove1:
1.1620 "length(remove1 x xs) = (if x \<in> set xs then length xs - 1 else length xs)"
1.1621 @@ -4067,9 +4049,7 @@
1.1622 text\<open>Courtesy of Matthias Daum:\<close>
1.1623 lemma append_replicate_commute:
1.1624 "replicate n x @ replicate k x = replicate k x @ replicate n x"
1.1625 -apply (simp add: replicate_add [symmetric])
1.1626 -apply (simp add: add.commute)
1.1627 -done
1.1628 + by (metis add.commute replicate_add)
1.1629
1.1630 text\<open>Courtesy of Andreas Lochbihler:\<close>
1.1631 lemma filter_replicate:
1.1632 @@ -4090,23 +4070,24 @@
1.1633
1.1634 text\<open>Courtesy of Matthias Daum (2 lemmas):\<close>
1.1635 lemma take_replicate[simp]: "take i (replicate k x) = replicate (min i k) x"
1.1636 -apply (case_tac "k \<le> i")
1.1637 - apply (simp add: min_def)
1.1638 -apply (drule not_le_imp_less)
1.1639 -apply (simp add: min_def)
1.1640 -apply (subgoal_tac "replicate k x = replicate i x @ replicate (k - i) x")
1.1641 - apply simp
1.1642 -apply (simp add: replicate_add [symmetric])
1.1643 -done
1.1644 +proof (cases "k \<le> i")
1.1645 + case True
1.1646 + then show ?thesis
1.1647 + by (simp add: min_def)
1.1648 +next
1.1649 + case False
1.1650 + then have "replicate k x = replicate i x @ replicate (k - i) x"
1.1651 + by (simp add: replicate_add [symmetric])
1.1652 + then show ?thesis
1.1653 + by (simp add: min_def)
1.1654 +qed
1.1655
1.1656 lemma drop_replicate[simp]: "drop i (replicate k x) = replicate (k-i) x"
1.1657 -apply (induct k arbitrary: i)
1.1658 - apply simp
1.1659 -apply clarsimp
1.1660 -apply (case_tac i)
1.1661 - apply simp
1.1662 -apply clarsimp
1.1663 -done
1.1664 +proof (induct k arbitrary: i)
1.1665 + case (Suc k)
1.1666 + then show ?case
1.1667 + by (simp add: drop_Cons')
1.1668 +qed simp
1.1669
1.1670 lemma set_replicate_Suc: "set (replicate (Suc n) x) = {x}"
1.1671 by (induct n) auto
1.1672 @@ -4148,11 +4129,11 @@
1.1673
1.1674 lemma replicate_eq_replicate[simp]:
1.1675 "(replicate m x = replicate n y) \<longleftrightarrow> (m=n \<and> (m\<noteq>0 \<longrightarrow> x=y))"
1.1676 -apply(induct m arbitrary: n)
1.1677 - apply simp
1.1678 -apply(induct_tac n)
1.1679 -apply auto
1.1680 -done
1.1681 +proof (induct m arbitrary: n)
1.1682 + case (Suc m n)
1.1683 + then show ?case
1.1684 + by (induct n) auto
1.1685 +qed simp
1.1686
1.1687 lemma replicate_length_filter:
1.1688 "replicate (length (filter (\<lambda>y. x = y) xs)) x = filter (\<lambda>y. x = y) xs"
1.1689 @@ -4239,10 +4220,7 @@
1.1690 lemma enumerate_simps [simp, code]:
1.1691 "enumerate n [] = []"
1.1692 "enumerate n (x # xs) = (n, x) # enumerate (Suc n) xs"
1.1693 -apply (auto simp add: enumerate_eq_zip not_le)
1.1694 -apply (cases "n < n + length xs")
1.1695 - apply (auto simp add: upt_conv_Cons)
1.1696 -done
1.1697 + by (simp_all add: enumerate_eq_zip upt_rec)
1.1698
1.1699 lemma length_enumerate [simp]:
1.1700 "length (enumerate n xs) = length xs"
1.1701 @@ -4288,12 +4266,7 @@
1.1702
1.1703 lemma enumerate_append_eq:
1.1704 "enumerate n (xs @ ys) = enumerate n xs @ enumerate (n + length xs) ys"
1.1705 -unfolding enumerate_eq_zip
1.1706 -apply auto
1.1707 - apply (subst zip_append [symmetric]) apply simp
1.1708 - apply (subst upt_add_eq_append [symmetric])
1.1709 - apply (simp_all add: ac_simps)
1.1710 -done
1.1711 + by (simp add: enumerate_eq_zip add.assoc zip_append2)
1.1712
1.1713 lemma enumerate_map_upt:
1.1714 "enumerate n (map f [n..<m]) = map (\<lambda>k. (k, f k)) [n..<m]"
1.1715 @@ -4322,27 +4295,33 @@
1.1716 by(cases xs) simp_all
1.1717
1.1718 lemma rotate_length01[simp]: "length xs \<le> 1 \<Longrightarrow> rotate n xs = xs"
1.1719 -apply(induct n)
1.1720 - apply simp
1.1721 -apply (simp add:rotate_def)
1.1722 -done
1.1723 + by (induct n) (simp_all add:rotate_def)
1.1724
1.1725 lemma rotate1_hd_tl: "xs \<noteq> [] \<Longrightarrow> rotate1 xs = tl xs @ [hd xs]"
1.1726 by (cases xs) simp_all
1.1727
1.1728 lemma rotate_drop_take:
1.1729 "rotate n xs = drop (n mod length xs) xs @ take (n mod length xs) xs"
1.1730 -apply(induct n)
1.1731 - apply simp
1.1732 -apply(simp add:rotate_def)
1.1733 -apply(cases "xs = []")
1.1734 - apply (simp)
1.1735 -apply(case_tac "n mod length xs = 0")
1.1736 - apply(simp add:mod_Suc)
1.1737 - apply(simp add: rotate1_hd_tl drop_Suc take_Suc)
1.1738 -apply(simp add:mod_Suc rotate1_hd_tl drop_Suc[symmetric] drop_tl[symmetric]
1.1739 - take_hd_drop linorder_not_le)
1.1740 -done
1.1741 +proof (induct n)
1.1742 + case (Suc n)
1.1743 + show ?case
1.1744 + proof (cases "xs = []")
1.1745 + case False
1.1746 + then show ?thesis
1.1747 + proof (cases "n mod length xs = 0")
1.1748 + case True
1.1749 + then show ?thesis
1.1750 + apply (simp add: mod_Suc)
1.1751 + by (simp add: False Suc.hyps drop_Suc rotate1_hd_tl take_Suc)
1.1752 + next
1.1753 + case False
1.1754 + with \<open>xs \<noteq> []\<close> Suc
1.1755 + show ?thesis
1.1756 + by (simp add: rotate_def mod_Suc rotate1_hd_tl drop_Suc[symmetric] drop_tl[symmetric]
1.1757 + take_hd_drop linorder_not_le)
1.1758 + qed
1.1759 + qed simp
1.1760 +qed simp
1.1761
1.1762 lemma rotate_conv_mod: "rotate n xs = rotate (n mod length xs) xs"
1.1763 by(simp add:rotate_drop_take)
1.1764 @@ -4385,11 +4364,14 @@
1.1765 by(simp add:rotate_drop_take rev_drop rev_take)
1.1766 qed force
1.1767
1.1768 -lemma hd_rotate_conv_nth: "xs \<noteq> [] \<Longrightarrow> hd(rotate n xs) = xs!(n mod length xs)"
1.1769 -apply(simp add:rotate_drop_take hd_append hd_drop_conv_nth hd_conv_nth)
1.1770 -apply(subgoal_tac "length xs \<noteq> 0")
1.1771 - prefer 2 apply simp
1.1772 -using mod_less_divisor[of "length xs" n] by arith
1.1773 +lemma hd_rotate_conv_nth:
1.1774 + assumes "xs \<noteq> []" shows "hd(rotate n xs) = xs!(n mod length xs)"
1.1775 +proof -
1.1776 + have "n mod length xs < length xs"
1.1777 + using assms by simp
1.1778 + then show ?thesis
1.1779 + by (metis drop_eq_Nil hd_append2 hd_drop_conv_nth leD rotate_drop_take)
1.1780 +qed
1.1781
1.1782 lemma rotate_append: "rotate (length l) (l @ q) = q @ l"
1.1783 by (induct l arbitrary: q) (auto simp add: rotate1_rotate_swap)
1.1784 @@ -4408,14 +4390,13 @@
1.1785 by(simp add: nths_def length_filter_conv_card cong:conj_cong)
1.1786
1.1787 lemma nths_shift_lemma_Suc:
1.1788 - "map fst (filter (%p. P(Suc(snd p))) (zip xs is)) =
1.1789 - map fst (filter (%p. P(snd p)) (zip xs (map Suc is)))"
1.1790 -apply(induct xs arbitrary: "is")
1.1791 - apply simp
1.1792 -apply (case_tac "is")
1.1793 - apply simp
1.1794 -apply simp
1.1795 -done
1.1796 + "map fst (filter (\<lambda>p. P(Suc(snd p))) (zip xs is)) =
1.1797 + map fst (filter (\<lambda>p. P(snd p)) (zip xs (map Suc is)))"
1.1798 +proof (induct xs arbitrary: "is")
1.1799 + case (Cons x xs "is")
1.1800 + show ?case
1.1801 + by (cases "is") (auto simp add: Cons.hyps)
1.1802 +qed simp
1.1803
1.1804 lemma nths_shift_lemma:
1.1805 "map fst (filter (\<lambda>p. snd p \<in> A) (zip xs [i..<i + length xs])) =
1.1806 @@ -4424,26 +4405,26 @@
1.1807
1.1808 lemma nths_append:
1.1809 "nths (l @ l') A = nths l A @ nths l' {j. j + length l \<in> A}"
1.1810 -apply (unfold nths_def)
1.1811 -apply (induct l' rule: rev_induct, simp)
1.1812 -apply (simp add: upt_add_eq_append[of 0] nths_shift_lemma)
1.1813 -apply (simp add: add.commute)
1.1814 -done
1.1815 + unfolding nths_def
1.1816 +proof (induct l' rule: rev_induct)
1.1817 + case (snoc x xs)
1.1818 + then show ?case
1.1819 + by (simp add: upt_add_eq_append[of 0] nths_shift_lemma add.commute)
1.1820 +qed auto
1.1821
1.1822 lemma nths_Cons:
1.1823 "nths (x # l) A = (if 0 \<in> A then [x] else []) @ nths l {j. Suc j \<in> A}"
1.1824 -apply (induct l rule: rev_induct)
1.1825 - apply (simp add: nths_def)
1.1826 -apply (simp del: append_Cons add: append_Cons[symmetric] nths_append)
1.1827 -done
1.1828 +proof (induct l rule: rev_induct)
1.1829 + case (snoc x xs)
1.1830 + then show ?case
1.1831 + by (simp flip: append_Cons add: nths_append)
1.1832 +qed (auto simp: nths_def)
1.1833
1.1834 lemma nths_map: "nths (map f xs) I = map f (nths xs I)"
1.1835 by(induction xs arbitrary: I) (simp_all add: nths_Cons)
1.1836
1.1837 lemma set_nths: "set(nths xs I) = {xs!i|i. i<size xs \<and> i \<in> I}"
1.1838 -apply(induct xs arbitrary: I)
1.1839 -apply(auto simp: nths_Cons nth_Cons split:nat.split dest!: gr0_implies_Suc)
1.1840 -done
1.1841 + by (induct xs arbitrary: I) (auto simp: nths_Cons nth_Cons split:nat.split dest!: gr0_implies_Suc)
1.1842
1.1843 lemma set_nths_subset: "set(nths xs I) \<subseteq> set xs"
1.1844 by(auto simp add:set_nths)
1.1845 @@ -4792,8 +4773,7 @@
1.1846 show ?thesis
1.1847 unfolding transpose.simps \<open>i = Suc j\<close> nth_Cons_Suc "3.hyps"[OF j_less]
1.1848 apply (auto simp: transpose_aux_filter_tail filter_map comp_def length_transpose * ** *** XS_def[symmetric])
1.1849 - apply (rule list.exhaust)
1.1850 - by auto
1.1851 + by (simp add: nth_tl)
1.1852 qed
1.1853 qed simp_all
1.1854
1.1855 @@ -4917,11 +4897,7 @@
1.1856 qed
1.1857
1.1858 lemma infinite_UNIV_listI: "\<not> finite(UNIV::'a list set)"
1.1859 -apply (rule notI)
1.1860 -apply (drule finite_maxlen)
1.1861 -apply clarsimp
1.1862 -apply (erule_tac x = "replicate n undefined" in allE)
1.1863 -by simp
1.1864 + by (metis UNIV_I finite_maxlen length_replicate less_irrefl)
1.1865
1.1866
1.1867 subsection \<open>Sorting\<close>
1.1868 @@ -4936,10 +4912,11 @@
1.1869 by(simp)
1.1870
1.1871 lemma sorted_wrt2: "transp P \<Longrightarrow> sorted_wrt P (x # y # zs) = (P x y \<and> sorted_wrt P (y # zs))"
1.1872 -apply(induction zs arbitrary: x y)
1.1873 -apply(auto dest: transpD)
1.1874 -apply (meson transpD)
1.1875 -done
1.1876 +proof (induction zs arbitrary: x y)
1.1877 + case (Cons z zs)
1.1878 + then show ?case
1.1879 + by simp (meson transpD)+
1.1880 +qed auto
1.1881
1.1882 lemmas sorted_wrt2_simps = sorted_wrt1 sorted_wrt2
1.1883
1.1884 @@ -4969,9 +4946,7 @@
1.1885
1.1886 lemma sorted_wrt_iff_nth_less:
1.1887 "sorted_wrt P xs = (\<forall>i j. i < j \<longrightarrow> j < length xs \<longrightarrow> P (xs ! i) (xs ! j))"
1.1888 -apply(induction xs)
1.1889 -apply(auto simp add: in_set_conv_nth Ball_def nth_Cons split: nat.split)
1.1890 -done
1.1891 + by (induction xs) (auto simp add: in_set_conv_nth Ball_def nth_Cons split: nat.split)
1.1892
1.1893 lemma sorted_wrt_nth_less:
1.1894 "\<lbrakk> sorted_wrt P xs; i < j; j < length xs \<rbrakk> \<Longrightarrow> P (xs ! i) (xs ! j)"
1.1895 @@ -4981,10 +4956,11 @@
1.1896 by(induction n) (auto simp: sorted_wrt_append)
1.1897
1.1898 lemma sorted_wrt_upto[simp]: "sorted_wrt (<) [i..j]"
1.1899 -apply(induction i j rule: upto.induct)
1.1900 -apply(subst upto.simps)
1.1901 -apply(simp)
1.1902 -done
1.1903 +proof(induct i j rule:upto.induct)
1.1904 + case (1 i j)
1.1905 + from this show ?case
1.1906 + unfolding upto.simps[of i j] by auto
1.1907 +qed
1.1908
1.1909 text \<open>Each element is greater or equal to its index:\<close>
1.1910
1.1911 @@ -5313,12 +5289,13 @@
1.1912 qed
1.1913
1.1914 lemma finite_sorted_distinct_unique:
1.1915 -shows "finite A \<Longrightarrow> \<exists>!xs. set xs = A \<and> sorted xs \<and> distinct xs"
1.1916 -apply(drule finite_distinct_list)
1.1917 -apply clarify
1.1918 -apply(rule_tac a="sort xs" in ex1I)
1.1919 -apply (auto simp: sorted_distinct_set_unique)
1.1920 -done
1.1921 + assumes "finite A" shows "\<exists>!xs. set xs = A \<and> sorted xs \<and> distinct xs"
1.1922 +proof -
1.1923 + obtain xs where "distinct xs" "A = set xs"
1.1924 + using finite_distinct_list [OF assms] by metis
1.1925 + then show ?thesis
1.1926 + by (rule_tac a="sort xs" in ex1I) (auto simp: sorted_distinct_set_unique)
1.1927 +qed
1.1928
1.1929 lemma insort_insert_key_triv:
1.1930 "f x \<in> f ` set xs \<Longrightarrow> insort_insert_key f x xs = xs"
1.1931 @@ -5741,12 +5718,12 @@
1.1932 | insert: "ListMem x xs \<Longrightarrow> ListMem x (y # xs)"
1.1933
1.1934 lemma ListMem_iff: "(ListMem x xs) = (x \<in> set xs)"
1.1935 -apply (rule iffI)
1.1936 - apply (induct set: ListMem)
1.1937 - apply auto
1.1938 -apply (induct xs)
1.1939 - apply (auto intro: ListMem.intros)
1.1940 -done
1.1941 +proof
1.1942 + show "ListMem x xs \<Longrightarrow> x \<in> set xs"
1.1943 + by (induct set: ListMem) auto
1.1944 + show "x \<in> set xs \<Longrightarrow> ListMem x xs"
1.1945 + by (induct xs) (auto intro: ListMem.intros)
1.1946 +qed
1.1947
1.1948
1.1949 subsubsection \<open>Lists as Cartesian products\<close>
1.1950 @@ -5789,20 +5766,23 @@
1.1951 "lenlex r = inv_image (less_than <*lex*> lex r) (\<lambda>xs. (length xs, xs))"
1.1952 \<comment> \<open>Compares lists by their length and then lexicographically\<close>
1.1953
1.1954 -lemma wf_lexn: "wf r ==> wf (lexn r n)"
1.1955 -apply (induct n, simp, simp)
1.1956 -apply(rule wf_subset)
1.1957 - prefer 2 apply (rule Int_lower1)
1.1958 -apply(rule wf_map_prod_image)
1.1959 - prefer 2 apply (rule inj_onI, auto)
1.1960 -done
1.1961 +lemma wf_lexn: assumes "wf r" shows "wf (lexn r n)"
1.1962 +proof (induct n)
1.1963 + case (Suc n)
1.1964 + have inj: "inj (\<lambda>(x, xs). x # xs)"
1.1965 + using assms by (auto simp: inj_on_def)
1.1966 + have wf: "wf (map_prod (\<lambda>(x, xs). x # xs) (\<lambda>(x, xs). x # xs) ` (r <*lex*> lexn r n))"
1.1967 + by (simp add: Suc.hyps assms wf_lex_prod wf_map_prod_image [OF _ inj])
1.1968 + then show ?case
1.1969 + by (rule wf_subset) auto
1.1970 +qed auto
1.1971
1.1972 lemma lexn_length:
1.1973 "(xs, ys) \<in> lexn r n \<Longrightarrow> length xs = n \<and> length ys = n"
1.1974 by (induct n arbitrary: xs ys) auto
1.1975
1.1976 lemma wf_lex [intro!]: "wf r ==> wf (lex r)"
1.1977 - apply (unfold lex_def)
1.1978 + unfolding lex_def
1.1979 apply (rule wf_UN)
1.1980 apply (simp add: wf_lexn)
1.1981 apply (metis DomainE Int_emptyI RangeE lexn_length)
1.1982 @@ -5905,14 +5885,12 @@
1.1983 by (simp add:lex_conv)
1.1984
1.1985 lemma Cons_in_lex [simp]:
1.1986 - "((x # xs, y # ys) \<in> lex r) =
1.1987 + "((x # xs, y # ys) \<in> lex r) =
1.1988 ((x, y) \<in> r \<and> length xs = length ys \<or> x = y \<and> (xs, ys) \<in> lex r)"
1.1989 -apply (simp add: lex_conv)
1.1990 -apply (rule iffI)
1.1991 - prefer 2 apply (blast intro: Cons_eq_appendI, clarify)
1.1992 -apply (case_tac xys, simp, simp)
1.1993 -apply blast
1.1994 - done
1.1995 + apply (simp add: lex_conv)
1.1996 + apply (rule iffI)
1.1997 + prefer 2 apply (blast intro: Cons_eq_appendI, clarify)
1.1998 + by (metis hd_append append_Nil list.sel(1) list.sel(3) tl_append2)
1.1999
1.2000 lemma lex_append_rightI:
1.2001 "(xs, ys) \<in> lex r \<Longrightarrow> length vs = length us \<Longrightarrow> (xs @ us, ys @ vs) \<in> lex r"
1.2002 @@ -5960,12 +5938,13 @@
1.2003 by (unfold lexord_def, induct_tac x, auto)
1.2004
1.2005 lemma lexord_cons_cons[simp]:
1.2006 - "((a # x, b # y) \<in> lexord r) = ((a,b)\<in> r \<or> (a = b \<and> (x,y)\<in> lexord r))"
1.2007 - apply (unfold lexord_def, safe, simp_all)
1.2008 - apply (case_tac u, simp, simp)
1.2009 - apply (case_tac u, simp, clarsimp, blast, blast, clarsimp)
1.2010 - apply (erule_tac x="b # u" in allE)
1.2011 - by force
1.2012 + "((a # x, b # y) \<in> lexord r) = ((a,b)\<in> r \<or> (a = b \<and> (x,y)\<in> lexord r))"
1.2013 + unfolding lexord_def
1.2014 + apply (safe, simp_all)
1.2015 + apply (metis hd_append list.sel(1))
1.2016 + apply (metis hd_append list.sel(1) list.sel(3) tl_append2)
1.2017 + apply blast
1.2018 + by (meson Cons_eq_appendI)
1.2019
1.2020 lemmas lexord_simps = lexord_Nil_left lexord_Nil_right lexord_cons_cons
1.2021
1.2022 @@ -5989,24 +5968,17 @@
1.2023 (\<exists>i. i < min(length x)(length y) \<and> take i x = take i y \<and> (x!i,y!i) \<in> r))"
1.2024 apply (unfold lexord_def Let_def, clarsimp)
1.2025 apply (rule_tac f = "(% a b. a \<or> b)" in arg_cong2)
1.2026 + apply (metis Cons_nth_drop_Suc append_eq_conv_conj drop_all list.simps(3) not_le)
1.2027 apply auto
1.2028 - apply (rule_tac x="hd (drop (length x) y)" in exI)
1.2029 - apply (rule_tac x="tl (drop (length x) y)" in exI)
1.2030 - apply (erule subst, simp add: min_def)
1.2031 apply (rule_tac x ="length u" in exI, simp)
1.2032 - apply (rule_tac x ="take i x" in exI)
1.2033 - apply (rule_tac x ="x ! i" in exI)
1.2034 - apply (rule_tac x ="y ! i" in exI, safe)
1.2035 - apply (rule_tac x="drop (Suc i) x" in exI)
1.2036 - apply (drule sym, simp add: Cons_nth_drop_Suc)
1.2037 - apply (rule_tac x="drop (Suc i) y" in exI)
1.2038 - by (simp add: Cons_nth_drop_Suc)
1.2039 + by (metis id_take_nth_drop)
1.2040
1.2041 \<comment> \<open>lexord is extension of partial ordering List.lex\<close>
1.2042 lemma lexord_lex: "(x,y) \<in> lex r = ((x,y) \<in> lexord r \<and> length x = length y)"
1.2043 - apply (rule_tac x = y in spec)
1.2044 - apply (induct_tac x, clarsimp)
1.2045 - by (clarify, case_tac x, simp, force)
1.2046 +proof (induction x arbitrary: y)
1.2047 + case (Cons a x y) then show ?case
1.2048 + by (cases y) (force+)
1.2049 +qed auto
1.2050
lemma lexord_irreflexive: "\<forall>x. (x,x) \<notin> r \<Longrightarrow> (xs,xs) \<notin> lexord r"
by (induct xs) auto
@@ -6049,12 +6021,15 @@
lemma lexord_transI: "trans r \<Longrightarrow> trans (lexord r)"
by (rule transI, drule lexord_trans, blast)

-lemma lexord_linear: "(\<forall>a b. (a,b)\<in> r \<or> a = b \<or> (b,a) \<in> r) \<Longrightarrow> (x,y) \<in> lexord r \<or> x = y \<or> (y,x) \<in> lexord r"
- apply (rule_tac x = y in spec)
- apply (induct_tac x, rule allI)
- apply (case_tac x, simp, simp)
- apply (rule allI, case_tac x, simp, simp)
- by blast
+lemma lexord_linear: "(\<forall>a b. (a,b) \<in> r \<or> a = b \<or> (b,a) \<in> r) \<Longrightarrow> (x,y) \<in> lexord r \<or> x = y \<or> (y,x) \<in> lexord r"
+proof (induction x arbitrary: y)
+ case Nil
+ then show ?case
+ by (metis lexord_Nil_left list.exhaust)
+next
+ case (Cons a x y) then show ?case
+ by (cases y) (force+)
+qed

lemma lexord_irrefl:
"irrefl R \<Longrightarrow> irrefl (lexord R)"
@@ -6220,27 +6195,17 @@
lemma lexordp_eq_trans:
assumes "lexordp_eq xs ys" and "lexordp_eq ys zs"
shows "lexordp_eq xs zs"
-using assms
-apply(induct arbitrary: zs)
-apply(case_tac [2-3] zs)
-apply auto
-done
+ using assms
+ by (induct arbitrary: zs) (case_tac zs; auto)+

lemma lexordp_trans:
assumes "lexordp xs ys" "lexordp ys zs"
shows "lexordp xs zs"
-using assms
-apply(induct arbitrary: zs)
-apply(case_tac [2-3] zs)
-apply auto
-done
+ using assms
+ by (induct arbitrary: zs) (case_tac zs; auto)+

lemma lexordp_linear: "lexordp xs ys \<or> xs = ys \<or> lexordp ys xs"
-proof(induct xs arbitrary: ys)
- case Nil thus ?case by(cases ys) simp_all
-next
- case Cons thus ?case by(cases ys) auto
-qed
+ by(induct xs arbitrary: ys; case_tac ys; fastforce)

lemma lexordp_conv_lexordp_eq: "lexordp xs ys \<longleftrightarrow> lexordp_eq xs ys \<and> \<not> lexordp_eq ys xs"
(is "?lhs \<longleftrightarrow> ?rhs")
@@ -6258,13 +6223,11 @@
by(auto simp add: lexordp_conv_lexordp_eq lexordp_eq_refl dest: lexordp_eq_antisym)

lemma lexordp_eq_linear: "lexordp_eq xs ys \<or> lexordp_eq ys xs"
-apply(induct xs arbitrary: ys)
-apply(case_tac [!] ys)
-apply auto
-done
+ by (induct xs arbitrary: ys) (case_tac ys; auto)+

lemma lexordp_linorder: "class.linorder lexordp_eq lexordp"
-by unfold_locales(auto simp add: lexordp_conv_lexordp_eq lexordp_eq_refl lexordp_eq_antisym intro: lexordp_eq_trans del: disjCI intro: lexordp_eq_linear)
+ by unfold_locales
+ (auto simp add: lexordp_conv_lexordp_eq lexordp_eq_refl lexordp_eq_antisym intro: lexordp_eq_trans del: disjCI intro: lexordp_eq_linear)

end

@@ -6402,17 +6365,20 @@
apply (induct arbitrary: xs set: Wellfounded.acc)
apply (erule thin_rl)
apply (erule acc_induct)
-apply (rule accI)
+ apply (rule accI)
apply (blast)
done

lemma lists_accD: "xs \<in> lists (Wellfounded.acc r) \<Longrightarrow> xs \<in> Wellfounded.acc (listrel1 r)"
-apply (induct set: lists)
- apply (rule accI)
- apply simp
-apply (rule accI)
-apply (fast dest: acc_downward)
-done
+proof (induct set: lists)
+ case Nil
+ then show ?case
+ by (meson acc.intros not_listrel1_Nil)
+next
+ case (Cons a l)
+ then show ?case
+ by blast
+qed

lemma lists_accI: "xs \<in> Wellfounded.acc (listrel1 r) \<Longrightarrow> xs \<in> lists (Wellfounded.acc r)"
apply (induct set: Wellfounded.acc)
@@ -6459,10 +6425,7 @@


lemma listrel_mono: "r \<subseteq> s \<Longrightarrow> listrel r \<subseteq> listrel s"
-apply clarify
-apply (erule listrel.induct)
-apply (blast intro: listrel.intros)+
-done
+ by (meson listrel_iff_nth subrelI subset_eq)

lemma listrel_subset: "r \<subseteq> A \<times> A \<Longrightarrow> listrel r \<subseteq> lists A \<times> lists A"
apply clarify
@@ -6477,10 +6440,7 @@
done

lemma listrel_sym: "sym r \<Longrightarrow> sym (listrel r)"
-apply (auto simp add: sym_def)
-apply (erule listrel.induct)
-apply (blast intro: listrel.intros)+
-done
+ by (simp add: listrel_iff_nth sym_def)

lemma listrel_trans: "trans r \<Longrightarrow> trans (listrel r)"
apply (simp add: trans_def)
@@ -7365,10 +7325,7 @@
"(rel_set A ===> rel_set (list_all2 A) ===> rel_set (list_all2 A))
set_Cons set_Cons"
unfolding rel_fun_def rel_set_def set_Cons_def
- apply safe
- apply (simp add: list_all2_Cons1, fast)
- apply (simp add: list_all2_Cons2, fast)
- done
+ by (fastforce simp add: list_all2_Cons1 list_all2_Cons2)

lemma listset_transfer [transfer_rule]:
"(list_all2 (rel_set A) ===> rel_set (list_all2 A)) listset listset"
Kodeclik Blog
Empty Dictionaries in Python
Dictionaries are a very useful data structure in Python. They are a set of key-value pairs so specific values can be associated with specific keys. For instance, we can have a dictionary of food products in a grocery store (keys) and their prices (values).
An empty dictionary is one where there are no key-value pairs. This usually happens when the dictionary is initially created.
Creating an empty dictionary with {}
The easiest way to create an empty dictionary is as follows.
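mydictionary = {}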
To confirm that this is really an empty dictionary, we can try to find its length:
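print(len(mydictionary))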
The output is:
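0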
You can try to print mydictionary as follows:
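print(mydictionary)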
This gives:
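{}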
To confirm that mydictionary is really a dictionary, we can try to inspect its type:
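print(type(mydictionary))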
The output is:
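<class 'dict'>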
as expected.
Creating an empty dictionary with dict()
A second way to create an empty dictionary is using the dict() constructor without any arguments.
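mydictionary = dict()
print(mydictionary)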
The output is:
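{}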
Testing if a dictionary is empty using bool()
Finally, we will find it useful to test if a given dictionary is empty.
A dictionary evaluated in a boolean context returns True if it is non-empty and False otherwise. So to test whether a dictionary is empty we can negate this result. Below is a function for this purpose:
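A sketch of such a function (the name is_dictionary_empty is illustrative):
def is_dictionary_empty(d):
    # bool(d) is False for an empty dictionary, so negate it
    return not bool(d)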
When we use this function like so:
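mydictionary = {}
print(is_dictionary_empty(mydictionary))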
we get the expected output:
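True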
Testing if a dictionary is empty without bool()
In fact, we do not really have to use the bool() function. If the dictionary has at least one key, it evaluates to True, and to False otherwise. So we can modify the function above as:
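def is_dictionary_empty(d):
    # the dictionary itself is falsy when empty, so bool() is unnecessary
    return not d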
Again trying this function out:
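mydictionary = {}
print(is_dictionary_empty(mydictionary))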
gives:
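True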
In this blogpost we have learnt about creating and testing for empty dictionaries. Learn more about dictionaries in our posts about length of dictionary, how to check if two dictionaries are equal, and how to initialize a dictionary in Python.
Interested in more things Python? See our blogpost on Python's enumerate() capability. Also if you like Python+math content, see our blogpost on Magic Squares. Finally, master the Python print function!
Want to learn Python with us? Sign up for 1:1 or small group classes.
The VarDumper Component (dumping variables)
The VarDumper component provides mechanisms for extracting the state out of any PHP variable. Built on top of them, it provides a better dump() function that you can use instead of var_dump.
Installation
$ composer require symfony/var-dumper --dev
Alternatively, you can clone the https://github.com/symfony/var-dumper repository.
Note
If you install this component outside of a Symfony application, you must require the vendor/autoload.php file in your code to enable the class autoloading mechanism provided by Composer. Read this article for more details.
Note
If you use it inside a Symfony application, make sure the DebugBundle is installed (or run composer require symfony/debug-bundle to install it).
The dump() Function
The VarDumper component creates a global dump() function that you can use instead of, e.g., var_dump. By using it, you gain:
• Per-object and per-resource specialized views, e.g. to filter out Doctrine internals when dumping a single proxy entity, or to get more insight into opened files with stream_get_meta_data;
• Configurable output formats: HTML or colored command line output;
• The ability to dump internal references, either soft ones (objects or resources) or hard ones (=& on arrays or object properties). Repeated occurrences of the same object/array/resource won't appear again and again; moreover, you will be able to inspect the reference structure of your data;
• The ability to operate in the context of an output buffering handler.
For example:
require __DIR__.'/vendor/autoload.php';
// create a variable, which could be anything!
$someVar = ...;
dump($someVar);
// dump() returns the passed value, so you can dump an object and keep using it
dump($someObject)->someMethod();
By default, the output format and destination are selected based on your current PHP SAPI:
• On the command line (CLI SAPI), the output is written to STDOUT. This can be surprising to some, since it bypasses PHP's output buffering mechanism;
• On other SAPIs, dumps are written as HTML in the regular output.
Note
If you want to catch the dump output as a string, please read the advanced documentation, which contains such examples. You'll also learn how to change the format or redirect the output to wherever you want.
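For instance, a minimal sketch of capturing a dump as a string, using the component's VarCloner and CliDumper classes (passing true as the second argument of dump() makes it return the output instead of printing it):
use Symfony\Component\VarDumper\Cloner\VarCloner;
use Symfony\Component\VarDumper\Dumper\CliDumper;
$cloner = new VarCloner();
$dumper = new CliDumper();
// $someVar is whatever variable you want to inspect
$output = $dumper->dump($cloner->cloneVar($someVar), true);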
Tip
To make the dump() function always available while running any PHP code, you can install it on your computer globally:
1. Run composer global require symfony/var-dumper;
2. Add auto_prepend_file = ${HOME}/.composer/vendor/autoload.php to your php.ini file;
3. From time to time, run composer global update symfony/var-dumper to have the latest bug fixes.
Tip
The VarDumper component also provides a dd() ("dump and die") function. This function displays the variables using dump() and immediately ends the script's execution (using PHP's exit function).
New in version 4.1: The dd() method was introduced in Symfony 4.1.
The Dump Server
New in version 4.1: The dump server was introduced in Symfony 4.1.
The dump() function outputs its contents in the same browser window or console terminal as your own application. Sometimes mixing the real output with the debug output can be confusing. That's why this component provides a server to collect all the dumped data.
Start the server with the server:dump command; from then on, each call to dump() won't display the dumped data in the output stream but will send it to that server instead, which outputs it in its own console or in an HTML file:
# displays the dumped data in the console:
$ ./bin/console server:dump
[OK] Server listening on tcp://0.0.0.0:9912
# stores the dumped data in a file using the HTML format:
$ ./bin/console server:dump --format=html > dump.html
Inside a Symfony application, the output of the dump server is configured with the dump_destination option of the debug package.
DebugBundle and Twig Integration
The DebugBundle allows greater integration of this component into Symfony applications.
Since generating output (even debug output) in the controller or in the model of your application may just break it (e.g. by sending HTTP headers or corrupting your view), the bundle configures the dump() function so that variables are dumped in the web debug toolbar.
But if the toolbar cannot be displayed because, for example, you called die()/exit()/dd() or a fatal error occurred, then dumps are written to the regular output.
In a Twig template, two constructs are available for dumping a variable. Choosing between them is mostly a matter of personal taste, still:
• {% dump foo.bar %} is the way to go when the original template output shall not be modified: variables are not dumped inline, but in the web debug toolbar;
• on the contrary, {{ dump(foo.bar) }} dumps inline and thus may or may not be suited to your use case (e.g. you shouldn't use it in an HTML attribute or a <script> tag).
This behavior can be changed by configuring the debug.dump_destination option. Read more about it and other options in the DebugBundle configuration reference.
Tip
If the dumped contents are complex, consider using the local search box to look for specific variables or values. First, click anywhere on the dumped contents, then press Ctrl+F or Cmd+F to make the local search box appear. All the common shortcuts for navigating the search results are supported (Ctrl+G or Cmd+G, F3, etc.). When finished, press Esc to hide the box again.
Using the VarDumper Component in your PHPUnit Test Suite
The VarDumper component provides a trait that can help writing some of your tests for PHPUnit.
This will provide you with two new assertions:
assertDumpEquals()
verifies that the dump of the variable given as the second argument matches the expected dump provided as the first argument.
assertDumpMatchesFormat()
is the same as the previous method, but accepts placeholders in the expected dump, based on the assertStringMatchesFormat() method provided by PHPUnit.
Example:
use PHPUnit\Framework\TestCase;
class ExampleTest extends TestCase
{
use \Symfony\Component\VarDumper\Test\VarDumperTestTrait;
public function testWithDumpEquals()
{
$testedVar = array(123, 'foo');
$expectedDump = <<<EOTXT
array:2 [
0 => 123
1 => "foo"
]
EOTXT;
$this->assertDumpEquals($expectedDump, $testedVar);
}
}
// if the first argument is a string, it must be the whole expected dump
$this->assertDumpEquals($expectedDump, $testedVar);
// if the first argument is not a string, assertDumpEquals() dumps it
// and compares it with the dump of the second argument
$this->assertDumpEquals($testedVar, $testedVar);
}
}
New in version 4.1: The ability to pass non-string variables as the first argument of assertDumpEquals() was introduced in Symfony 4.1.
Dump Examples and Output
For simple variables, reading the output should be straightforward. Here are some examples, each showing first a variable defined in PHP and then its dump representation:
$var = array(
'a simple string' => "in an array of 5 elements",
'a float' => 1.0,
'an integer' => 1,
'a boolean' => true,
'an empty array' => array(),
);
dump($var);
../_images/01-simple.png
Note
The gray arrow is a toggle button for hiding/showing children of nested structures.
$var = "This is a multi-line string.\n";
$var .= "Hovering a string shows its length.\n";
$var .= "The length of UTF-8 strings is counted in terms of UTF-8 characters.\n";
$var .= "The length of non-UTF-8 strings is counted in octets.\n";
$var .= "Because of this `\xE9` octet (\\xE9),\n";
$var .= "this string is not UTF-8 valid, hence the `b` prefix.\n";
dump($var);
../_images/02-multi-line-str.png
class PropertyExample
{
public $publicProperty = 'The `+` prefix denotes public properties,';
protected $protectedProperty = '`#` protected ones and `-` private ones.';
private $privateProperty = 'Hovering a property shows a reminder.';
}
$var = new PropertyExample();
dump($var);
../_images/03-object.png
Note
#14 is the internal object handle. It allows comparing two consecutive dumps of the same object.
class DynamicPropertyExample
{
public $declaredProperty = 'This property is declared in the class definition';
}
$var = new DynamicPropertyExample();
$var->undeclaredProperty = 'Runtime added dynamic properties have `"` around their name.';
dump($var);
../_images/04-dynamic-property.png
class ReferenceExample
{
public $info = "Circular and sibling references are displayed as `#number`.\n Hovering them highlights all instances in the same dump.\n";
}
$var = new ReferenceExample();
$var->aCircularReference = $var;
dump($var);
../_images/05-soft-ref.png
$var = new \ErrorException(
"Для некоторых объектов свойства имеют специальные значения,\n"
."которые лучше всего отображаются в виде ограничений, вроде\n"
."`severity` ниже. Наведение на них отображает значение (`2`).\n",
0,
E_WARNING
);
dump($var);
../_images/06-constants.png
$var = array();
$var[0] = 1;
$var[1] =& $var[0];
$var[1] += 1;
$var[2] = array("Hard references (circular or sibling)");
$var[3] =& $var[2];
$var[3][] = "are dumped using `&number` prefixes.";
dump($var);
../_images/07-hard-ref.png
$var = new \ArrayObject();
$var[] = "Некоторые источники и специальные объекты, как текущий";
$var[] = "иногда лучше всего представить, используя виртуальные";
$var[] = "свойства, описывающие их внутреннее сстояние.";
dump($var);
../_images/08-virtual-property.png
$var = new AcmeController(
"Когда сброс превышает максимальный лимит объектов,\n"
."или когда встречаются специальные объекты,\n"
."дети могут быть заменены эллипсами и\n"
."опционально иметь число, которое сообщает как\n"
."и сколько было удалено; В этом случае `9`.\n"
);
dump($var);
../_images/09-cut.png
This documentation is a translation of the official Symfony documentation and is provided under the free CC BY-SA 3.0 license.
What is 307 percent of 353 - step by step solution
Simple and best-practice solution for 307% of 353. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution for your homework.
If it's not what you are looking for, type your own values into the calculator fields and you will get the solution.
To get the solution we are looking for, we need to point out what we know.
1. We assume that the number 353 is 100%, because it is the output value of the task.
2. We assume that x is the value we are looking for.
3. Since 353 is 100%, we can write it down as 353=100%.
4. We know that x is 307% of the output value, so we can write it down as x=307%.
5. Now we have two simple equations:
1) 353=100%
2) x=307%
where the left sides of both equations have the same units and both right sides have the same units, so we can divide one equation by the other:
353/x=100%/307%
6. Now we just have to solve the simple equation, and we will get the solution we are looking for.
7. Solution for what is 307% of 353
353/x=100/307
(353/x)*x=(100/307)*x - we multiply both sides of the equation by x
353=0.3257328990228*x - since 100/307=0.3257328990228; we then divide both sides of the equation by 0.3257328990228 to get x
353/0.3257328990228=x
1083.71=x
x=1083.71
now we have:
307% of 353=1083.71
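As a quick check: 307% of 353 = 3.07 * 353 = 1083.71, which confirms the result.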
See similar equations:
| What is 51.5 percent of 22.4 - step by step solution | | 75.24 is what percent of 237 - step by step solution | | 18.75 is what percent of 450000 - step by step solution | | What is 3 percent of 33.8887 - step by step solution | | 205.97 is what percent of 230 - step by step solution | | 229.8 is what percent of 280.2 - step by step solution | | 1198 is what percent of 23000 - step by step solution | | 2.016 is what percent of 16 - step by step solution | | What is 229.8 percent of 280.2 - step by step solution | | 21000 is what percent of 13052 - step by step solution | | What is 1.6 percent of 40.9 - step by step solution | | 315000 is what percent of 83000000 - step by step solution | | What is 56.66 percent of 800 - step by step solution | | What is 1 percent of 39.745 - step by step solution | | 700000 is what percent of 660000000 - step by step solution | | 64 is what percent of 180000 - step by step solution | | What is 49.6 percent of 7660000000 - step by step solution | | 299 is what percent of 648 - step by step solution | | What is 10.9 percent of 3000000 - step by step solution | | 630 is what percent of 4000000 - step by step solution | | 1598 is what percent of 2331 - step by step solution | | What is 220300 percent of 330000000 - step by step solution | | What is 0.014 percent of 35000000 - step by step solution | | 684.95 is what percent of 721 - step by step solution | | 52360 is what percent of 280000 - step by step solution | | What is 0.014 percent of 9000000000 - step by step solution | | What is 299.99 percent of 119.44 - step by step solution | | What is 76.8 percent of 4.32 - step by step solution | | What is 1450 percent of 3.8 - step by step solution | | 35400 is what percent of 6500000 - step by step solution | | What is 2.5 percent of 25505000 - step by step solution | | What is 35300 percent of 650000 - step by step solution |
Instructions
In the following questions, each question is followed by data in the form of two statements labelled I and II. You must decide whether the data given in the statements are sufficient to answer the question.
Question 4
What is the number of elements in the set $$A \cup B$$?
I. Number of elements in $$A \cap B = 5$$
II. Number of elements of B which are not in A is 17
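Using the identity $$|A \cup B| = |A \setminus B| + |A \cap B| + |B \setminus A|$$: statement I gives $$|A \cap B| = 5$$ and statement II gives $$|B \setminus A| = 17$$, but neither statement gives the number of elements of A that are not in B, so even both statements together are not sufficient to determine the number of elements in $$A \cup B$$.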
The Data Detective: Ten Easy Rules to Make Sense of Statistics
Tim Harford
Riverhead Books, New York, 2021, 336 pp., $21.49
In our everyday lives we are constantly confronted with a myriad of data—from health news to political opinion polls—presented to us as hard facts based on statistics. In such circumstances, the natural tendency is to deduce that if it is based on statistics, it must be true. But how many times have we encountered divergent statistics on the same issue? How does one know if the facts presented are true?
This is where Tim Harford, in his latest book, The Data Detective, makes an important contribution by presenting, in an intuitive way, basic rules that can help evaluate whether the facts labeled as true statistics make sense. The book is nicely designed for a broad audience, presenting a series of captivating and amusing stories that illustrate how statistics can mislead as well as examples of serious statistical studies that have changed our knowledge and behavior—for instance, the effects of smoking on health. While avoiding the specialized jargon and technical aspects of the statistical profession, the author argues convincingly, based on his experience and research, that statistics should be seen as a tool that can help us understand the world in which we live, a bit like the telescope is to astronomy, to borrow his analogy.
Statistics should be seen as a tool that can help us understand the world in which we live.
Building on well-researched examples across domains and time, Harford reminds us of key steps to take when analyzing a series of statistics, including maintaining some distance so as not to be influenced by our biases and personal experiences, which may not be representative; pausing and reflecting before coming to a conclusion; and, as a detective would, asking simple questions (What are we trying to measure? What is the sample or universe used?) to get a sense of context and perspective. His examples of the different measures of income and wealth, poverty, health, and murder rates—as well as prediction of election results—are telling, and we can be seriously misled if we don’t scrutinize carefully the data we regularly encounter.
The book also delves into new areas such as big data and computer algorithms, presenting some of the benefits of these new sources of large administrative data sets but reminding us also of their limitations and potential biases. Harford’s book illustrates with convincing examples the importance of data transparency, vigorous analysis, and the protection of the independence of statistical agencies, which he rightly calls “nations’ statistical bedrock.”
The Data Detective comes at the right time: we face an onslaught of statistics on critical issues such as the consequences of climate change, the COVID-19 pandemic, the economic downturn, and Brexit, just for starters. This well-documented book is a must for anyone who is curious about how to make sense of all the information about this complex world in which we live.
LOUIS MARC DUCHARME, chief statistician, data officer, and director, IMF Statistics Department
Opinions expressed in articles and other materials are those of the authors; they do not necessarily represent the views of the IMF and its Executive Board, or IMF policy.
Structures, Classes and Interfaces
Structures
A structure is a set of elements of any type (except the void type). Thus, a structure groups logically related data of different types.
Structure Declaration
The structure data type is defined as follows:
struct structure_name
{
elements_description
};
The structure name cannot be used as an identifier (a variable or function name). Note that in MQL5 structure elements follow one another directly, with no alignment. In C++, such an instruction is given to the compiler using
#pragma pack(1)
If you want to use a different alignment in the structure, use auxiliary "padding" members of the appropriate sizes.
Example:
struct trade_settings
{
uchar slippage; // value of the allowed slippage - size 1 byte
char reserved1; // 1 byte of padding
short reserved2; // 2 bytes of padding
int reserved4; // another 4 bytes of padding. Ensures an 8-byte alignment
double take; // take-profit price
double stop; // protective stop-loss price
};
Such a description of aligned structures is only necessary for passing data to imported dll functions.
Attention: this example demonstrates incorrectly designed data. It would be better to declare the take and stop data of the double type first, and then the slippage member of the uchar type. In that case, the internal representation of the data is always the same, regardless of the value specified in #pragma pack().
If a structure contains variables of the string type and/or objects of dynamic arrays, the compiler assigns an implicit constructor to such a structure, in which all string members are reset and the dynamic array objects are correctly initialized.
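A minimal sketch (with illustrative member names) of a structure that would receive such an implicit constructor:
//--- this structure gets an implicit constructor, because it contains
//--- a string and a dynamic array (the names below are illustrative)
struct Order
{
string comment; // implicitly reset to NULL
double prices[]; // dynamic array, correctly initialized
};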
Simple Structures
Structures that contain no strings, class objects, pointers or objects of dynamic arrays are called simple structures. Variables of simple structures, as well as their arrays, can be passed as parameters to functions imported from a DLL.
Copying simple structures is allowed only in two cases:
The objects belong to the same structure type.
The objects are linearly related, meaning that one structure is a descendant of the other.
To provide an example, let us develop the custom CustomMqlTick structure, whose contents are identical to those of the built-in MqlTick. The compiler does not allow copying the value of a MqlTick object into a CustomMqlTick object. A direct typecast to the required type also causes a compilation error:
//--- copying simple structures of different types is forbidden
my_tick1=last_tick; // the compiler throws an error here
//--- typecasting of different structures to each other is also forbidden
my_tick1=(CustomMqlTick)last_tick;// the compiler throws an error here
Therefore, only one option remains - copying the values of the structure elements one by one. It is still allowed to copy values of the same CustomMqlTick type.
CustomMqlTick my_tick1,my_tick2;
//--- it is allowed to copy objects of the same CustomMqlTick type as follows
my_tick2=my_tick1;
//--- create an array out of the objects of the simple CustomMqlTick structure and write values to it
CustomMqlTick arr[2];
arr[0]=my_tick1;
arr[1]=my_tick2;
The ArrayPrint() function is called to display the values of the arr[] array in the journal.
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- develop a structure similar to the built-in MqlTick
struct CustomMqlTick
{
datetime time; // time of the last price update
double bid; // current Bid price
double ask; // current Ask price
double last; // current price of the last trade (Last)
ulong volume; // volume for the current last price
long time_msc; // time of the last price update in milliseconds
uint flags; // tick flags
};
//--- get the value of the last tick
MqlTick last_tick;
CustomMqlTick my_tick1,my_tick2;
//--- attempt to copy data from MqlTick to CustomMqlTick
if(SymbolInfoTick(Symbol(),last_tick))
{
//--- copying unrelated structures is forbidden
//1. my_tick1=last_tick; // the compiler throws an error here
//--- typecasting unrelated structures to each other is also forbidden
//2. my_tick1=(CustomMqlTick)last_tick;// the compiler throws an error here
//--- therefore, the structure members are copied one by one
my_tick1.time=last_tick.time;
my_tick1.bid=last_tick.bid;
my_tick1.ask=last_tick.ask;
my_tick1.volume=last_tick.volume;
my_tick1.time_msc=last_tick.time_msc;
my_tick1.flags=last_tick.flags;
//--- it is allowed to copy objects of the same CustomMqlTick type as follows
my_tick2=my_tick1;
//--- create an array out of the objects of the simple CustomMqlTick structure and write values to it
CustomMqlTick arr[2];
arr[0]=my_tick1;
arr[1]=my_tick2;
ArrayPrint(arr);
//--- example of displaying the values of the array with the CustomMqlTick type objects
/*
[time] [bid] [ask] [last] [volume] [time_msc] [flags]
[0] 2017.05.29 15:04:37 1.11854 1.11863 +0.00000 1450000 1496070277157 2
[1] 2017.05.29 15:04:37 1.11854 1.11863 +0.00000 1450000 1496070277157 2
*/
}
else
Print("SymbolInfoTick() failed, error = ",GetLastError());
}
The second example demonstrates the features of linear copying of simple structures. Suppose we have the base Tier (animal) structure, from which the Hund (dog) and Katze (cat) structures are derived. We can copy the Tier and Hund objects into one another, as well as Tier and Katze, but we cannot copy Hund and Katze into each other, even though both are descendants of the Tier structure.
//--- structure for describing dogs
struct Hund: Tier
{
bool hunting; // hunting breed
};
//--- structure for describing cats
struct Katze: Tier
{
bool home; // domestic breed
};
//--- create objects of the derived structures
Hund hund;
Katze katze;
//--- copying from an ancestor to a descendant is allowed (Tier ==> Hund)
hund=ein_Tier;
hund.swim=true; // dogs can swim
//--- objects of the descendants cannot be copied into each other (Hund != Katze)
katze=hund; // the compiler throws an error
The complete code of the example:
//--- base structure for describing animals
struct Tier
{
int head; // number of heads
int legs; // number of legs
int wings; // number of wings
bool tail; // has a tail
bool fly; // flying
bool swim; // swimming
bool run; // running
};
//--- structure for describing dogs
struct Hund: Tier
{
bool hunting; // hunting breed
};
//--- structure for describing cats
struct Katze: Tier
{
bool home; // domestic breed
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- create and describe an object of the base Tier type
Tier ein_Tier;
ein_Tier.head=1;
ein_Tier.legs=4;
ein_Tier.wings=0;
ein_Tier.tail=true;
ein_Tier.fly=false;
ein_Tier.swim=false;
ein_Tier.run=true;
//--- create objects of the child types
Hund hund;
Katze katze;
//--- copying from an ancestor to a descendant is allowed (Tier ==> Hund)
hund=ein_Tier;
hund.swim=true; // dogs can swim
//--- objects of the descendants cannot be copied into each other (Hund != Katze)
//katze=hund; // the compiler throws an error here
//--- therefore, the elements can only be copied one by one
katze.head=hund.head;
katze.legs=hund.legs;
katze.wings=hund.wings;
katze.tail=hund.tail;
katze.fly=hund.fly;
katze.swim=false; // cats cannot swim
//--- the values can be copied from a descendant to an ancestor
Tier elephant;
elephant=katze;
elephant.run=false;// elephants cannot run
elephant.swim=true;// elephants can swim
//--- create an array
Tier tiere[4];
tiere[0]=ein_Tier;
tiere[1]=hund;
tiere[2]=katze;
tiere[3]=elephant;
//--- print out
ArrayPrint(tiere);
//--- execution result
/*
[head] [legs] [wings] [tail] [fly] [swim] [run]
[0] 1 4 0 true false false true
[1] 1 4 0 true false true true
[2] 1 4 0 true false false false
[3] 1 4 0 true false true false
*/
}
Another way to copy simple structures is to use unions. The structure objects should be members of the same union - see the example for union, and the sketch below.
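A minimal sketch of this approach, assuming the CustomMqlTick layout from the example above (the union name Converter is hypothetical):
//--- both members share the same memory block
union Converter
{
MqlTick built_in; // built-in tick structure
CustomMqlTick custom; // custom structure of identical layout
};
Converter conv;
SymbolInfoTick(Symbol(),conv.built_in); // fill the built-in member
CustomMqlTick my_tick=conv.custom; // read the same bytes as the custom type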
Access to Structure Members
The name of a structure becomes a new data type, so you can declare variables of this type. A structure can be declared only once within a project. Structure members are accessed using the dot operation (.).
Example:
struct trade_settings
{
double take; // take-profit price
double stop; // stop-loss price
uchar slippage; // value of the allowed slippage
};
//--- a variable of the trade_settings type is created and initialized
trade_settings my_set={0.0,0.0,5};
if (input_TP>0) my_set.take=input_TP;
'pack' for Aligning Structure and Class Fields
The special pack attribute allows aligning structure or class fields.
pack([n])
where n is one of the following values: 1, 2, 4, 8 or 16. It may also be omitted.
Example:
struct pack(sizeof(long)) MyStruct
{
// structure members are to be aligned on an 8-byte boundary
};
or
struct MyStruct pack(sizeof(long))
{
// structure members are to be aligned on an 8-byte boundary
};
'pack(1)' is applied to structures by default. This means that the structure elements are located in memory one after another, and the structure size equals the sum of its elements' sizes.
Example:
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- a simple structure with no alignment
struct Simple_Structure
{
char c; // sizeof(char)=1
short s; // sizeof(short)=2
int i; // sizeof(int)=4
double d; // sizeof(double)=8
};
//--- declare a simple structure instance
Simple_Structure s;
//--- display the size of each structure member
Print("sizeof(s.c)=",sizeof(s.c));
Print("sizeof(s.s)=",sizeof(s.s));
Print("sizeof(s.i)=",sizeof(s.i));
Print("sizeof(s.d)=",sizeof(s.d));
//--- make sure the size of the POD structure is equal to the sum of its members' sizes
Print("sizeof(Simple_Structure)=",sizeof(Simple_Structure));
/*
Result:
sizeof(s.c)=1
sizeof(s.s)=2
sizeof(s.i)=4
sizeof(s.d)=8
sizeof(Simple_Structure)=15
*/
}
Aligning the structure fields may be necessary when exchanging data with third-party libraries (*.DLL) where such alignment is applied.
Let us use a few examples to show how the alignment works. We will use a structure consisting of four members with no alignment.
//--- a simple structure with no alignment
struct Simple_Structure pack() // no size is specified, so alignment on a 1-byte boundary is set
{
char c; // sizeof(char)=1
short s; // sizeof(short)=2
int i; // sizeof(int)=4
double d; // sizeof(double)=8
};
//--- declare a simple structure instance
Simple_Structure s;
Structure fields are arranged in memory one after another, according to the declaration order and type size. The structure size is 15, while the offset to the structure fields in arrays is undefined.
[Figure: simple_structure_alignment]
Now let us declare the same structure with the 4-byte alignment and execute the code.
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- a simple structure with 4-byte alignment
struct Simple_Structure pack(4)
{
char c; // sizeof(char)=1
short s; // sizeof(short)=2
int i; // sizeof(int)=4
double d; // sizeof(double)=8
};
//--- declare a simple structure instance
Simple_Structure s;
//--- display the size of each structure member
Print("sizeof(s.c)=",sizeof(s.c));
Print("sizeof(s.s)=",sizeof(s.s));
Print("sizeof(s.i)=",sizeof(s.i));
Print("sizeof(s.d)=",sizeof(s.d));
//--- make sure the size of the POD structure is now not equal to the sum of its members' sizes
Print("sizeof(Simple_Structure)=",sizeof(Simple_Structure));
/*
Result:
sizeof(s.c)=1
sizeof(s.s)=2
sizeof(s.i)=4
sizeof(s.d)=8
sizeof(Simple_Structure)=16 // the structure size has changed
*/
}
The structure size has changed: all members of 4 bytes or more now have an offset from the beginning of the structure that is a multiple of 4 bytes. Smaller members are aligned on their own size boundary (e.g. 2 for 'short'). This is how it looks (the added byte is displayed in gray).
[Figure: simple_structure_alignment_pack]
In this case, 1 byte is added after the s.c member, so that the s.s field (sizeof(short)==2) is on a 2-byte boundary (the alignment for the 'short' type).
The offset to the start of a structure in an array is also aligned on the 4-byte boundary, i.e. for Simple_Structure arr[] the addresses of the elements a[0], a[1] and a[n] are multiples of 4 bytes.
Let us consider two more structures consisting of the same types with 4-byte alignment, but with a different order of members. In the first structure, the members are arranged in ascending order of type size.
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- a simple structure aligned on a 4-byte boundary
struct CharShortInt pack(4)
{
char c; // sizeof(char)=1
short s; // sizeof(short)=2
int i; // sizeof(int)=4
};
//--- declare a simple structure instance
CharShortInt ch_sh_in;
//--- display the size of each structure member
Print("sizeof(ch_sh_in.c)=",sizeof(ch_sh_in.c));
Print("sizeof(ch_sh_in.s)=",sizeof(ch_sh_in.s));
Print("sizeof(ch_sh_in.i)=",sizeof(ch_sh_in.i));
//--- check how the size of the POD structure relates to the sum of its members' sizes
Print("sizeof(CharShortInt)=",sizeof(CharShortInt));
/*
Result:
sizeof(ch_sh_in.c)=1
sizeof(ch_sh_in.s)=2
sizeof(ch_sh_in.i)=4
sizeof(CharShortInt)=8
*/
}
As we can see, the structure size is 8 and it consists of two 4-byte blocks. The first block contains the fields of the 'char' and 'short' types, while the second one contains the field of the 'int' type.
[Figure: charshortint]
Now let us turn the first structure into the second one, which differs only in the order of fields, by moving the member of the 'short' type to the end.
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- a simple structure aligned on a 4-byte boundary
struct CharIntShort pack(4)
{
char c; // sizeof(char)=1
int i; // sizeof(int)=4
short s; // sizeof(short)=2
};
//--- declare a simple structure instance
CharIntShort ch_in_sh;
//--- display the size of each structure member
Print("sizeof(ch_in_sh.c)=",sizeof(ch_in_sh.c));
Print("sizeof(ch_in_sh.i)=",sizeof(ch_in_sh.i));
Print("sizeof(ch_in_sh.s)=",sizeof(ch_in_sh.s));
//--- check how the size of the POD structure relates to the sum of its members' sizes
Print("sizeof(CharIntShort)=",sizeof(CharIntShort));
/*
Result:
sizeof(ch_in_sh.c)=1
sizeof(ch_in_sh.i)=4
sizeof(ch_in_sh.s)=2
sizeof(CharIntShort)=12
*/
}
Although the contents of the structure have not changed, merely changing the order of the members has increased its size.
[Figure: charintshort]
Alignment should also be taken into account in inheritance. Let us demonstrate this using the simple Parent structure with a single member of the 'char' type. The structure size with no alignment is 1.
struct Parent
{
char c; // sizeof(char)=1
};
Let us create the derived Children structure with a member of the 'short' type (sizeof(short)=2).
struct Children pack(2) : Parent
{
short s; // sizeof(short)=2
};
As a result, when the alignment of 2 bytes is set, the structure size equals 4, although the size of its members is 3. In this example, 2 bytes are to be allocated to the parent class, so that access to the 'short' field of the derived class is aligned on 2 bytes.
Knowing how memory is allocated for structure members is necessary when an MQL5 application works with data read from/written to files or streams.
The MQL5\Include\WinAPI directory of the Standard Library contains functions for working with the WinAPI functions. These functions apply structures with a specified alignment for the cases when it is required for working with WinAPI.
offsetof is a special command directly related to the pack attribute. It allows us to get the offset of a member from the beginning of the structure.
//--- declare a variable of the Children type
Children child;
//--- find out the offsets from the structure start
Print("offsetof(child.c)=",offsetof(child.c));
Print("offsetof(child.s)=",offsetof(child.s));
/*
Result:
offsetof(child.c)=0
offsetof(child.s)=2
*/
The 'final' Modifier
The presence of the 'final' modifier when declaring a structure prohibits further inheritance from it. If the structure requires no further changes, or if changes are unacceptable for security reasons, declare it with the 'final' modifier. In this case, all members of the structure are implicitly considered final as well.
struct settings final
{
//--- structure body
};
struct trade_settings : public settings
{
//--- structure body
};
When attempting to inherit from a structure with the 'final' modifier, as shown in the example above, the compiler throws an error:
cannot inherit from 'settings' as it has been declared as 'final'
see declaration of 'settings'
Classes
Classes differ from structures in the following ways:
• the class keyword is used in the declaration;
• by default, all class members have the private access specifier, unless otherwise specified; data members of a structure have the public access type by default, unless otherwise specified;
• class objects always have a table of virtual functions, even if no virtual functions are declared in the class; structures cannot have virtual functions;
• the new operator can be applied to class objects, but not to structures;
Classes and structures can have an explicit constructor and destructor. If an explicit constructor is defined, initialization of a structure or class variable with an initialization sequence is impossible.
Example:
struct trade_settings
{
double take; // take-profit price
double stop; // stop-loss price
uchar slippage; // value of the allowed slippage
//--- constructor
trade_settings() { take=0.0; stop=0.0; slippage=5; }
//--- destructor
~trade_settings() { Print("This is the end"); }
};
//--- the compiler generates an error message that initialization is impossible
trade_settings my_set={0.0,0.0,5};
Constructors and Destructors
A constructor is a special function that is called automatically when a structure or class object is created; it is usually used to initialize class members. Below we will talk only about classes, while the same applies to structures unless otherwise indicated. The name of a constructor must match the class name. The constructor has no return type (you can specify the void type).
Defined class members - strings, dynamic arrays and objects that require initialization - will be initialized in any case, regardless of whether there is a constructor.
Each class can have several constructors that differ in the number of parameters and the initialization list. A constructor that requires parameters to be specified is called a parametric constructor.
A constructor with no parameters is called a default constructor. If no constructors are declared in a class, the compiler creates a default constructor during compilation.
//+------------------------------------------------------------------+
//| Class for working with a date                                    |
//+------------------------------------------------------------------+
class MyDateClass
{
private:
int m_year; // year
int m_month; // month
int m_day; // day of the month
int m_hour; // hour of the day
int m_minute; // minutes
int m_second; // seconds
public:
//--- default constructor
MyDateClass(void);
//--- parametric constructor
MyDateClass(int h,int m,int s);
};
You can declare a constructor in the class description and then define its body. For example, the two constructors of the MyDateClass class can be defined as follows:
//+------------------------------------------------------------------+
//| Default constructor                                              |
//+------------------------------------------------------------------+
MyDateClass::MyDateClass(void)
{
//---
MqlDateTime mdt;
datetime t=TimeCurrent(mdt);
m_year=mdt.year;
m_month=mdt.mon;
m_day=mdt.day;
m_hour=mdt.hour;
m_minute=mdt.min;
m_second=mdt.sec;
Print(__FUNCTION__);
}
//+------------------------------------------------------------------+
//| Parametrischer Konstruktor |
//+------------------------------------------------------------------+
MyDateClass::MyDateClass(int h,int m,int s)
{
MqlDateTime mdt;
datetime t=TimeCurrent(mdt);
m_year=mdt.year;
m_month=mdt.mon;
m_day=mdt.day;
m_hour=h;
m_minute=m;
m_second=s;
Print(__FUNCTION__);
}
In the default constructor, all members of the class are filled using the TimeCurrent() function; in the parametric constructor, only the hour, minute and second values are passed. The other class members (m_year, m_month and m_day) are automatically initialized with the current date.
The default constructor has a special purpose when initializing an array of objects of its class. A constructor whose parameters all have default values is not a default constructor. Let us show this with an example:
//+------------------------------------------------------------------+
//| Class with a default constructor                                 |
//+------------------------------------------------------------------+
class CFoo
{
datetime m_call_time; // time of the last call of the object
public:
//--- a constructor with a parameter that has a default value is not a default constructor
CFoo(const datetime t=0){m_call_time=t;};
//--- copy constructor
CFoo(const CFoo &foo){m_call_time=foo.m_call_time;};
string ToString(){return(TimeToString(m_call_time,TIME_DATE|TIME_SECONDS));};
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
// CFoo foo; // this variant cannot be used - a default constructor is not defined
//--- valid variants of creating the CFoo object
CFoo foo1(TimeCurrent()); // an explicit call of the parametric constructor
CFoo foo2(); // an explicit call of the parametric constructor with a default parameter
CFoo foo3=D'2009.09.09'; // an implicit call of the parametric constructor
CFoo foo40(foo1); // an explicit call of the copy constructor
CFoo foo41=foo1; // an implicit call of the copy constructor
CFoo foo5; // an explicit call of the default constructor (if there is no default constructor,
// a parametric constructor with a default parameter is called)
//--- valid variants of getting CFoo pointers
CFoo *pfoo6=new CFoo(); // dynamic creation of an object and getting a pointer to it
CFoo *pfoo7=new CFoo(TimeCurrent());// another variant of dynamic object creation
CFoo *pfoo8=GetPointer(foo1); // pfoo8 now points to the foo1 object
CFoo *pfoo9=pfoo7; // pfoo9 and pfoo7 point to the same object
// CFoo foo_array[3]; // this variant cannot be used - a default constructor is not defined
//--- display the m_call_time values
Print("foo1.m_call_time=",foo1.ToString());
Print("foo2.m_call_time=",foo2.ToString());
Print("foo3.m_call_time=",foo3.ToString());
Print("foo40.m_call_time=",foo40.ToString());
Print("foo41.m_call_time=",foo41.ToString());
Print("foo5.m_call_time=",foo5.ToString());
Print("pfoo6.m_call_time=",pfoo6.ToString());
Print("pfoo7.m_call_time=",pfoo7.ToString());
Print("pfoo8.m_call_time=",pfoo8.ToString());
Print("pfoo9.m_call_time=",pfoo9.ToString());
//--- delete the dynamically created objects
delete pfoo6;
delete pfoo7;
//delete pfoo8; // no need to delete pfoo8 explicitly, since it points to the automatically created foo1 object
//delete pfoo9; // no need to delete pfoo9 explicitly, since it points to the same object as pfoo7
}
If you uncomment this line in this example
//CFoo foo_array[3]; // this variant cannot be used - a default constructor is not specified
or
//CFoo foo_dyn_array[]; // this variant cannot be used - a default constructor is not specified
then the compiler will return the error "default constructor is not defined" for it.
If a class has a user-defined constructor, the compiler does not generate a default constructor. This means that if a parametric constructor has been declared in a class but no default constructor, you cannot declare arrays of objects of this class. The compiler throws an error for this script:
//+------------------------------------------------------------------+
//| A class with no default constructor                              |
//+------------------------------------------------------------------+
class CFoo
{
string m_name;
public:
CFoo(string name) { m_name=name;}
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- the "default constructor is not defined" error is returned at compilation
CFoo badFoo[5];
}
In this example, the CFoo class has a declared parametric constructor - in such cases the compiler does not create a default constructor during compilation. At the same time, when declaring an array of objects, it is assumed that all objects are created and initialized automatically. During automatic initialization of an object, a default constructor must be called; but since the default constructor is neither explicitly declared nor generated automatically by the compiler, such an object cannot be created. For this reason, the compiler generates an error at compilation.
There is a special syntax for initializing objects via a constructor. Constructor initializers (special constructions for initialization) for the members of a structure or class can be specified in the initialization list.
An initialization list is a comma-separated list of initializers that comes after the colon following the list of a constructor's parameters and precedes the body (it comes before an opening brace). There are several requirements:
• Initialization lists can be used only in constructors;
• Members of parents cannot be initialized in the initialization list;
• The initialization list must be followed by the definition (implementation) of the function.
Here is an example of several constructors for initializing class members.
//+------------------------------------------------------------------+
//| Class for storing a person's name                                |
//+------------------------------------------------------------------+
class CPerson
{
string m_first_name; // first name
string m_second_name; // surname
public:
//--- an empty default constructor
CPerson() {Print(__FUNCTION__);};
//--- parametric constructor
CPerson(string full_name);
//--- a constructor with an initialization list
CPerson(string surname,string name): m_second_name(surname), m_first_name(name) {};
void PrintName(){PrintFormat("Name=%s Surname=%s",m_first_name,m_second_name);};
};
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
CPerson::CPerson(string full_name)
{
int pos=StringFind(full_name," ");
if(pos>=0)
{
m_first_name=StringSubstr(full_name,0,pos);
m_second_name=StringSubstr(full_name,pos+1);
}
}
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- the explicit default constructor makes it possible to declare an array of objects
CPerson people[5];
CPerson Tom="Tom Sawyer"; // Tom Sawyer
CPerson Huck("Huckleberry","Finn"); // Huckleberry Finn
CPerson *Pooh = new CPerson("Winnie","Pooh"); // Winnie the Pooh
//--- output the values
Tom.PrintName();
Huck.PrintName();
Pooh.PrintName();
//--- delete the dynamically created object
delete Pooh;
}
In this case, the CPerson class has three constructors:
1. An explicit default constructor, which makes it possible to create an array of objects of this class;
2. A constructor with one parameter, which receives a full name as a parameter and divides it into the first name and the surname according to the space found;
3. A constructor with two parameters, which contains an initialization list. The initializers are m_second_name(surname) and m_first_name(name).
Note how the initialization with a list has replaced the assignment. Individual members are initialized as follows:
class_member (list of expressions)
In the initialization list, the members can appear in any order, but all class members are initialized according to the order of their declaration. This means that in the third constructor the m_first_name member is initialized first, since it is declared first, and only then m_second_name. This should be taken into account in cases where the initialization of some class members depends on the values of other class members.
If a default constructor is not declared in the base class, while one or more constructors with parameters are declared, you should always call one of the base class constructors in the initialization list. It is listed through a comma like an ordinary member of the list and is called first during object initialization, regardless of its position in the initialization list.
//+------------------------------------------------------------------+
//| Base class                                                       |
//+------------------------------------------------------------------+
class CFoo
{
string m_name;
public:
//--- a constructor with an initialization list
CFoo(string name) : m_name(name) { Print(m_name);}
};
//+------------------------------------------------------------------+
//| Class derived from CFoo                                          |
//+------------------------------------------------------------------+
class CBar : CFoo
{
CFoo m_member; // a class member is an object of the parent class
public:
//--- the default constructor calls the parent's constructor in the initialization list
CBar(): m_member(_Symbol), CFoo("CBAR") {Print(__FUNCTION__);}
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
CBar bar;
}
In this example, when the bar object is created, the default constructor CBar() is called; in it, first the constructor of the CFoo parent is called, and then the constructor for the m_member class member.
A destructor is a special function that is called automatically when a class object is destroyed. The name of the destructor is written as the class name with a tilde (~). Strings, dynamic arrays and objects requiring deinitialization are deinitialized in any case, regardless of the presence of a destructor. If there is a destructor, these actions are performed after the destructor has been called.
Destructors are always virtual, regardless of whether they are declared with the virtual keyword or not.
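A minimal sketch (the class name CLogger is illustrative) showing that the destructor runs automatically when the object leaves its scope:
class CLogger
{
public:
CLogger() { Print("constructed"); } // called when the object is created
~CLogger() { Print("destroyed"); } // called automatically on destruction; virtual even without the keyword
};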
Defining Class Methods
Function-methods of a class can be defined both inside and outside of the class declaration. If a method is defined inside the class, its body follows right after the method declaration.
Example:
class CTetrisShape
{
protected:
int m_type;
int m_xpos;
int m_ypos;
int m_xsize;
int m_ysize;
int m_prev_turn;
int m_turn;
int m_right_border;
public:
void CTetrisShape();
void SetRightBorder(int border) { m_right_border=border; }
void SetYPos(int ypos) { m_ypos=ypos; }
void SetXPos(int xpos) { m_xpos=xpos; }
int GetYPos() { return(m_ypos); }
int GetXPos() { return(m_xpos); }
int GetYSize() { return(m_ysize); }
int GetXSize() { return(m_xsize); }
int GetType() { return(m_type); }
void Left() { m_xpos-=SHAPE_SIZE; }
void Right() { m_xpos+=SHAPE_SIZE; }
void Rotate() { m_prev_turn=m_turn; if(++m_turn>3) m_turn=0; }
virtual void Draw() { return; }
virtual bool CheckDown(int& pad_array[]);
virtual bool CheckLeft(int& side_row[]);
virtual bool CheckRight(int& side_row[]);
};
The functions from SetRightBorder(int border) to Draw() are declared and defined inside the CTetrisShape class.
The constructor CTetrisShape() and the methods CheckDown(int& pad_array[]), CheckLeft(int& side_row[]) and CheckRight(int& side_row[]) are only declared inside the class, but not yet defined. Definitions of these functions must come further in the code. In order to define a method outside of the class, the scope resolution operation is used, with the class name serving as the scope.
Example:
//+------------------------------------------------------------------+
//| Constructor of the base class                                    |
//+------------------------------------------------------------------+
void CTetrisShape::CTetrisShape()
{
m_type=0;
m_ypos=0;
m_xpos=0;
m_xsize=SHAPE_SIZE;
m_ysize=SHAPE_SIZE;
m_prev_turn=0;
m_turn=0;
m_right_border=0;
}
//+-----------------------------------------------------------------------+
//| Check the ability to move down (for the stick and the cube)          |
//+-----------------------------------------------------------------------+
bool CTetrisShape::CheckDown(int& pad_array[])
{
int i,xsize=m_xsize/SHAPE_SIZE;
//---
for(i=0; i<xsize; i++)
{
if(m_ypos+m_ysize>=pad_array[i]) return(false);
}
//---
return(true);
}
Access Modifiers public, protected and private
When developing a new class, it is recommended to restrict outside access to its members. The keywords private or protected are used for this purpose. In that case, the hidden data can be accessed only from function methods of the same class. If the keyword protected is used, the hidden data can also be accessed from methods of classes derived from this class. Access to function methods of a class can be restricted in the same way.
If it is necessary to fully open access to members and/or methods of a class, the keyword public is used.
Example:
class CTetrisField
{
private:
int m_score; // score
int m_ypos; // current vertical position of the shape
int m_field[FIELD_HEIGHT][FIELD_WIDTH]; // matrix of the playing field
int m_rows[FIELD_HEIGHT]; // numbering of the field rows
int m_last_row; // last free row
CTetrisShape *m_shape; // tetris shape
bool m_bover; // game over
public:
void CTetrisField() { m_shape=NULL; m_bover=false; }
void Init();
void Deinit();
void Down();
void Left();
void Right();
void Rotate();
void Drop();
private:
void NewShape();
void CheckAndDeleteRows();
void LabelOver();
};
All members and methods of a class defined after the public: specifier (and up to the next specifier) are available in any reference to an object of this class. In this example these are the following members: the functions CTetrisField(), Init(), Deinit(), Down(), Left(), Right(), Rotate() and Drop().
All class members declared after the private: specifier (and up to the next specifier) are available only to the function members of this class. Access specifiers always end with a colon (:) and may appear several times in a class definition.
Access to members of a base class can be redefined during inheritance in derived classes.
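For example (an illustrative sketch, not from the original reference), non-public inheritance narrows the access of inherited members:
class CBase
  {
public:
   void Hello() { Print("hello"); }
  };
//--- with private inheritance the public members of CBase become private in CDerived
class CDerived : private CBase
  {
public:
   void CallHello() { Hello(); } // still accessible inside the derived class
  };
void OnStart()
  {
   CDerived d;
   d.CallHello(); // OK
   //d.Hello();   // compilation error: Hello() is private here
  }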
The 'final' Modifier
The presence of the 'final' modifier in a class declaration prohibits further inheritance from that class. If there is no need to modify the class any further, or if modifications are inadmissible for security reasons, declare it with the 'final' modifier. In that case all members of the class are implicitly considered final as well.
class CFoo final
{
//--- class body
};
class CBar : public CFoo
{
//--- class body
};
If you attempt to inherit from a class with the 'final' modifier, as in the example above, the compiler throws an error:
cannot inherit from 'CFoo' as it has been declared as 'final'
see declaration of 'CFoo'
Unions (union)
A union is a special data type consisting of several variables that share the same memory area. A union therefore makes it possible to interpret the same sequence of bytes in two (or more) different ways. The declaration of a union is similar to the declaration of a structure and starts with the keyword union.
union LongDouble
{
long long_value;
double double_value;
};
Unlike a structure, the different members of a union belong to the same memory area. In this example, the union LongDouble is declared, in which a value of type long and a value of type double share the same memory area. It is important to understand that the union cannot simultaneously store a long integer value and a real double value (as a structure could), because the variables long_value and double_value overlap in memory. An MQL5 program can, however, at any moment treat the information contained in the union as an integer (long) or a real value (double). So a union allows the same sequence of data to be interpreted in two (or more) different ways.
When a union is declared, the compiler reserves a memory area large enough to store the largest (by volume) type among the union's member variables. To access a union element, the same syntax is used as for structures – the dot operator.
union LongDouble
{
long long_value;
double double_value;
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//---
LongDouble lb;
//--- obtain the invalid number -nan(ind) and print it
lb.double_value=MathArcsin(2.0);
printf("1. double=%f integer=%I64X",lb.double_value,lb.long_value);
//--- the largest normalized number (DBL_MAX)
lb.long_value=0x7FEFFFFFFFFFFFFF;
printf("2. double=%.16e integer=%I64X",lb.double_value,lb.long_value);
//--- the smallest positive normalized number (DBL_MIN)
lb.long_value=0x0010000000000000;
printf("3. double=%.16e integer=%.16I64X",lb.double_value,lb.long_value);
}
/* Execution result
1. double=-nan(ind) integer=FFF8000000000000
2. double=1.7976931348623157e+308 integer=7FEFFFFFFFFFFFFF
3. double=2.2250738585072014e-308 integer=0010000000000000
*/
Since unions allow the program to interpret the same memory data in different ways, they are often used when an unusual type conversion is needed.
Inheritance cannot be applied to unions, and by definition they cannot have static members. In all other respects a union behaves like a structure whose members all have a zero offset. The following types cannot be members of a union:
• dynamic arrays
• strings
• pointers to objects and functions
• class objects
• objects of structures that have constructors or destructors
• objects of structures that contain members from the items above
Like classes, a union can have constructors, destructors and methods. By default, the members of a union have public access; to create private elements, use the keyword private. All of these possibilities are shown in the example below, which demonstrates how to convert a color of type color into ARGB, just as the ColorToARGB() function does.
//+------------------------------------------------------------------+
//| Union for converting color(BGR) into ARGB                        |
//+------------------------------------------------------------------+
union ARGB
{
uchar argb[4];
color clr;
//--- constructors
ARGB(color col,uchar a=0){Color(col,a);};
~ARGB(){};
//--- public methods
public:
uchar Alpha(){return(argb[3]);};
void Alpha(const uchar alpha){argb[3]=alpha;};
color Color(){ return(color(clr));};
//--- private methods
private:
//+------------------------------------------------------------------+
//| Set the color and the value of the alpha channel                 |
//+------------------------------------------------------------------+
void Color(color col,uchar alpha)
{
//--- assign the color to the clr member
clr=col;
//--- set the value of the alpha component - the transparency level
argb[3]=alpha;
//--- swap the R and B (Red and Blue) bytes
uchar t=argb[0];argb[0]=argb[2];argb[2]=t;
};
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- 0x55 means 55/255 = 21.6 % (0% - fully transparent)
uchar alpha=0x55;
//--- the color type is represented as 0x00BBGGRR
color test_color=clrDarkOrange;
//--- here we get the byte values from the ARGB union
uchar argb[];
PrintFormat("0x%.8X - this is how the color type looks for %s, BGR=(%s)",
test_color,ColorToString(test_color,true),ColorToString(test_color));
//--- the ARGB type is represented as 0x00RRGGBB; the RR and BB components have swapped places
ARGB argb_color(test_color);
//--- copy the byte array
ArrayCopy(argb,argb_color.argb);
//--- this is how it looks in the ARGB representation
PrintFormat("0x%.8X - ARGB representation with alpha channel=0x%.2x, ARGB=(%d,%d,%d,%d)",
argb_color.clr,argb_color.Alpha(),argb[3],argb[2],argb[1],argb[0]);
//--- add the transparency value
argb_color.Alpha(alpha);
//--- try to interpret ARGB as a "color" type
Print("ARGB as color=(",argb_color.clr,") alpha channel=",argb_color.Alpha());
//--- copy the byte array
ArrayCopy(argb,argb_color.argb);
//--- this is how it looks in the ARGB representation
PrintFormat("0x%.8X - ARGB representation with alpha channel=0x%.2x, ARGB=(%d,%d,%d,%d)",
argb_color.clr,argb_color.Alpha(),argb[3],argb[2],argb[1],argb[0]);
//--- compare with the result of the ColorToARGB() function
PrintFormat("0x%.8X - result of ColorToARGB(%s,0x%.2x)",ColorToARGB(test_color,alpha),
ColorToString(test_color,true),alpha);
}
/* Execution result
0x00008CFF - this is how the color type looks for clrDarkOrange, BGR=(255,140,0)
0x00FF8C00 - ARGB representation with alpha channel=0x00, ARGB=(0,255,140,0)
ARGB as color=(0,140,255) alpha channel=85
0x55FF8C00 - ARGB representation with alpha channel=0x55, ARGB=(85,255,140,0)
0x55FF8C00 - result of ColorToARGB(clrDarkOrange,0x55)
*/
Interfaces
An interface specifies certain functionality, which a class can then implement. In effect, it is a class that cannot contain any members and has no constructor and/or destructor. All methods declared in an interface are purely virtual, even without an explicit definition.
An interface is defined using the keyword interface. For example:
//--- base interface for describing animals
interface ITier
{
//--- the methods of an interface have public access
void Sound(); // the sound the animal makes
};
//+------------------------------------------------------------------+
//| The class CKatze is derived from the interface ITier             |
//+------------------------------------------------------------------+
class CKatze : public ITier
{
public:
CKatze() { Print("Katze was born"); }
~CKatze() { Print("Katze is dead"); }
//--- implement the Sound method of the ITier interface
void Sound(){ Print("meou"); }
};
//+------------------------------------------------------------------+
//| The class CHund is derived from the interface ITier              |
//+------------------------------------------------------------------+
class CHund : public ITier
{
public:
CHund() { Print("Hund was born"); }
~CHund() { Print("Hund is dead"); }
//--- implement the Sound method of the ITier interface
void Sound(){ Print("guaf"); }
};
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
{
//--- an array of pointers to objects of type ITier
ITier *animals[2];
//--- create objects of classes derived from ITier and store pointers to them in the array
animals[0]=new CKatze;
animals[1]=new CHund;
//--- call the Sound() method of the base interface ITier for each child class
for(int i=0;i<ArraySize(animals);++i)
animals[i].Sound();
//--- delete the objects
for(int i=0;i<ArraySize(animals);++i)
delete animals[i];
//--- Result
/*
Katze was born
Hund was born
meou
guaf
Katze is dead
Hund is dead
*/
}
As with abstract classes, an object of an interface cannot be created without inheritance. An interface can only be derived from other interfaces, and it can be the parent of a class. An interface always has public visibility.
An interface cannot be declared inside a class or structure declaration, but a pointer to an interface can be stored in a variable of type void *. In general, a pointer to an object of any class can be stored in a variable of type void *. To convert a void * pointer into a pointer to an object of a specific class, use the dynamic_cast operator. If the conversion is impossible, the result of the dynamic_cast operation is NULL.
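A short sketch of that round trip, reusing the CKatze class above (added here for illustration, not part of the original reference):
void OnStart()
  {
   CKatze *cat=new CKatze;                  // prints "Katze was born"
   void   *p=cat;                           // a void * can hold any class pointer
   ITier  *animal=dynamic_cast<ITier *>(p); // cast the void * back to the interface
   if(animal!=NULL)
      animal.Sound();                       // prints "meou"
   delete cat;                              // prints "Katze is dead"
  }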
See also
Object-Oriented Programming
Every open set contains both an open and a closed cell.
1. Sep 30, 2007 #1
1. The problem statement, all variables and given/known data
If an open cell is defined as (a1,b1) X (a2,b2) X ... X (an,bn) in R^n and a closed cell is defined as [a1,b1] X [a2,b2] X ... X [an,bn], then every open set in R^n contains an open n-cell and a closed n-cell.
2. Relevant equations
Def: An open set is a set whose points are all interior points
3. The attempt at a solution
I am a bit unsure about my line of reasoning. Can someone please take a look?
Let y belong to an n-cell, where y = (y1,...,yn) such that ai<yi<bi for i = 1,2,...,n. I can prove that every k-cell is compact, and that if p,q are two points that belong to the n-cell, then d(p,q) < delta, where delta = sqrt( sum (bi-ai)^2 ). I will omit the proof here, but I am going to use this result to solve the problem.
Now, let us consider the point x0 = ( (ai+bi)/2 ), i = 1,2,...,n. Then the distance from x0 to the point s = (a1,a2,a3,...,an) equals the distance from x0 to the point r = (b1,b2,b3,...,bn), and both equal delta/2. x0 belongs to the cell; this is easy to see because each coordinate of x0 lies between ai and bi for i = 1,2,...,n.
Hence, I can construct a neighborhood N with x0 as center and radius delta/2 + h/2, where h>0, so that the diameter of this neighborhood is delta + h. It will contain the points x such that d(x0,x) < (delta+h)/2. This N will contain both s and r as interior points. Why? Because any neighborhood around s or r with radius h/2 will lie inside the ball N. Hence, s and r are interior points of N.
As a result, any points p,q that belong to the cell are also interior points of N, because the maximum distance between such points is delta, and we just saw that the corner points r and s are interior points. Hence, if p,q belong to the cell, they belong to N. Note that the greatest distance between two points of a ball is attained along a diameter (between the endpoints of a diameter); here that is delta + h, so N contains any points whose mutual distance is only delta.
Another way to see that a point p of the cell is interior to the ball: take p in the cell with d(p,x0) <= delta/2 (we are justified in assuming this because the distance between any two points of the cell is less than delta). Take a neighborhood of radius h/2 around p. Then any point m of this neighborhood lies inside the ball with radius delta/2 + h/2 and center x0 (using the triangle inequality). Hence p is interior to the ball; in other words, all points that belong to the cell are interior to the ball N. And N is open, because all neighborhoods are open sets. Hence the open set contains the open n-cell and also the closed n-cell.
I know it does not sound very precise. But any suggestions to make it better? Thank you.
Last edited: Sep 30, 2007
3. Sep 30, 2007 #2
Dick
Science Advisor
Homework Helper
Why not just show that, e.g., for a1 and b1 there exist c1 and d1 such that a1<c1<d1<b1? Do this for all i and consider the open cell (c1,d1)x(c2,d2)x...x(cn,dn) and the closed cell [c1,d1]x[c2,d2]x...x[cn,dn].
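In symbols, one concrete way to carry out this construction (a sketch added here, not part of the original posts): given an open set $U \subseteq \mathbb{R}^n$ and $x \in U$, pick $r>0$ with $B(x,r) \subseteq U$ and set
$$a_i = x_i - \frac{r}{2\sqrt{n}}, \qquad b_i = x_i + \frac{r}{2\sqrt{n}}, \qquad i=1,\dots,n.$$
Every point $y$ of the closed cell $[a_1,b_1]\times\cdots\times[a_n,b_n]$ then satisfies $d(x,y)\le\sqrt{\textstyle\sum_i \big(r/(2\sqrt{n})\big)^2}=r/2<r$, so the closed cell, and the open cell inside it, both lie in $U$.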
4. Sep 30, 2007 #3
That makes sense too. I suppose I was trying to prove the same thing in a different way, but putting it so neatly sounds a lot better.
Last edited: Oct 1, 2007
5. Sep 30, 2007 #4
Dick
Science Advisor
Homework Helper
Yeah, if you're talking cells, talk cell language. If you're talking balls, talk ball language. They are both the same in the end, but shifting back and forth just makes things needlessly complicated.
NAME
Data::Object::AutoWrap - Autogenerate accessors for R/O object data
VERSION
This document describes Data::Object::AutoWrap version 0.02
SYNOPSIS
package MyData;
# Our data is in $self->{data}
use Data::Object::AutoWrap qw( data );
sub new {
my ( $class, $data ) = @_;
bless { data => $data }, $class;
}
# ... and then later, elsewhere ...
my $d = MyData->new( { foo => 1, bar => [ 1, 2, 3 ] } );
print $d->foo; # prints "1"
print $d->bar( 2 ); # prints "3"
DESCRIPTION
This is an experimental module designed to simplify the implementation of read-only objects with value semantics.
Objects created using Data::Object::AutoWrap are bound to a Perl data structure. They automatically provide read-only accessor methods for the elements of that structure.
Declaring an autowrapped class
As in the example above an autowrapped class is created by adding the line
use Data::Object::AutoWrap qw( fieldname );
We assume (for now) that the class is hash based and that this hash contains a key called fieldname. The corresponding value is the data structure that will be exposed as the module's interface. The 'root' level of this data structure must itself be a hash - we need the key names so we can generate corresponding methods. Below the root of the data structure any type may be used.
If the fieldname is omitted the entire contents of the object's hash will be exposed.
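For instance, a class that exposes its entire hash might look like this (a sketch based on that statement; MyFlatData is a hypothetical name, not part of the distribution):
package MyFlatData;
# No field name given: expose the whole object hash
use Data::Object::AutoWrap;
sub new {
    my ( $class, %args ) = @_;
    bless {%args}, $class;
}
# ... and then later, elsewhere ...
my $d = MyFlatData->new( foo => 1 );
print $d->foo; # prints "1"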
Accessors
For each key in the value hash a corresponding read-only accessor is made available. In order for these accessors to be callable the key names must also be valid Perl method names - it's OK to have a key called '*(&!*(&£' but it's rather tricky to call the corresponding accessor.
The generated accessors are AUTOLOADed. As a result the bound data structure may be a different shape for each instance of the containing class: the accessors are virtual - they don't actually exist in the module's symbol table.
In the following examples we'll assume that we have a Data::Object::AutoWrap based class called MyData that gets the data structure to bind to as the argument to its constructor. The code fragment in the synopsis is a suitable implementation of such a class.
Scalar Accessors
Any scalars in the hash get an accessor that takes no arguments and returns the corresponding value:
my $sc = MyData->new({ flimp_count => 1 });
my $fc = $sc->flimp_count; # gets 1
An error is raised if arguments are passed to the accessor.
Hash Accessors
Any nested hashes in the data structure get accessors that return recursively wrapped hashes. That means that this will work:
my $hc = MyData->new(
{
person => {
name => 'Andy',
job => 'Perl baiter',
},
}
);
print $hc->person->job; # prints "Perl baiter"
Array accessors
The accessor for array values accepts an optional subscript:
my $ac = MyData->new( { list => [ 12, 27, 36, 43, ] } );
my $third = $ac->list( 2 ); # gets 36 (the third element, zero-based index 2)
Called in a list context with no arguments the accessor for an array returns that array:
my @list = $ac->list; # gets the whole list
Accessors for other types
Anything that's not an array or a hash gets the scalar accessor - so things like globs will also be accessible.
Accessor parameters
Array and hash accessors can accept more than one parameter. For example if you have an array of arrays you can subscript into it like this:
my $gc = MyData->new(
{
grid => [
[ 0, 1, 2, 3 ],
[ 4, 5, 6, 7 ],
[ 8, 9, 10, 11 ],
[ 12, 13, 14, 15 ],
],
}
);
my $dot = $gc->grid( 2, 3 ); # gets 11 (row 2, column 3, zero-based)
In general any parameters specify a path through the data structure:
my $hc = MyData->new(
{
deep => {
smash => 'pumpkins',
eviscerate => [ 'a', 'b', 'c' ],
lament => { fine => 'camels' }
}
}
);
print $hc->deep( 'smash' ); # 'pumpkins'
print $hc->deep( 'eviscerate', 1 ); # 'b'
print $hc->deep( 'lament', 'fine' ); # 'camels'
print $hc->deep->lament->fine; # also 'camels'
print $hc->deep( 'lament' )->fine; # 'camels' again
print $hc->deep->lament( 'fine' ); # more 'camels'
CAVEATS
This is experimental code. Don't be using it in, for example, a life support system, ATM or space shuttle.
AUTOLOAD
Data::Object::AutoWrap injects an AUTOLOAD handler into the package from which it is used. It doesn't care about any existing AUTOLOAD or any that might be provided by a superclass. Given that it's designed for the implementation of simple, value like objects this shouldn't be a problem - but you've been warned.
Performance
It's slow. Slow as molasses in an igloo. Last time I checked, the autogenerated accessors were something like fifteen times slower than the simplest hand-wrought accessor.
This can probably be improved.
CONFIGURATION AND ENVIRONMENT
Data::Object::AutoWrap requires no configuration files or environment variables.
DEPENDENCIES
None.
INCOMPATIBILITIES
None reported.
BUGS AND LIMITATIONS
Yes, probably.
Please report any bugs or feature requests to [email protected], or through the web interface at http://rt.cpan.org.
AUTHOR
Andy Armstrong <[email protected]>
LICENCE AND COPYRIGHT
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.
Copyright (c) 2008, Message Systems, Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name Message Systems, Inc. nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
259.47 WHQL @ Microsoft Update
Discussion in 'Videocards - NVIDIA GeForce Drivers Section' started by justinkw1, Aug 28, 2010.
How is this driver working for you?
1. Better than previous drivers
48.2%
2. About the same as previous drivers
34.1%
3. Worse than previous drivers
17.6%
1. Carfax (Ancient Guru, GPU: NVidia Titan Xp)
I loaded up Crysis and played a few levels, and I noticed a lot more framerate dips compared to the 259.32 drivers.
Switched back to 259.32, and the framerate dips and stutters disappeared, so it had to be the 259.47 drivers that were causing them.
2. Redemption80 (Ancient Guru, GPU: GALAX 970/ASUS 970)
Thanks for that, will keep these on the HTPC, seem very stable so far, but will stick with the 259.32 on my gaming one, since they have been the best drivers so far for my GTX460.
3. antonyfrn (Maha Guru, GPU: EVGA GTX 1070 FTW)
Going back to my old drivers; with these I'm getting menus that don't draw correctly.
4. tooli (New Member, GPU: Gigabyte 460GTX, 1GB)
Updated to 259.47, restarted, and DPC latency is still off the chart when streaming - exactly as it was with previous drivers.
5. CPC_RedDawn (Ancient Guru, GPU: 6800XT Nitro+ SE)
What is DPC latency...? Sorry if this seems noobish, but you don't know until you ask.
Might try these out. Will these be the WHQL driver that Nvidia is meant to release soon or is that going to be from the 260.xx family as I have heard a lot of people talking about a 260.xx driver release soon...
Thanks.
6. Kolt (Ancient Guru, GPU: RTX 2080 OC)
It's paranoia. That's what it is :p
7. PirateNeilsouth (Ancient Guru, GPU: 980 OC)
No, these don't have the gpu/stutter fixes meant for next week, nor do they have the Gothic 1-2 fixes due after. Still don't know why people don't wait till Monday-Wednesday.
8. TheHunter (Banned, GPU: MSi N570GTX TFIII [OC|PE])
these are marked as
over at laptop2go
9. slick3 (Ancient Guru, GPU: RTX 2070 +85/1200)
NVIDIA should take time and release a GOOD driver for the GTX 400 series. They are feeding us the same crap release after release. Fingers crossed for the next 260 series driver.
10. Carfax (Ancient Guru, GPU: NVidia Titan Xp)
Can you elaborate on the "stutter fixes?"
That explains why they are complete sh!t then.
The 259.32 drivers are actually very good, the best yet for my 480s..
11. justinkw1 (Member, GPU: 4x4GB G.skill Sniper 2133)
They are marked as HP/Compaq - OEM at LV2G (yes, I was the one who posted it there too) because of the package's NVBL.inf file which specifically supports the Quadro mobile GPUs for HP's business notebook line.
This package contains a second INF file that most of us will use due to compatibility, and that's the NV_WHQL.inf which makes compatibility universal for all GPUs since GeForce 6.
Last edited: Aug 29, 2010
12. narsnail (Guest)
Spectacular, us users with GTX 460s are having a lot of issues, and there has not been one solid driver release yet. I have yet to try these, but I expect nothing to be fixed. People complain about ATI drivers sucking, but my god Nvidia is far worse IMO.
13. okidna (Active Member, GPU: MSI GTX 1080 Ti)
What issues? Be more specific please..
460 user here, and I have nothing to complain about with the drivers.. 259.32 is a good driver..
14. zeroberto (Member, GPU: EVGA GTX460 768MB)
I have sound crackling issues with the previous whql and beta drivers, probably caused by high dpc latency. But this happens only when I listen to the music in winamp or in other players.
As soon as I uninstall the nvidia drivers everything is ok.
Will try this new beta...
15. PirateNeilsouth (Ancient Guru, GPU: 980 OC)
16. okidna (Active Member, GPU: MSI GTX 1080 Ti)
17. PirateNeilsouth (Ancient Guru, GPU: 980 OC)
Sigh, the Gothic problems relate to ALL nvidia cards!
The low gpu usage relates to ALL nvidia cards, including the GTX 2xx range.
The BfBc 2 stutter problems relate to the GTX 4xx range.
Won't keep explaining it, I'm in a ****ty mood today.
18. RedruM-X (Master Guru, GPU: MSI GTX 1070 Gaming X 8G)
You seem to be missing the point.
What should've stuck from his explanations is PHYSX > GPU ...
In other words, put it on GPU instead of CPU OR AUTO, of course!
AUTO will probably choose CPU, and you need to choose GPU, ALWAYS!
19. yosef019 (Ancient Guru, GPU: EVGA 1080 ti)
I don't miss anything. I play games at 45-70 fps, but in some places, out of nowhere, there are slowdowns to 15-20 fps; 259.47 fixes it a little, I think.
I have another problem: Singularity won't start with these drivers. Other than that, I don't know.
20. narsnail (Guest)
Issues with me personally, are multiple games crashing that did not when I had my 4870 installed. Crysis, Dragon Age, The Witcher, DOW II, the list goes on. I like how cool and quiet this card runs, but for stability sake, I like ATI better.
Express.js: Introduction to API
2 min read
API. What is it?
Application Programming Interface (API) is just an interface to interact with data. In the context of a full-stack web app, the frontend app requests data through the API, which is defined on the server side. Generally, the data is exchanged in JSON (JavaScript Object Notation) format.
Let's set up an API in Express. When I say setting up an API, it simply means handling user requests for specific routes.
const express = require('express');
const app = express();
const data = require("./data.js");
// This is what setting up an API means
app.get("/", (req, res) => {
// 'json()' sends response in JSON format
res.json(data);
});
app.listen(5000, () => console.log("Listening in Port 5000!!!"));
You might be familiar with the above code format if you have read my previous article about Express.js Basic Introduction. The extra things are:
• We import './data.js' which contains JSON data in it. Instead of using a database, we are using JS file for simplicity.
• When the user sends a request to '/' route using GET method(which happens by default), then the callback function of 'app.get()' executes.
• Now, instead of 'res.send()', we use 'res.json()' to send JSON data as response.
That's our basic API setup in Express, and you can set up an API for multiple routes using multiple HTTP methods (GET, POST, PUT, DELETE), as sketched below. That's all for now!
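For instance, here is a rough sketch (not from the original article) of one resource exposed through several HTTP methods; the '/users' route and the data shape are made up for illustration:
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies into req.body
// In-memory stand-in for a database
let users = [{ id: 1, name: 'Ada' }];
// GET: read the whole collection
app.get('/users', (req, res) => res.json(users));
// POST: create a new entry from the request body
app.post('/users', (req, res) => {
  users.push(req.body);
  res.status(201).json(req.body);
});
// DELETE: remove an entry by id
app.delete('/users/:id', (req, res) => {
  users = users.filter((u) => u.id !== Number(req.params.id));
  res.status(204).end();
});
app.listen(5000, () => console.log('Listening in Port 5000!!!'));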
Version 7.10.2
WsTranscodingProfile
Returns list of transcoding profiles for the given account
URL Format
/ws/ws_transcoding_profile/api/{app_token}/mode/json/apiv/5?start={start}&end={end}&excludeGlobal={true}&excludeParent={true}&name={name}
Parameters
Name           Type      Description
app_token      string    The API token for the given account. You can find the API token when navigating through account -> api accounts.
apiv           integer   The API version; only version 5 is supported.
excludeParent  string    If set to 'true', excludes the transcoding profiles from the parent account.
excludeGlobal  string    If set to 'true', excludes the global transcoding profiles.
name           string    Partial search based on the name of the transcoding profile.
start          int       Returns a subset of the available transcoding profiles. Used primarily for pagination. Default is 0.
end            integer   Returns a subset of the available transcoding profiles. Used primarily for pagination. Default is start+20. Zero-based exclusive index, e.g. &start=0&end=4 will return 5 transcoding profiles from the beginning.
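For example, a request for the first ten profiles whose names contain 'test' (abc123 is a made-up token shown purely for illustration) would be:
/ws/ws_transcoding_profile/api/abc123/mode/json/apiv/5?start=0&end=9&name=test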
Possible Responses
• 222 Transcoding profiles found
• 1044 Transcoding profiles not found
• 102 Invalid app token.
• 659 Invalid Client
Example Response
{
"response": {
"success": {
"code": 222,
"message": "Transcoding profiles found",
"details": ""
},
"WsTranscodingProfile": {
"transcodingProfile": [
{
"id": 777,
"name": "3037-watermarking-test",
"clientid": 73,
"sortnum": 0,
"defaultWidth": 0,
"maxWidth": 0,
"allowUpscale": true,
"addReferenceFile": true,
"transcodeReferenceFile": false,
"enableSourceFileRequirements": true
},
{
"id": 778,
"name": "3037-watermarking-test2",
"clientid": 73,
"sortnum": 0,
"defaultWidth": 0,
"maxWidth": 0,
"allowUpscale": true,
"addReferenceFile": true,
"transcodeReferenceFile": false,
"enableSourceFileRequirements": false
},
{
"id": 721,
"name": "Ar_ref_on_upsacale_on",
"clientid": 73,
"sortnum": 0,
"defaultWidth": 0,
"maxWidth": 0,
"allowUpscale": true,
"addReferenceFile": true,
"transcodeReferenceFile": false,
"enableSourceFileRequirements": false
}
],
"totalCount": "27",
"defaultVideoTranscodingProfile": {
"id": 621,
"name": "3. Universal (3 renditions)",
"clientid": 0,
"sortnum": 0,
"defaultWidth": 0,
"maxWidth": 0,
"allowUpscale": true,
"addReferenceFile": false,
"transcodeReferenceFile": false,
"enableSourceFileRequirements": false
},
"defaultAudioTranscodingProfile": {
"id": 621,
"name": "3. Universal (3 renditions)",
"clientid": 0,
"sortnum": 0,
"defaultWidth": 0,
"maxWidth": 0,
"allowUpscale": true,
"addReferenceFile": false,
"transcodeReferenceFile": false,
"enableSourceFileRequirements": false
}
}
}
}
Subtraction of Like Algebraic Terms
A mathematical operation of subtracting an algebraic term from its like algebraic term is called the subtraction of like algebraic terms.
Introduction
Like algebraic terms are often involved in subtraction, where one algebraic term is subtracted from another. Like algebraic terms have the same literal factor in common, so it is possible to subtract one algebraic term from another easily; their difference is also an algebraic term of the same form as the like algebraic terms.
Example
$2x^2y$ and $5x^2y$ are two like algebraic terms.
01
Display a Negative sign between terms
Suppose the algebraic term $5x^2y$ is to be subtracted from its like term $2x^2y$. Write $2x^2y$ first, then $5x^2y$, and place a negative sign $(-)$ between them to express the subtraction mathematically.
$2x^2y-5x^2y$
02
Obtain their difference
The like algebraic terms have a common literal factor $x^2y$. So, it can be taken common from them.
$\implies 2x^2y-5x^2y = (2-5)x^2y$
Now, subtract the numbers and multiply the difference by their common literal factor.
$\implies 2x^2y-5x^2y = -3x^2y$
The difference is an algebraic term of the same form as the like algebraic terms. This example proves that the difference of like algebraic terms is also a like algebraic term.
More Examples
Observe the following examples to know how to subtract an algebraic term from its like algebraic term mathematically.
$(1)\,\,\,\,\,\,$ $7a-5a$ $\,=\,$ $(7-5)a$ $\,=\,$ $2a$
$(2)\,\,\,\,\,\,$ $2bc-10bc$ $\,=\,$ $(2-10)bc$ $\,=\,$ $-8bc$
$(3)\,\,\,\,\,\,$ $3c^2-2c^2$ $\,=\,$ $(3-2)c^2$ $\,=\,$ $c^2$
$(4)\,\,\,\,\,\,$ $17d^3e^2f-23d^3e^2f$ $\,=\,$ $(17-23)d^3e^2f$ $\,=\,$ $-6d^3e^2f$
$(5)\,\,\,\,\,\,$ $5ghi-ghi$ $\,=\,$ $(5-1)ghi$ $\,=\,$ $4ghi$
next-sitemap
4.2.3 • Public • Published
Getting started
Installation
yarn add next-sitemap
Create config file
next-sitemap requires a basic config file (next-sitemap.config.js) under your project root
next-sitemap will load environment variables from .env files by default.
/** @type {import('next-sitemap').IConfig} */
module.exports = {
siteUrl: process.env.SITE_URL || 'https://example.com',
generateRobotsTxt: true, // (optional)
// ...other options
}
Building sitemaps
Add next-sitemap as your postbuild script
{
"build": "next build",
"postbuild": "next-sitemap"
}
Custom config file
You can also use a custom config file instead of next-sitemap.config.js. Just pass --config <your-config-file>.js to build command (Example: custom-config-file)
{
"build": "next build",
"postbuild": "next-sitemap --config awesome.config.js"
}
Building sitemaps with pnpm
When using pnpm you need to create a .npmrc file in the root of your project if you want to use a postbuild step:
//.npmrc
enable-pre-post-scripts=true
Index sitemaps (Optional)
📣 From next-sitemap v2.x onwards, sitemap.xml will be Index Sitemap. It will contain urls of all other generated sitemap endpoints.
Index sitemap generation can be turned off by setting generateIndexSitemap: false in the next-sitemap config file. (This is useful for small/hobby sites which do not require an index sitemap.) (Example: no-index-sitemaps)
Splitting large sitemap into multiple files
Define the sitemapSize property in next-sitemap.config.js to split large sitemap into multiple files.
/** @type {import('next-sitemap').IConfig} */
module.exports = {
siteUrl: 'https://example.com',
generateRobotsTxt: true,
sitemapSize: 7000,
}
Above is the minimal configuration to split a large sitemap. When the number of URLs in a sitemap is more than 7000, next-sitemap will create sitemap (e.g. sitemap-0.xml, sitemap-1.xml) and index (e.g. sitemap.xml) files.
Configuration Options
- siteUrl (string): Base url of your website.
- output (standalone | export, optional): Next.js output modes. Check documentation.
- changefreq (string, optional): Change frequency. Default daily.
- priority (number, optional): Priority. Default 0.7.
- sitemapBaseFileName (string, optional): The name of the generated sitemap file before the file extension. Default "sitemap".
- alternateRefs (AlternateRef[], optional): Denote multi-language support by unique URL. Default [].
- sitemapSize (number, optional): Split large sitemap into multiple files by specifying sitemap size. Default 5000.
- autoLastmod (boolean, optional): Add <lastmod/> property. Default true.
- exclude (string[], optional): Array of relative paths (wildcard pattern supported) to exclude from listing on sitemap.xml or sitemap-*.xml. e.g.: ['/page-0', '/page-*', '/private/*']. Apart from this option, next-sitemap also offers a custom transform option which could be used to exclude urls that match specific patterns.
- sourceDir (string, optional): next.js build directory. Default .next.
- outDir (string, optional): All the generated files will be exported to this directory. Default public.
- transform (async function, optional): A transformation function, which runs for each relative-path in the sitemap. Returning a null value from the transformation function will result in the exclusion of that specific path from the generated sitemap list.
- additionalPaths (async function, optional): Async function that returns a list of additional paths to be added to the generated sitemap list.
- generateIndexSitemap (boolean): Generate index sitemaps. Default true.
- generateRobotsTxt (boolean, optional): Generate a robots.txt file and list the generated sitemaps. Default false.
- robotsTxtOptions.transformRobotsTxt (async function, optional): Custom robots.txt transformer function. (Example: custom-robots-txt-transformer) Default: async (config, robotsTxt) => robotsTxt.
- robotsTxtOptions.policies (IRobotPolicy[], optional): Policies for generating robots.txt. Default: [{ userAgent: '*', allow: '/' }].
- robotsTxtOptions.additionalSitemaps (string[], optional): Options to add additional sitemaps to the robots.txt host entry.
- robotsTxtOptions.includeNonIndexSitemaps (boolean, optional): From v2.4x onwards, the generated robots.txt will only contain the url of the index sitemap and custom provided endpoints from robotsTxtOptions.additionalSitemaps. This is to prevent duplicate url submission (once through index-sitemap -> sitemap-url and once through robots.txt -> HOST). Set this option to true to add all generated sitemap endpoints to robots.txt. Default false (Recommended).
Custom transformation function
Custom transformation provides an extension method to add, remove or exclude paths or properties from a url-set. The transform function runs for each relative path in the sitemap, and the key: value object it returns is used to add properties to the XML.
Returning null value from the transformation function will result in the exclusion of that specific relative-path from the generated sitemap list.
/** @type {import('next-sitemap').IConfig} */
module.exports = {
transform: async (config, path) => {
// custom function to ignore the path
if (customIgnoreFunction(path)) {
return null
}
// only create changefreq along with path
// returning partial properties will result in generation of XML field with only returned values.
if (customLimitedField(path)) {
// This returns `path` & `changefreq`. Hence it will result in the generation of XML field with `path` and `changefreq` properties only.
return {
loc: path, // => this will be exported as http(s)://<config.siteUrl>/<path>
changefreq: 'weekly',
}
}
// Use default transformation for all other cases
return {
loc: path, // => this will be exported as http(s)://<config.siteUrl>/<path>
changefreq: config.changefreq,
priority: config.priority,
lastmod: config.autoLastmod ? new Date().toISOString() : undefined,
alternateRefs: config.alternateRefs ?? [],
}
},
}
Additional paths function
The additionalPaths function can be useful if you have a large list of pages, but you don't want to render them all and use fallback: true. The result of executing this function will be added to the general list of paths and processed with sitemapSize. You are free to add dynamic paths, but unlike additionalSitemap, you do not need to split the list of paths into different files in case there are a lot of paths for one file.
If your function returns a path that already exists, then it will simply be updated, duplication will not happen.
/** @type {import('next-sitemap').IConfig} */
module.exports = {
additionalPaths: async (config) => {
const result = []
// required value only
result.push({ loc: '/additional-page-1' })
// all possible values
result.push({
loc: '/additional-page-2',
changefreq: 'yearly',
priority: 0.7,
lastmod: new Date().toISOString(),
// acts only on '/additional-page-2'
alternateRefs: [
{
href: 'https://es.example.com',
hreflang: 'es',
},
{
href: 'https://fr.example.com',
hreflang: 'fr',
},
],
})
// using transformation from the current configuration
result.push(await config.transform(config, '/additional-page-3'))
return result
},
}
Google News, image and video sitemap
Url set can contain additional sitemaps defined by google. These are Google News sitemap, image sitemap or video sitemap. You can add the values for these sitemaps by updating entry in transform function or adding it with additionalPaths. You have to return a sitemap entry in both cases, so it's the best place for updating the output. This example will add an image and news tag to each entry but IRL you would of course use it with some condition or within additionalPaths result.
/** @type {import('next-sitemap').IConfig} */
const config = {
transform: async (config, path) => {
return {
loc: path, // => this will be exported as http(s)://<config.siteUrl>/<path>
changefreq: config.changefreq,
priority: config.priority,
lastmod: config.autoLastmod ? new Date().toISOString() : undefined,
images: [{ loc: 'https://example.com/image.jpg' }],
news: {
title: 'Article 1',
publicationName: 'Google Scholar',
publicationLanguage: 'en',
date: new Date(),
},
}
},
}
export default config
Full configuration example
Here's an example next-sitemap.config.js configuration with all options
/** @type {import('next-sitemap').IConfig} */
module.exports = {
siteUrl: 'https://example.com',
changefreq: 'daily',
priority: 0.7,
sitemapSize: 5000,
generateRobotsTxt: true,
exclude: ['/protected-page', '/awesome/secret-page'],
alternateRefs: [
{
href: 'https://es.example.com',
hreflang: 'es',
},
{
href: 'https://fr.example.com',
hreflang: 'fr',
},
],
// Default transformation function
transform: async (config, path) => {
return {
loc: path, // => this will be exported as http(s)://<config.siteUrl>/<path>
changefreq: config.changefreq,
priority: config.priority,
lastmod: config.autoLastmod ? new Date().toISOString() : undefined,
alternateRefs: config.alternateRefs ?? [],
}
},
additionalPaths: async (config) => [
await config.transform(config, '/additional-page'),
],
robotsTxtOptions: {
policies: [
{
userAgent: '*',
allow: '/',
},
{
userAgent: 'test-bot',
allow: ['/path', '/path-2'],
},
{
userAgent: 'black-listed-bot',
disallow: ['/sub-path-1', '/path-2'],
},
],
additionalSitemaps: [
'https://example.com/my-custom-sitemap-1.xml',
'https://example.com/my-custom-sitemap-2.xml',
'https://example.com/my-custom-sitemap-3.xml',
],
},
}
Above configuration will generate sitemaps based on your project and a robots.txt like this.
# *
User-agent: *
Allow: /
# test-bot
User-agent: test-bot
Allow: /path
Allow: /path-2
# black-listed-bot
User-agent: black-listed-bot
Disallow: /sub-path-1
Disallow: /path-2
# Host
Host: https://example.com
# Sitemaps
Sitemap: https://example.com/sitemap.xml # Index sitemap
Sitemap: https://example.com/my-custom-sitemap-1.xml
Sitemap: https://example.com/my-custom-sitemap-2.xml
Sitemap: https://example.com/my-custom-sitemap-3.xml
Generating dynamic/server-side sitemaps
next-sitemap now provides two APIs to generate server side sitemaps. This will help to dynamically generate index-sitemap(s) and sitemap(s) by sourcing data from CMS or custom source.
• getServerSideSitemapIndex: Generates index sitemaps based on urls provided and returns application/xml response. Supports next13+ route.{ts,js} file.
• To continue using inside pages directory, import getServerSideSitemapIndexLegacy instead.
• getServerSideSitemap: Generates sitemap based on field entires and returns application/xml response. Supports next13+ route.{ts,js} file.
• To continue using inside pages directory, import getServerSideSitemapLegacy instead.
Server side index-sitemaps (getServerSideSitemapIndex)
Here's a sample script to generate index-sitemap on server side.
1. Index sitemap (app directory)
Create app/server-sitemap-index.xml/route.ts file.
// app/server-sitemap-index.xml/route.ts
import { getServerSideSitemapIndex } from 'next-sitemap'
export async function GET(request: Request) {
// Method to source urls from cms
// const urls = await fetch('https//example.com/api')
return getServerSideSitemapIndex([
'https://example.com/path-1.xml',
'https://example.com/path-2.xml',
])
}
2. Index sitemap (pages directory) (legacy)
Create pages/server-sitemap-index.xml/index.tsx file.
// pages/server-sitemap-index.xml/index.tsx
import { getServerSideSitemapIndexLegacy } from 'next-sitemap'
import { GetServerSideProps } from 'next'
export const getServerSideProps: GetServerSideProps = async (ctx) => {
// Method to source urls from cms
// const urls = await fetch('https//example.com/api')
return getServerSideSitemapIndexLegacy(ctx, [
'https://example.com/path-1.xml',
'https://example.com/path-2.xml',
])
}
// Default export to prevent next.js errors
export default function SitemapIndex() {}
Exclude server index sitemap from robots.txt
Now, next.js is serving the dynamic index-sitemap from http://localhost:3000/server-sitemap-index.xml.
List the dynamic sitemap page in robotsTxtOptions.additionalSitemaps and exclude this path from static sitemap list.
// next-sitemap.config.js
/** @type {import('next-sitemap').IConfig} */
module.exports = {
siteUrl: 'https://example.com',
generateRobotsTxt: true,
exclude: ['/server-sitemap-index.xml'], // <= exclude here
robotsTxtOptions: {
additionalSitemaps: [
'https://example.com/server-sitemap-index.xml', // <==== Add here
],
},
}
In this way, next-sitemap will manage the sitemaps for all your static pages and your dynamic index-sitemap will be listed on robots.txt.
server side sitemap (getServerSideSitemap)
Here's a sample script to generate sitemaps on server side.
1. Sitemaps (app directory)
Create app/server-sitemap.xml/route.ts file.
// app/server-sitemap.xml/route.ts
import { getServerSideSitemap } from 'next-sitemap'
export async function GET(request: Request) {
// Method to source urls from cms
// const urls = await fetch('https//example.com/api')
return getServerSideSitemap([
{
loc: 'https://example.com',
lastmod: new Date().toISOString(),
// changefreq
// priority
},
{
loc: 'https://example.com/dynamic-path-2',
lastmod: new Date().toISOString(),
// changefreq
// priority
},
])
}
2. Sitemaps (pages directory) (legacy)
Create pages/server-sitemap.xml/index.tsx file.
// pages/server-sitemap.xml/index.tsx
import { getServerSideSitemapLegacy } from 'next-sitemap'
import { GetServerSideProps } from 'next'
export const getServerSideProps: GetServerSideProps = async (ctx) => {
// Method to source urls from cms
// const urls = await fetch('https//example.com/api')
const fields = [
{
loc: 'https://example.com', // Absolute url
lastmod: new Date().toISOString(),
// changefreq
// priority
},
{
loc: 'https://example.com/dynamic-path-2', // Absolute url
lastmod: new Date().toISOString(),
// changefreq
// priority
},
]
return getServerSideSitemapLegacy(ctx, fields)
}
// Default export to prevent next.js errors
export default function Sitemap() {}
Now, next.js is serving the dynamic sitemap from http://localhost:3000/server-sitemap.xml.
List the dynamic sitemap page in robotsTxtOptions.additionalSitemaps and exclude this path from static sitemap list.
// next-sitemap.config.js
/** @type {import('next-sitemap').IConfig} */
module.exports = {
siteUrl: 'https://example.com',
generateRobotsTxt: true,
exclude: ['/server-sitemap.xml'], // <= exclude here
robotsTxtOptions: {
additionalSitemaps: [
'https://example.com/server-sitemap.xml', // <==== Add here
],
},
}
In this way, next-sitemap will manage the sitemaps for all your static pages and your dynamic sitemap will be listed on robots.txt.
Typescript JSDoc
Add the following line of code in your next-sitemap.config.js for nice typescript autocomplete! 💖
/** @type {import('next-sitemap').IConfig} */
module.exports = {
// YOUR CONFIG
}
Contribution
All PRs are welcome :)
Why does Messages app take up so much space? (2 votes, 2 answers, 5k views)
I was recently inspecting the Usage tab in my iPhone's settings. The Messages app is #2 on the list with almost 700 MB. Anecdotal reports have it that the app can take up multiple GB of space. So what ...
Is there a “windirstat”/“Disk Inventory X” type iOS app? (ie app for visualizing hard drive usage on iOS) (0 votes, 0 answers, 153 views)
This is a pretty simple question. Assuming you know what WinDirStat &/or Disc Inventory X do, which is visualize the usage of your hard drive on a file-by-file basis, sized in proportion to the ...
View music size by artist on iOS (1 vote, 1 answer, 50 views)
I'm trying to free up space on an iPhone. Most of the space is used by music. Is there a way to see which artists are using the most amount of space? Logically this would be displayed under "System ...
Is it safe to delete ipsw files? (0 votes, 2 answers, 977 views)
I just updated my iPod Touch to iOS 5.1. During the process, iTunes downloaded a 750MB file: ~/Library/iTunes/iPod Software Updates/iPod4,1_5.1_9B176_Restore.ipsw. I have no use for this file anymore, ...
Where does iTunes save iOS updates? (8 votes, 3 answers, 40k views)
When I choose to install a new version of iOS on my device via iTunes, the installation file is huge (e.g. 750 MB). Where does iTunes save this file, and does it get deleted after the installation is ...
Simple way to see a list of installed iOS apps with their storage space size? (19 votes, 5 answers, 37k views)
The storage on my iPhone is nearly full. Most of the space is used by apps. I'd like to see which apps consume the most space on my device. Is there a way to get a list of the apps with their ...
White Paper
COVID-19 Clicks: How Phishing Capitalized On A Global Crisis
Source: Webroot
As a cybercriminal tactic, phishing is not new. In fact, one of the very first records of the term appeared in an early internet “cracking” application in January of 1996. Despite its age, phishing continues to be one of the most pervasive cyber threats individuals and businesses face. When technology moves at today’s astonishing rates, why is such an old method of internet trickery still so common? The answer is simple: because it’s still wildly successful. Perhaps the more important question, then, is: why are people still clicking?
We surveyed 7,000 office workers in the United States, United Kingdom, Australia/New Zealand, Germany, France, Italy and Japan on their understanding of phishing, their email and click habits, and how their online lives have changed since the beginning of the COVID-19 pandemic. First, we compared our new data with answers from our survey last year, featured in the report Hook, Line, and Sinker: Why Phishing Attacks Work. We then worked with Dr. Prashanth Rajivan, assistant professor at the University of Washington, to get his take on why 8 in 10 people worldwide claim to take adequate steps to determine the legitimacy of emails, yet 3 in 10 admit to having fallen for a phishing scam in the last year.
According to Dr. Rajivan, what we need to consider is that human beings aren’t necessarily good at dealing with uncertainty, which is part of why cybercriminals capitalize on upheaval (such as a global pandemic) to launch attacks.
In this report, we’ll dive into the survey results, present insights and analysis from Dr. Rajivan and our own cybersecurity experts and reveal real-world concerns from workers around the globe. Finally, we’ll offer steps to help businesses and individuals stay resilient against phishing attacks.
Proof of Stake
PoS
Proof of stake is a protocol designed to reach distributed consensus without the need for an energy-intensive mining process. It is used by cryptocurrencies to validate transactions, append them to the blockchain, and generate new coins in circulation. The nodes on a PoS network are staked by 'forgers' or 'minters'. Proof of Stake was developed as an alternative to Proof of Work (PoW) (what is Proof of Work?), an energy-intensive process which involves setting up GPUs or ASIC miners globally to ensure distributed consensus.
The concept of nodes is important here because unlike miners which are sometimes also called ‘nodes’ in the blockchain, the node in PoS is a server which validates the transactions on the network. The percentage of coins that each node holds determines the reward for that node. In Proof of Stake (PoS), the coins are ‘forged’ or ‘minted’ not mined as on the PoW protocol.
When a transaction is sent across a PoS network, the coins in circulation are forged by the nodes when they validate and confirm the transactions. Forging involves creating a duplicate copy of the coins, it is also the process by which new coins are ‘minted’, as cryptocurrency tokens are all fungible it is the process of generating new coins. PoS algorithms are designed to reward these newly minted coins to the forgers or nodes according to their stake in the network. The nodes which process a malicious block or transaction can be voted-out or boycotted by the other nodes as well. Peercoin, Nxt and Ardor are some of the most popular and oldest cryptocurrencies to incorporate the PoS algorithm.
Advantages and Drawbacks of PoS
Proof of Stake is an environmentally friendly algorithm; the economics and the coin-generation process are intertwined in an intelligent way to ensure distributed consensus. A node on a PoS blockchain requires nothing more than a small IC or a Raspberry Pi running a small application. The PoS system is so energy efficient that a node can be powered by a USB port alone.
The Proof of Stake (PoS) system also has some drawbacks that have to be carefully mitigated by the development team to ensure fair economic activity on the blockchain. The best known of these is the 51% attack. This happens when an individual stakeholder acquires more than 50% of the tokens in circulation, which gives that individual centralized power over the network. Having majority control, the stakeholder can vote to alter the blockchain at will.
PoS, the future of Ethereum
Ethereum is the most popular smart-contract development platform, and the distribution of Ether tokens is widespread enough to ensure decentralisation. The network has outlined a new design and plans to shift from Proof of Work (PoW) to Proof of Stake (PoS) to compete with EOS, Cardano and Tron, whose PoS-style networks offer near-instantaneous transactions with negligible fees.
/**
 * BSD-style license; for more info see http://pmd.sourceforge.net/license.html
 */
package net.sourceforge.pmd.lang.symboltable;


/**
 * This is a declaration of a name, e.g. a variable or method name.
 * See {@link AbstractNameDeclaration} for a base class.
 */
public interface NameDeclaration {

    /**
     * Gets the node which manifests the declaration.
     * @return the node
     */
    ScopedNode getNode();

    /**
     * Gets the image of the node. This is usually the name of the declaration
     * such as the variable name.
     * @return the image
     * @see #getName()
     */
    String getImage();

    /**
     * Gets the scope in which this name has been declared.
     * @return the scope
     */
    Scope getScope();

    /**
     * Gets the name of the declaration, such as the variable name.
     * @return the name
     */
    String getName();
}
/[pcsx2_0.9.7]/trunk/pcsx2/Vif_Codes.cpp
Contents of /trunk/pcsx2/Vif_Codes.cpp
Revision 280 - (show annotations) (download)
Thu Dec 23 12:02:12 2010 UTC (9 years, 6 months ago) by william
File size: 20514 byte(s)
re-commit (had local access denied errors when committing)
1 /* PCSX2 - PS2 Emulator for PCs
2 * Copyright (C) 2002-2010 PCSX2 Dev Team
3 *
4 * PCSX2 is free software: you can redistribute it and/or modify it under the terms
5 * of the GNU Lesser General Public License as published by the Free Software Found-
6 * ation, either version 3 of the License, or (at your option) any later version.
7 *
8 * PCSX2 is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
9 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
10 * PURPOSE. See the GNU General Public License for more details.
11 *
12 * You should have received a copy of the GNU General Public License along with PCSX2.
13 * If not, see <http://www.gnu.org/licenses/>.
14 */
15
16 #include "PrecompiledHeader.h"
17 #include "Common.h"
18 #include "GS.h"
19 #include "Gif.h"
20 #include "Vif_Dma.h"
21 #include "newVif.h"
22 #include "VUmicro.h"
23
24 #define vifOp(vifCodeName) _vifT int __fastcall vifCodeName(int pass, const u32 *data)
25 #define pass1 if (pass == 0)
26 #define pass2 if (pass == 1)
27 #define pass3 if (pass == 2)
28 #define vif1Only() { if (!idx) return vifCode_Null<idx>(pass, (u32*)data); }
29 vifOp(vifCode_Null);
30
31 //------------------------------------------------------------------
32 // Vif0/Vif1 Misc Functions
33 //------------------------------------------------------------------
34
35 static __fi void vifFlush(int idx) {
36 if (!idx) vif0FLUSH();
37 else vif1FLUSH();
38 }
39
40 static __fi void vuExecMicro(int idx, u32 addr) {
41 VIFregisters& vifRegs = vifXRegs;
42 int startcycles = 0;
43 //vifFlush(idx);
44
45 //if(vifX.vifstalled == true) return;
46
47 if (vifRegs.itops > (idx ? 0x3ffu : 0xffu)) {
48 Console.WriteLn("VIF%d ITOP overrun! %x", idx, vifRegs.itops);
49 vifRegs.itops &= (idx ? 0x3ffu : 0xffu);
50 }
51
52 vifRegs.itop = vifRegs.itops;
53
54 if (idx) {
55 // in case we're handling a VIF1 execMicro, set the top with the tops value
56 vifRegs.top = vifRegs.tops & 0x3ff;
57
58 // is DBF flag set in VIF_STAT?
59 if (vifRegs.stat.DBF) {
60 // it is, so set tops with base, and clear the stat DBF flag
61 vifRegs.tops = vifRegs.base;
62 vifRegs.stat.DBF = false;
63 }
64 else {
65 // it is not, so set tops with base + offset, and set stat DBF flag
66 vifRegs.tops = vifRegs.base + vifRegs.ofst;
67 vifRegs.stat.DBF = true;
68 }
69 }
70
71 if(!idx)startcycles = VU0.cycle;
72 else startcycles = VU1.cycle;
73
74 if (!idx) vu0ExecMicro(addr);
75 else vu1ExecMicro(addr);
76
77 if(!idx) { g_vu0Cycles += (VU0.cycle-startcycles) * BIAS; g_packetsizeonvu = vif0.vifpacketsize; }
78 else { g_vu1Cycles += (VU1.cycle-startcycles) * BIAS; g_packetsizeonvu = vif1.vifpacketsize; }
79 //DevCon.Warning("Ran VU%x, VU0 Cycles %x, VU1 Cycles %x", idx, g_vu0Cycles, g_vu1Cycles);
80 GetVifX.vifstalled = true;
81 }
82
83 u8 schedulepath3msk = 0;
84
85 void Vif1MskPath3() {
86
87 vif1Regs.mskpath3 = schedulepath3msk & 0x1;
88 GIF_LOG("VIF MSKPATH3 %x gif str %x path3 status %x", vif1Regs.mskpath3, gifch.chcr.STR, GSTransferStatus.PTH3);
89 gifRegs.stat.M3P = vif1Regs.mskpath3;
90
91 if (!vif1Regs.mskpath3)
92 {
93 //if(GSTransferStatus.PTH3 > TRANSFER_MODE && gif->chcr.STR) GSTransferStatus.PTH3 = TRANSFER_MODE;
94 //DevCon.Warning("Mask off");
95 //if(GSTransferStatus.PTH3 >= PENDINGSTOP_MODE) GSTransferStatus.PTH3 = IDLE_MODE;
96 if(gifRegs.stat.P3Q)
97 {
98 gsInterrupt();//gsInterrupt();
99 }
100
101 }// else if(!gif->chcr.STR && GSTransferStatus.PTH3 == IDLE_MODE) GSTransferStatus.PTH3 = STOPPED_MODE;//else DevCon.Warning("Mask on");
102
103 schedulepath3msk = 0;
104 }
105
106 //------------------------------------------------------------------
107 // Vif0/Vif1 Code Implementations
108 //------------------------------------------------------------------
109
110 vifOp(vifCode_Base) {
111 vif1Only();
112 pass1 { vif1Regs.base = vif1Regs.code & 0x3ff; vif1.cmd = 0; }
113 pass3 { VifCodeLog("Base"); }
114 return 0;
115 }
116
117 extern bool SIGNAL_IMR_Pending;
118 static __aligned16 u32 partial_write[4];
119 static uint partial_count = 0;
120
121 template<int idx> __fi int _vifCode_Direct(int pass, const u8* data, bool isDirectHL) {
122 pass1 {
123 vif1Only();
124 int vifImm = (u16)vif1Regs.code;
125 vif1.tag.size = vifImm ? (vifImm*4) : (65536*4);
126 vif1.vifstalled = true;
127 gifRegs.stat.P2Q = true;
128 if (gifRegs.stat.PSE) // temporarily stop
129 {
130 Console.WriteLn("Gif dma temp paused? VIF DIRECT");
131 vif1.GifWaitState = 3;
132 vif1Regs.stat.VGW = true;
133 }
134 //Should cause this to split here to try and time PATH3 right.
135 return 0;
136 }
137 pass2 {
138 vif1Only();
139
140 if (GSTransferStatus.PTH3 < IDLE_MODE || gifRegs.stat.P1Q == true)
141 {
142 if(gifRegs.stat.APATH == GIF_APATH2 || ((GSTransferStatus.PTH3 <= IMAGE_MODE && gifRegs.stat.IMT && (vif1.cmd & 0x7f) == 0x50)) && gifRegs.stat.P1Q == false)
143 {
144 //Do nothing, allow it
145 vif1Regs.stat.VGW = false;
146 //if(gifRegs.stat.APATH != GIF_APATH2)DevCon.Warning("Continue DIRECT/HL %x P3 %x APATH %x P1Q %x", vif1.cmd, GSTransferStatus.PTH3, gifRegs.stat.APATH, gifRegs.stat.P1Q);
147 }
148 else
149 {
150 //DevCon.Warning("Stall DIRECT/HL %x P3 %x APATH %x P1Q %x", vif1.cmd, GSTransferStatus.PTH3, gifRegs.stat.APATH, gifRegs.stat.P1Q);
151 vif1Regs.stat.VGW = true; // PATH3 is in image mode (DIRECTHL), or busy (BOTH no IMT)
152 vif1.GifWaitState = 0;
153 vif1.vifstalled = true;
154 return 0;
155 }
156 }
157 if(SIGNAL_IMR_Pending == true)
158 {
159 //DevCon.Warning("Path 2 Paused (At start)");
160 vif1.vifstalled = true;
161 return 0;
162 }
163 if (gifRegs.stat.PSE) // temporarily stop
164 {
165 Console.WriteLn("Gif dma temp paused? VIF DIRECT");
166 vif1.GifWaitState = 3;
167 vif1.vifstalled = true;
168 vif1Regs.stat.VGW = true;
169 return 0;
170 }
171
172 // HACK ATTACK!
173 // we shouldn't be clearing the queue flag here at all. Ideally, the queue statuses
174 // should be checked, handled, and cleared from the EOP check in GIFPath only. --air
175 gifRegs.stat.clear_flags(GIF_STAT_P2Q);
176
177 uint minSize = aMin(vif1.vifpacketsize, vif1.tag.size);
178 uint ret;
179
180 if(minSize < 4 || partial_count > 0)
181 {
182 // When TTE==1, the VIF might end up sending us 8-byte packets instead of the usual 16-byte
183 // variety, if DIRECT tags cross chain dma boundaries. The actual behavior of real hardware
184 // is unknown at this time, but it seems that games *only* ever try to upload zero'd data
185 // in this situation.
186 //
187 // Games that use TTE==1 and DIRECT in this fashion: ICO
188 //
189 // Because DIRECT normally has a strict QWC alignment requirement, and this funky behavior
190 // only seems to happen on TTE mode transfers with their split-64-bit packets, there shouldn't
191 // be any need to worry about queuing more than 16 bytes of data,
192 //
193
194
195
196 ret = 0;
197 minSize = aMin(minSize, 4-partial_count);
198 for( uint i=0; i<(minSize & 3); ++i)
199 {
200 partial_write[partial_count++] = ((u32*)data)[i];
201 ret++;
202 }
203
204 pxAssume( partial_count <= 4 );
205
206 if (partial_count == 4)
207 {
208 GetMTGS().PrepDataPacket(GIF_PATH_2, 1);
209 GIFPath_CopyTag(GIF_PATH_2, (u128*)partial_write, 1);
210 GetMTGS().SendDataPacket();
211 partial_count = 0;
212 }
213 }
214 else
215 {
216 if (!minSize)
217 DevCon.Warning("VIF DIRECT (PATH2): No Data Transfer?");
218
219 // TTE=1 mode is the only time we should be getting DIRECT packet sizes that are
220 // not a multiple of QWC, and those are assured to be under 128 bits in size.
221 // So if this assert is triggered then it probably means something else is amiss.
222 pxAssertMsg((minSize & 3) == 0, "DIRECT packet size is not a multiple of QWC." );
223
224 GetMTGS().PrepDataPacket(GIF_PATH_2, minSize/4);
225 ret = GIFPath_CopyTag(GIF_PATH_2, (u128*)data, minSize/4)*4;
226 GetMTGS().SendDataPacket();
227 }
228
229 vif1.tag.size -= ret;
230
231 if(vif1.tag.size == 0)
232 {
233 vif1.cmd = 0;
234 }
235 vif1.vifstalled = true;
236 return ret;
237 }
238 return 0;
239 }
240
241 vifOp(vifCode_Direct) {
242 pass3 { VifCodeLog("Direct"); }
243 return _vifCode_Direct<idx>(pass, (u8*)data, 0);
244 }
245
246 vifOp(vifCode_DirectHL) {
247 pass3 { VifCodeLog("DirectHL"); }
248 return _vifCode_Direct<idx>(pass, (u8*)data, 1);
249 }
250
251 // ToDo: FixMe
252 vifOp(vifCode_Flush) {
253 vif1Only();
254 vifStruct& vifX = GetVifX;
255 pass1 { vifFlush(idx); vifX.cmd = 0; }
256 pass3 { VifCodeLog("Flush"); }
257 return 0;
258 }
259
260 // ToDo: FixMe
261 vifOp(vifCode_FlushA) {
262 vif1Only();
263 vifStruct& vifX = GetVifX;
264 pass1 {
265 vifFlush(idx);
266 // Gif is already transferring so wait for it.
267 if (gifRegs.stat.P1Q || GSTransferStatus.PTH3 <= PENDINGSTOP_MODE) {
268 //DevCon.Warning("VIF FlushA Wait MSK = %x", vif1Regs.mskpath3);
269 //
270
271 //DevCon.WriteLn("FlushA path3 Wait! PTH3 MD %x STR %x", GSTransferStatus.PTH3, gif->chcr.STR);
272 vif1Regs.stat.VGW = true;
273 vifX.GifWaitState = 1;
274 vifX.vifstalled = true;
275 } // else DevCon.WriteLn("FlushA path3 no Wait! PTH3 MD %x STR %x", GSTransferStatus.PTH3, gif->chcr.STR);
276
277 vifX.cmd = 0;
278 }
279 pass3 { VifCodeLog("FlushA"); }
280 return 0;
281 }
282
283 // ToDo: FixMe
284 vifOp(vifCode_FlushE) {
285 vifStruct& vifX = GetVifX;
286 pass1 { vifFlush(idx); vifX.cmd = 0; }
287 pass3 { VifCodeLog("FlushE"); }
288 return 0;
289 }
290
291 vifOp(vifCode_ITop) {
292 pass1 { vifXRegs.itops = vifXRegs.code & 0x3ff; GetVifX.cmd = 0; }
293 pass3 { VifCodeLog("ITop"); }
294 return 0;
295 }
296
297 vifOp(vifCode_Mark) {
298 vifStruct& vifX = GetVifX;
299 pass1 {
300 vifXRegs.mark = (u16)vifXRegs.code;
301 vifXRegs.stat.MRK = true;
302 vifX.cmd = 0;
303 }
304 pass3 { VifCodeLog("Mark"); }
305 return 0;
306 }
307
308 static __fi void _vifCode_MPG(int idx, u32 addr, const u32 *data, int size) {
309 VURegs& VUx = idx ? VU1 : VU0;
310 pxAssume(VUx.Micro > 0);
311
312 if (memcmp_mmx(VUx.Micro + addr, data, size*4)) {
313 // Clear VU memory before writing!
314 // (VUs expect size to be 32-bit scale, same as VIF's internal working sizes)
315 if (!idx) CpuVU0->Clear(addr, size);
316 else CpuVU1->Clear(addr, size);
317 memcpy_fast(VUx.Micro + addr, data, size*4);
318 }
319 }
320
321 vifOp(vifCode_MPG) {
322 vifStruct& vifX = GetVifX;
323 pass1 {
324 int vifNum = (u8)(vifXRegs.code >> 16);
325 vifX.tag.addr = (u16)(vifXRegs.code << 3) & (idx ? 0x3fff : 0xfff);
326 vifX.tag.size = vifNum ? (vifNum*2) : 512;
327 //vifFlush(idx);
328 return 1;
329 }
330 pass2 {
331 if (vifX.vifpacketsize < vifX.tag.size) { // Partial Transfer
332 if((vifX.tag.addr + vifX.vifpacketsize*4) > (idx ? 0x4000 : 0x1000)) {
333 DevCon.Warning("Vif%d MPG Split Overflow", idx);
334 }
335 _vifCode_MPG(idx, vifX.tag.addr, data, vifX.vifpacketsize);
336 vifX.tag.addr += vifX.vifpacketsize * 4;
337 vifX.tag.size -= vifX.vifpacketsize;
338 return vifX.vifpacketsize;
339 }
340 else { // Full Transfer
341 if((vifX.tag.addr + vifX.tag.size*4) > (idx ? 0x4000 : 0x1000)) {
342 DevCon.Warning("Vif%d MPG Split Overflow", idx);
343 }
344 _vifCode_MPG(idx, vifX.tag.addr, data, vifX.tag.size);
345 int ret = vifX.tag.size;
346 vifX.tag.size = 0;
347 vifX.cmd = 0;
348 return ret;
349 }
350 }
351 pass3 { VifCodeLog("MPG"); }
352 return 0;
353 }
354
355 vifOp(vifCode_MSCAL) {
356 vifStruct& vifX = GetVifX;
357 pass1 { vifFlush(idx); vuExecMicro(idx, (u16)(vifXRegs.code) << 3); vifX.cmd = 0;}
358 pass3 { VifCodeLog("MSCAL"); }
359 return 0;
360 }
361
362 vifOp(vifCode_MSCALF) {
363 vifStruct& vifX = GetVifX;
364 pass1 { vifFlush(idx); vuExecMicro(idx, (u16)(vifXRegs.code) << 3); vifX.cmd = 0; }
365 pass3 { VifCodeLog("MSCALF"); }
366 return 0;
367 }
368
369 vifOp(vifCode_MSCNT) {
370 vifStruct& vifX = GetVifX;
371 pass1 { vifFlush(idx); vuExecMicro(idx, -1); vifX.cmd = 0; }
372 pass3 { VifCodeLog("MSCNT"); }
373 return 0;
374 }
375
376 // ToDo: FixMe
377 vifOp(vifCode_MskPath3) {
378 vif1Only();
379 pass1 {
380 //I Hate the timing sensitivity of this stuff
381 if (vif1ch.chcr.STR && vif1.lastcmd != 0x13) {
382 schedulepath3msk = 0x10 | ((vif1Regs.code >> 15) & 0x1);
383 }
384 else
385 {
386 schedulepath3msk = (vif1Regs.code >> 15) & 0x1;
387 Vif1MskPath3();
388 }
389 if(vif1ch.chcr.STR)vif1.vifstalled = true;
390 vif1.cmd = 0;
391 }
392 pass3 { VifCodeLog("MskPath3"); }
393 return 0;
394 }
395
396 vifOp(vifCode_Nop) {
397 pass1 { GetVifX.cmd = 0; }
398 pass3 { VifCodeLog("Nop"); }
399 return 0;
400 }
401
402 // ToDo: Review Flags
403 vifOp(vifCode_Null) {
404 vifStruct& vifX = GetVifX;
405 pass1 {
406 // if ME1, then force the vif to interrupt
407 if (!(vifXRegs.err.ME1)) { // Ignore vifcode and tag mismatch error
408 Console.WriteLn("Vif%d: Unknown VifCmd! [%x]", idx, vifX.cmd);
409 vifXRegs.stat.ER1 = true;
410 vifX.vifstalled = true;
411 //vifX.irq++;
412 }
413 vifX.cmd = 0;
414 }
415 pass2 { Console.Error("Vif%d bad vifcode! [CMD = %x]", idx, vifX.cmd); }
416 pass3 { VifCodeLog("Null"); }
417 return 0;
418 }
419
420 vifOp(vifCode_Offset) {
421 vif1Only();
422 pass1 {
423 vif1Regs.stat.DBF = false;
424 vif1Regs.ofst = vif1Regs.code & 0x3ff;
425 vif1Regs.tops = vif1Regs.base;
426 vif1.cmd = 0;
427 }
428 pass3 { VifCodeLog("Offset"); }
429 return 0;
430 }
431
432 template<int idx> static __fi int _vifCode_STColRow(const u32* data, u32* pmem2) {
433 vifStruct& vifX = GetVifX;
434
435 int ret = min(4 - vifX.tag.addr, vifX.vifpacketsize);
436 pxAssume(vifX.tag.addr < 4);
437 pxAssume(ret > 0);
438
439 switch (ret) {
440 case 4:
441 pmem2[3] = data[3];
442 case 3:
443 pmem2[2] = data[2];
444 case 2:
445 pmem2[1] = data[1];
446 case 1:
447 pmem2[0] = data[0];
448 break;
449 jNO_DEFAULT
450 }
451
452 vifX.tag.addr += ret;
453 vifX.tag.size -= ret;
454 if (!vifX.tag.size) vifX.cmd = 0;
455
456 return ret;
457 }
458
459 vifOp(vifCode_STCol) {
460 vifStruct& vifX = GetVifX;
461 pass1 {
462 vifX.tag.addr = 0;
463 vifX.tag.size = 4;
464 return 1;
465 }
466 pass2 {
467 return _vifCode_STColRow<idx>(data, &vifX.MaskCol._u32[vifX.tag.addr]);
468 }
469 pass3 { VifCodeLog("STCol"); }
470 return 0;
471 }
472
473 vifOp(vifCode_STRow) {
474 vifStruct& vifX = GetVifX;
475
476 pass1 {
477 vifX.tag.addr = 0;
478 vifX.tag.size = 4;
479 return 1;
480 }
481 pass2 {
482 return _vifCode_STColRow<idx>(data, &vifX.MaskRow._u32[vifX.tag.addr]);
483 }
484 pass3 { VifCodeLog("STRow"); }
485 return 0;
486 }
487
488 vifOp(vifCode_STCycl) {
489 vifStruct& vifX = GetVifX;
490 pass1 {
491 vifXRegs.cycle.cl = (u8)(vifXRegs.code);
492 vifXRegs.cycle.wl = (u8)(vifXRegs.code >> 8);
493 vifX.cmd = 0;
494 }
495 pass3 { VifCodeLog("STCycl"); }
496 return 0;
497 }
498
499 vifOp(vifCode_STMask) {
500 vifStruct& vifX = GetVifX;
501 pass1 { vifX.tag.size = 1; }
502 pass2 { vifXRegs.mask = data[0]; vifX.tag.size = 0; vifX.cmd = 0; }
503 pass3 { VifCodeLog("STMask"); }
504 return 1;
505 }
506
507 vifOp(vifCode_STMod) {
508 pass1 { vifXRegs.mode = vifXRegs.code & 0x3; GetVifX.cmd = 0; }
509 pass3 { VifCodeLog("STMod"); }
510 return 0;
511 }
512
513 template< uint idx >
514 static uint calc_addr(bool flg)
515 {
516 VIFregisters& vifRegs = vifXRegs;
517
518 uint retval = vifRegs.code;
519 if (idx && flg) retval += vifRegs.tops;
520 return retval & (idx ? 0x3ff : 0xff);
521 }
522
523 vifOp(vifCode_Unpack) {
524 pass1 {
525 vifUnpackSetup<idx>(data);
526 return 1;
527 }
528 pass2 { return nVifUnpack<idx>((u8*)data); }
529 pass3 {
530 vifStruct& vifX = GetVifX;
531 VIFregisters& vifRegs = vifXRegs;
532 uint vl = vifX.cmd & 0x03;
533 uint vn = (vifX.cmd >> 2) & 0x3;
534 bool flg = (vifRegs.code >> 15) & 1;
535 static const char* const vntbl[] = { "S", "V2", "V3", "V4" };
536 static const uint vltbl[] = { 32, 16, 8, 5 };
537
538 VifCodeLog("Unpack %s_%u (%s) @ 0x%04X%s (cl=%u wl=%u num=0x%02X)",
539 vntbl[vn], vltbl[vl], (vifX.cmd & 0x10) ? "masked" : "unmasked",
540 calc_addr<idx>(flg), flg ? "(FLG)" : "",
541 vifRegs.cycle.cl, vifRegs.cycle.wl, (vifXRegs.code >> 16) & 0xff
542 );
543 }
544 return 0;
545 }
546
547 //------------------------------------------------------------------
548 // Vif0/Vif1 Code Tables
549 //------------------------------------------------------------------
550
551 __aligned16 FnType_VifCmdHandler* const vifCmdHandler[2][128] =
552 {
553 {
554 vifCode_Nop<0> , vifCode_STCycl<0> , vifCode_Offset<0> , vifCode_Base<0> , vifCode_ITop<0> , vifCode_STMod<0> , vifCode_MskPath3<0>, vifCode_Mark<0>, /*0x00*/
555 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x08*/
556 vifCode_FlushE<0> , vifCode_Flush<0> , vifCode_Null<0> , vifCode_FlushA<0> , vifCode_MSCAL<0> , vifCode_MSCALF<0> , vifCode_Null<0> , vifCode_MSCNT<0>, /*0x10*/
557 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x18*/
558 vifCode_STMask<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x20*/
559 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x28*/
560 vifCode_STRow<0> , vifCode_STCol<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x30*/
561 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x38*/
562 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x40*/
563 vifCode_Null<0> , vifCode_Null<0> , vifCode_MPG<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x48*/
564 vifCode_Direct<0> , vifCode_DirectHL<0>, vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x50*/
565 vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0> , vifCode_Null<0>, /*0x58*/
566 vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Null<0>, /*0x60*/
567 vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0>, /*0x68*/
568 vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Null<0>, /*0x70*/
569 vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Null<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> , vifCode_Unpack<0> /*0x78*/
570 },
571 {
572 vifCode_Nop<1> , vifCode_STCycl<1> , vifCode_Offset<1> , vifCode_Base<1> , vifCode_ITop<1> , vifCode_STMod<1> , vifCode_MskPath3<1>, vifCode_Mark<1>, /*0x00*/
573 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x08*/
574 vifCode_FlushE<1> , vifCode_Flush<1> , vifCode_Null<1> , vifCode_FlushA<1> , vifCode_MSCAL<1> , vifCode_MSCALF<1> , vifCode_Null<1> , vifCode_MSCNT<1>, /*0x10*/
575 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x18*/
576 vifCode_STMask<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x20*/
577 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x28*/
578 vifCode_STRow<1> , vifCode_STCol<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x30*/
579 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x38*/
580 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x40*/
581 vifCode_Null<1> , vifCode_Null<1> , vifCode_MPG<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x48*/
582 vifCode_Direct<1> , vifCode_DirectHL<1>, vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x50*/
583 vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1> , vifCode_Null<1>, /*0x58*/
584 vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Null<1>, /*0x60*/
585 vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1>, /*0x68*/
586 vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Null<1>, /*0x70*/
587 vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Null<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> , vifCode_Unpack<1> /*0x78*/
588 }
589 };
Slide 1
An array is a collection of data items of a single type that share one common name. An array is a structured data type (an ordered collection of data). The numbers of the array elements are also called indices, and the array elements themselves are called indexed variables (variables with indices).
Slide 2
The value of element a[3]=2, a[7]=4. Data in arrays is kept only until the program finishes. For long-term storage the program must write the data to a file.
Slide 3
Characteristics of an array: type: the type shared by all elements of the array; dimensionality (rank): the number of indices of the array; index range(s): determines the number of elements in the array.
Slide 4
Ways to declare an array. For example:
Const n=100;
var a: array[1..n] of real; { 100 elements: real numbers }
b: array[0..50] of char; { 51 elements: characters }
c: array[-3..4] of boolean; { 8 elements: boolean values }
x,y: array[1..20] of integer; { two arrays x and y, each holding 20 elements: integers }
The general form is: var ArrayName: array [LowerBound..UpperBound] of ElementType;
Slide 5
Ways to declare an array. An array can be declared as a typed constant. For example:
const x: array[1..5] of integer=(1,3,5,7,9);
In this example memory is not merely allocated for the array; its cells are also filled with the given values.
Slide 6
Ways to declare an array. The type can be declared beforehand in the type-declaration section. For example:
Type z = array[1..20] of integer;
Var x, y: z;
The general form is: Type TypeName = array [LowerBound..UpperBound] of ElementType; Var ArrayName: TypeName;
Slide 7
What do you think: when the program runs, must every cell be filled with data? Why? If a cell is not filled, what value does it hold? Can the actual number of elements in an array be smaller than declared? Why? Can it be larger? Why?
Slide 8
Ways to fill an array.
1. Reading data from the keyboard: for i:=1 to n do read (a[i]);
2. Filling with the random number generator. For example, fill the array with numbers in the range from -3 to 7: randomize; for i:=1 to n do a[i]:=random(11)-3;
3. Reading element values from a file: for i:=1 to n do read (f, a[i]);
Slide 9
Ways to fill an array.
4. Assigning given values. For example, fill the array with even numbers: for i:=1 to n do a[i]:=i*2; or for i:=1 to n do begin readln (x); if x mod 2=0 then a[i]:=x end;
Printing the array elements. The elements are printed in a loop: for i:=1 to n do write (a[i],' ')
Slide 10
Operations on one-dimensional arrays. For example: Var A, B: array[1..n] of integer;
Slide 11
Operations on array elements. Computing the sum of the elements.
…
Const n=10;
Var a:array[1..n] of integer; {declare array a}
i, s: integer;
begin
randomize; s:=0;
for i:=1 to n do begin a[i]:=random(11)-3; {fill array a with random numbers} write (a[i],' '); {print the filled array} end;
for i:=1 to n do s:=s+a[i]; {compute the sum of the elements of array a}
writeln ('sum of the array elements =', s) {print the answer}
end.
Slide 12
Operations on array elements. For example: find the product of the elements with odd indices.
…
Const n=10;
Var a:array[1..n] of integer; {declare array a}
i, p: integer;
begin
randomize; p:=1;
for i:=1 to n do begin a[i]:=random(11)-3; {fill array a with random numbers} write (a[i],' '); {print the filled array} end;
for i:=1 to n do if i mod 2<>0 then p:=p*a[i]; {compute the product of the odd-indexed elements of array a}
writeln ('product of the array elements =', p) {print the answer}
end.
Slide 13
Operations on array elements. For example, find the index of the first element of array A equal to zero. If there are no such elements, print an appropriate message.
Const n=10;
Var a:array[1..n] of integer;
i: integer;
begin
randomize;
for i:=1 to n do begin a[i]:=random(11)-3; {fill array a with random numbers} write (a[i],' '); {print the filled array} end;
i:=0;
Repeat i:=i+1; until (a[i]=0) or (i=n); {leave the loop once the element is found or the array is exhausted}
if a[i]=0 then writeln ('index of the first zero element =', i) else writeln ('there are no such elements!');
end.
Slide 14
Operations on array elements. Finding the maximum (minimum) element and its index. For example: count how many elements of a one-dimensional array are equal to the minimum.
Slide 15
Summing up the lesson. What makes arrays valuable? How is an array declared, and what is specified in the declaration? How is an individual array element accessed? Why is it preferable to use constants in an array declaration rather than writing the array sizes out explicitly?
50003. Turtle Graphics
I'm a slow walker, but I never walk backwards.
Problem description
There is an old drawing program called turtle graphics. The program is given a series of $L$ lines and it will draw them on an $X$ by $Y$ matrix. A line consists of a series of $n$ points, and the program will draw a straight line between two consecutive points. For simplicity we assume that each segment of a line is either vertical, horizontal, or $45^{\circ}$ diagonal.
The input is as follows. The first line has $L$, $X$, and $Y$. All of them are between 1 and 100. Each of the next $L$ lines starts with $n$, the number of points in that line, and the next $n$ pairs of coordinates indicate the positions of the points in that line. Note that $n$ is positive and could be very large, so you cannot store the coordinates in an array. If you are given a point that is not in the grid, or that does not form a vertical, horizontal, or $45^{\circ}$ diagonal line with the previous point, report the line number and point number and stop the program.
If the input is correct, then the output has $X$ lines, each with $Y$ numbers. Output a $1$ if that cell is drawn, $0$ otherwise. If the input is incorrect, report the line number and point number of the point that caused the problem. Both line numbers and point numbers start from $1$.
Technical Specification and constraints
• 10 pt. $L$ is $1$ and the line is a valid horizontal line, i.e., $n$ is $2$.
• 30 pt. $L$ is $1$ and the lines are only horizontal and vertical, $n$ is between $1$ and $10$, and the input is always correct.
• 40 pt. $L$ is between $1$ and $100$ and the lines could be vertical, horizontal, or $45^{\circ}$ diagonal, $n$ is between $1$ and $10$, and the input is always correct.
• 10 pt. $L$ is between $1$ and $100$ and the lines could be vertical, horizontal, or $45^{\circ}$ diagonal, $n$ is between $1$ and $10$, and input could be incorrect.
• 10 pt. $L$ is between $1$ and $100$ and the lines could be vertical, horizontal, or $45^{\circ}$ diagonal, $n$ could be very large so you CANNOT store the points in an array, and input could be incorrect.
Sample Input 1
1 5 6
2 5 4 0 4
Sample Output 1
111111
000000
000000
000000
000000
Sample Input 2
2 5 6
2 5 4 0 4
3 4 2 1 2 3 0
Sample Output 2
111111
000000
011110
001000
000100
Sample Input 3
3 5 6
2 5 4 0 4
3 4 2 1 2 3 1
2 5 4 5 0
Sample Output 3
ERROR 2 3
Sample Input 4
1 5 6
5 3 3 0 3 3 0 0 0 0 2
Sample Output 4
000000
111100
110000
101000
111100
Discussion
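A compact Java sketch of one possible solution (untested, written for this discussion). It streams the points so that only the previous one is kept, which handles the very large $n$ in the last subtask. Reading the samples, each pair appears to be (x, y) with x a column in [0, Y-1] and y a row in [0, X-1], and rows are printed from y = X-1 down to 0; treat that interpretation as an inference from the samples, not part of the official statement.

import java.util.Scanner;

public class TurtleGraphics {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int L = in.nextInt(), X = in.nextInt(), Y = in.nextInt();
        boolean[][] grid = new boolean[X][Y]; // grid[X-1-y][x], so row 0 is y = X-1
        for (int line = 1; line <= L; line++) {
            int n = in.nextInt();
            int px = 0, py = 0; // only the previous point is stored
            for (int p = 1; p <= n; p++) {
                int x = in.nextInt(), y = in.nextInt();
                boolean inGrid = x >= 0 && x < Y && y >= 0 && y < X;
                int dx = x - px, dy = y - py;
                boolean okDir = p == 1 || dx == 0 || dy == 0
                        || Math.abs(dx) == Math.abs(dy); // 45-degree diagonal
                if (!inGrid || !okDir) {
                    System.out.printf("ERROR %d %d%n", line, p);
                    return; // stop the program on the first bad point
                }
                if (p == 1) {
                    grid[X - 1 - y][x] = true;
                } else {
                    int steps = Math.max(Math.abs(dx), Math.abs(dy));
                    int sx = Integer.signum(dx), sy = Integer.signum(dy);
                    for (int s = 0; s <= steps; s++) // mark every cell of the segment
                        grid[X - 1 - (py + s * sy)][px + s * sx] = true;
                }
                px = x; py = y;
            }
        }
        StringBuilder out = new StringBuilder();
        for (int r = 0; r < X; r++) {
            for (int c = 0; c < Y; c++) out.append(grid[r][c] ? '1' : '0');
            out.append('\n');
        }
        System.out.print(out);
    }
}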
#! /bin/sh
#
# mksysback
#
# Create a BOS boot and Installation/Maintenance 8mm tape.
# AIX version 3.1.x

if [ $# != 1 ]
then
    echo " "
    echo "Usage : mksysback device"
    echo " "
    echo "E.G. : mksysback rmt0"
    echo " "
    exit
fi

PATH=/bin:/usr/bin:/etc:/usr/ucb
export PATH

# Directory with a lot of space in it (at least 4 Mb):
bigdir=/tmp

mkszfile -f
chdev -l "$1" -a "block_size=512"
lsattr -E -l "$1"
tctl -f /dev/"$1" rewind
/bin/rm -f $bigdir/tape.fs $bigdir/tape.image $bigdir/bosinst.image
bosboot -m -p /usr/bin/tape.proto -d /dev/"$1" \
    -f $bigdir/tape.fs -b $bigdir/tape.image
dd if=$bigdir/tape.image of=/dev/"$1".5 bs=512 conv=sync
/usr/lpp/bosinst/diskette/mkinstdskt $bigdir/bosinst.image
dd if=$bigdir/bosinst.image of=/dev/"$1".5 bs=512 conv=sync
dd if=/bootrec of=/dev/"$1".5 bs=512 conv=sync
mksysb /dev/"$1".1
SCJP : Serializable Interface
Develop code that serializes and/or de-serializes objects using the following APIs from java.io: DataInputStream, DataOutputStream, FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream and Serializable.
DataInputStream
A data input stream lets an application read primitive Java data types from an underlying input stream in a machine-independent way. An application uses a data output stream to write data that can later be read by a data input stream.
A DataInputStream takes an InputStream as a constructor argument.
Some methods:
public final boolean readBoolean() throws IOException
public final byte readByte() throws IOException
public final int readUnsignedByte() throws IOException
public final short readShort() throws IOException
public final int readUnsignedShort() throws IOException
public final char readChar() throws IOException
public final int readInt() throws IOException
public final long readLong() throws IOException
public final float readFloat() throws IOException
public final double readDouble() throws IOException
FileInputStream fis = new FileInputStream("test.txt");
DataInputStream dis = new DataInputStream(fis);
boolean value = dis.readBoolean();
DataOutputStream
A data output stream lets an application write primitive Java data types to an output stream in a portable way. An application can then use a data input stream to read the data back in.
A DataOutputStream takes an OutputStream as a constructor argument.
Some methods:
public final void writeBoolean(boolean v) throws IOException
public final void writeByte(int v) throws IOException
public final void writeShort(int v) throws IOException
public final void writeChar(int v) throws IOException
public final void writeInt(int v) throws IOException
public final void writeLong(long v) throws IOException
public final void writeFloat(float v) throws IOException
public final void writeDouble(double v) throws IOException
public final void writeBytes(String s) throws IOException
public final void writeChars(String s) throws IOException
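For symmetry with the DataInputStream example above, the write side looks like this; the file name is reused from that example and the values are arbitrary:
FileOutputStream fos = new FileOutputStream("test.txt");
DataOutputStream dos = new DataOutputStream(fos);
dos.writeBoolean(true); // must be read back in the same order: readBoolean(), then readInt()
dos.writeInt(42);
dos.close();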
FileInputStream
A FileInputStream obtains input bytes from a file in a file system. What files are available depends on the host environment.
FileInputStream is meant for reading streams of raw bytes such as image data. For reading streams of characters, consider using FileReader.
Some methods:
// Reads a byte of data from this input stream
public int read() throws IOException
public int read(byte[] b) throws IOException
public int read(byte[] b, int off, int len) throws IOException
FileInputStream fis = new FileInputStream("test.txt");
FileOutputStream
A file output stream is an output stream for writing data to a File or to a FileDescriptor. Whether or not a file is available or may be created depends upon the underlying platform. Some platforms, in particular, allow a file to be opened for writing by only one FileOutputStream (or other file-writing object) at a time. In such situations the constructors in this class will fail if the file involved is already open.
FileOutputStream is meant for writing streams of raw bytes such as image data. For writing streams of characters, consider using FileWriter.
Some methods:
// Writes the specified byte to this file output stream
public void write(int b) throws IOException
public void write(byte[] b) throws IOException
public void write(byte[] b, int off, int len) throws IOException
FileOutputStream fos = new FileOutputStream("testout.txt");
// append
FileOutputStream fos = new FileOutputStream("testout.txt", true);
ObjectInputStream
An ObjectInputStream deserializes primitive data and objects previously written using an ObjectOutputStream. ObjectOutputStream and ObjectInputStream can provide an application with persistent storage for graphs of objects when used with a FileOutputStream and FileInputStream respectively. ObjectInputStream is used to recover those objects previously serialized. Other uses include passing objects between hosts using a socket stream or for marshaling and unmarshaling arguments and parameters in a remote communication system.
ObjectInputStream ensures that the types of all objects in the graph created from the stream match the classes present in the Java Virtual Machine. Classes are loaded as required using the standard mechanisms.
Only objects that support the java.io.Serializable or java.io.Externalizable interface can be read from streams.
The method readObject is used to read an object from the stream. Java's safe casting should be used to get the desired type. In Java, strings and arrays are objects and are treated as objects during serialization. When read they need to be cast to the expected type.
Primitive data types can be read from the stream using the appropriate method on DataInput.
The default deserialization mechanism for objects restores the contents of each field to the value and type it had when it was written. Fields declared as transient or static are IGNORED by the deserialization process. References to other objects cause those objects to be read from the stream as necessary. Graphs of objects are restored correctly using a reference sharing mechanism. New objects are always allocated when deserializing, which prevents existing objects from being overwritten.
Reading an object is analogous to running the constructors of a new object. Memory is allocated for the object and initialized to zero (NULL). No-arg constructors are invoked for the non-serializable classes and then the fields of the serializable classes are restored from the stream starting with the serializable class closest to java.lang.Object and finishing with the object's most specific class:
public class A {
public int aaa = 111;
public A() {
System.out.println("A");
}
}
Class B extends non-serializable class A:
import java.io.Serializable;
public class B extends A implements Serializable {
public int bbb = 222;
public B() {
System.out.println("B");
}
}
The client code:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
public class Client {
public static void main(String[] args) throws Exception {
B b = new B();
b.aaa = 888;
b.bbb = 999;
System.out.println("Before serialization:");
System.out.println("aaa = " + b.aaa);
System.out.println("bbb = " + b.bbb);
ObjectOutputStream save = new ObjectOutputStream(new FileOutputStream("datafile"));
save.writeObject(b); // Save object
save.flush(); // Empty output buffer
ObjectInputStream restore = new ObjectInputStream(new FileInputStream("datafile"));
B z = (B) restore.readObject();
System.out.println("After deserialization:");
System.out.println("aaa = " + z.aaa);
System.out.println("bbb = " + z.bbb);
}
}
The client's output:
A
B
Before serialization:
aaa = 888
bbb = 999
A
After deserialization:
aaa = 111
bbb = 999
For example to read from a stream as written by the example in ObjectOutputStream:
FileInputStream fis = new FileInputStream("test.tmp");
ObjectInputStream ois = new ObjectInputStream(fis);
int i = ois.readInt();
String today = (String) ois.readObject();
Date date = (Date) ois.readObject();
ois.close();
Classes control how they are serialized by implementing either the java.io.Serializable or java.io.Externalizable interfaces.
Implementing the Serializable interface allows object serialization to save and restore the entire state of the object and it allows classes to evolve between the time the stream is written and the time it is read. It automatically traverses references between objects, saving and restoring entire graphs.
Serializable classes that require special handling during the serialization and deserialization process should implement the following methods:
private void writeObject(java.io.ObjectOutputStream stream) throws IOException;
private void readObject(java.io.ObjectInputStream stream) throws IOException,
ClassNotFoundException;
The readObject method is responsible for reading and restoring the state of the object for its particular class using data written to the stream by the corresponding writeObject method. The method does not need to concern itself with the state belonging to its superclasses or subclasses. State is restored by reading data from the ObjectInputStream for the individual fields and making assignments to the appropriate fields of the object. Reading primitive data types is supported by DataInput.
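As a concrete illustration of these hooks, here is a hypothetical Session class (the class and its fields are invented for this example). It lets the default mechanism handle the ordinary fields and then serializes a transient field by hand:
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class Session implements Serializable {
    private String user;
    private transient int loginCount; // skipped by default serialization
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject(); // write the non-transient fields first
        out.writeInt(loginCount); // then append the extra state by hand
    }
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject(); // restore the non-transient fields
        loginCount = in.readInt(); // read the extra state in the same order
    }
}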
Serialization does not read or assign values to the fields of any object that does not implement the java.io.Serializable interface. Subclasses of Objects that are not serializable can be serializable. In this case the non-serializable class must have a no-arg constructor to allow its fields to be initialized. In this case it is the responsibility of the subclass to save and restore the state of the non-serializable class. It is frequently the case that the fields of that class are accessible (public, package, or protected) or that there are get and set methods that can be used to restore the state.
Implementing the Externalizable interface allows the object to assume complete control over the contents and format of the object's serialized form. The methods of the Externalizable interface, writeExternal and readExternal, are called to save and restore the objects state. When implemented by a class they can write and read their own state using all of the methods of ObjectOutput and ObjectInput. It is the responsibility of the objects to handle any versioning that occurs.
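Since the text above describes Externalizable without showing it, here is a minimal sketch; the Point class is invented for illustration. Note the public no-arg constructor: deserialization creates the instance with it and then calls readExternal:
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
public class Point implements Externalizable {
    private int x;
    private int y;
    public Point() { } // required: used to create the instance before readExternal
    public Point(int x, int y) { this.x = x; this.y = y; }
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x); // the class alone decides the wire format
        out.writeInt(y);
    }
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        x = in.readInt(); // must read back in exactly the order written
        y = in.readInt();
    }
}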
Enum constants are deserialized differently than ordinary serializable or externalizable objects. The serialized form of an enum constant consists solely of its name; field values of the constant are not transmitted. To deserialize an enum constant, ObjectInputStream reads the constant name from the stream; the deserialized constant is then obtained by calling the static method Enum.valueOf(Class, String) with the enum constant's base type and the received constant name as arguments. Like other serializable or externalizable objects, enum constants can function as the targets of back references appearing subsequently in the serialization stream. The process by which enum constants are deserialized CANNOT be customized: any class-specific readObject, readObjectNoData, and readResolve methods defined by enum types are ignored during deserialization.
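A small demonstration of that behavior (the enum and file name are made up for the example). Because only the constant's name travels through the stream, the deserialized value is the very same singleton constant:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
enum Color { RED, GREEN, BLUE }
public class EnumDemo {
    public static void main(String[] args) throws Exception {
        ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream("enum.ser"));
        oos.writeObject(Color.GREEN); // only the name "GREEN" is written
        oos.close();
        ObjectInputStream ois = new ObjectInputStream(new FileInputStream("enum.ser"));
        Color c = (Color) ois.readObject(); // resolved via Enum.valueOf(Color.class, "GREEN")
        ois.close();
        System.out.println(c == Color.GREEN); // true: same constant, not a copy
    }
}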
ObjectOutputStream
An ObjectOutputStream writes primitive data types and graphs of Java objects to an OutputStream. The objects can be read (reconstituted) using an ObjectInputStream. Persistent storage of objects can be accomplished by using a file for the stream. If the stream is a network socket stream, the objects can be reconstituted on another host or in another process.
Only objects that support the java.io.Serializable interface can be written to streams. The class of each serializable object is encoded including the class name and signature of the class, the values of the object's fields and arrays, and the closure of any other objects referenced from the initial objects.
The method writeObject is used to write an object to the stream. Any object, including Strings and arrays, is written with writeObject. Multiple objects or primitives can be written to the stream. The objects MUST be read back from the corresponding ObjectInputStream with the SAME types and in the SAME order as they were written.
Primitive data types can also be written to the stream using the appropriate methods from DataOutput. Strings can also be written using the writeUTF method.
The default serialization mechanism for an object writes the class of the object, the class signature, and the values of all non-transient and non-static fields. References to other objects (except in transient or static fields) cause those objects to be written also. Multiple references to a single object are encoded using a reference sharing mechanism so that graphs of objects can be restored to the same shape as when the original was written.
For example to write an object that can be read by the example in ObjectInputStream:
FileOutputStream fos = new FileOutputStream("test.tmp");
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeInt(12345);
oos.writeObject("Today");
oos.writeObject(new Date());
oos.close();
Classes that require special handling during the serialization and deserialization process must implement special methods with these exact signatures:
private void readObject(java.io.ObjectInputStream stream)
throws IOException, ClassNotFoundException;
private void writeObject(java.io.ObjectOutputStream stream)
throws IOException
The writeObject method is responsible for writing the state of the object for its particular class so that the corresponding readObject method can restore it. The method does not need to concern itself with the state belonging to the object's superclasses or subclasses. State is saved by writing the individual fields to the ObjectOutputStream using the writeObject method or by using the methods for primitive data types supported by DataOutput.
Serialization does not write out the fields of any object that does not implement the java.io.Serializable interface. Subclasses of Objects that are not serializable can be serializable. In this case the non-serializable class must have a no-arg constructor to allow its fields to be initialized. In this case it is the responsibility of the subclass to save and restore the state of the non-serializable class. It is frequently the case that the fields of that class are accessible (public, package, or protected) or that there are get and set methods that can be used to restore the state.
Serializable
Serializability of a class is enabled by the class implementing the java.io.Serializable interface. Classes that do not implement this interface will not have any of their state serialized or deserialized. All subtypes of a serializable class are themselves serializable. The serialization interface has NO methods or fields and serves only to identify the semantics of being serializable.
To allow subtypes of non-serializable classes to be serialized, the subtype may assume responsibility for saving and restoring the state of the supertype's public, protected, and (if accessible) package fields. The subtype may assume this responsibility only if the class it extends has an accessible no-arg constructor to initialize the class's state. It is an error to declare a class Serializable if this is not the case. The error will be detected at runtime.
During deserialization, the fields of non-serializable classes will be initialized using the public or protected no-arg constructor of the class. A no-arg constructor must be accessible to the subclass that is serializable. The fields of serializable subclasses will be restored from the stream.
When traversing a graph, an object may be encountered that does not support the Serializable interface. In this case the NotSerializableException will be thrown and will identify the class of the non-serializable object.
Classes that require special handling during the serialization and deserialization process must implement special methods with these exact signatures:
private void writeObject(java.io.ObjectOutputStream out) throws IOException
private void readObject(java.io.ObjectInputStream in) throws IOException,
ClassNotFoundException;
Serialization
Imagine a graph of objects that leads from the object to be saved. The entire graph must be saved and restored:
Obj 1 --> Obj 2 --> Obj 3
\--> Obj 4 --> Obj 5
\ --> Obj 6
We need a byte-coded representation of objects that can be stored in a file external to Java programs, so that the file can be read later and the objects can be reconstructed. Serialization provides a mechanism for saving and restoring objects.
Serializing an object means to code it as an ordered series of bytes in such a way that it can be rebuilt (really a copy) from that byte stream. Deserialization generates a new live object graph out of the byte stream.
The serialization mechanism needs to store enough information so that the original object can be recreated including all objects to which it refers (the object graph).
Java has classes (in the java.io package) that allow the creation of streams for object serialization and methods that write to and read from these streams.
Only an object of a class that implements the EMPTY interface java.io.Serializable or a subclass of such a class can be serialized.
What is saved:
• The class of the object.
• The class signature of the object.
• All instance variables NOT declared transient.
• Objects referred to by non-transient instance variables.
If a duplicate object occurs when traversing the graph of references, only ONE copy is saved, but references are coded so that the duplicate links can be restored.
Saving an object (an array of Fruit)
1. Open a file and create an ObjectOutputStream object.
ObjectOutputStream save = new ObjectOutputStream(new FileOutputStream("datafile"));
2. Make Fruit serializable:
class Fruit implements Serializable
3. Write an object to the stream using writeObject().
Fruit [] fa = new Fruit[3];
// Create a set of 3 Fruits and place them in the array.
...
save.writeObject(fa); // Save object (the array)
save.flush(); // Empty output buffer
Restoring the object
1. Open a file and create an ObjectInputStream object.
ObjectInputStream restore = new ObjectInputStream(new FileInputStream("datafile"));
2. Read the object from the stream using readObject() and then cast it to its appropriate type.
Fruit[] newFa;
// Restore the object:
newFa = (Fruit[])restore.readObject();
or
Object ob = restore.readObject();
When an object is retrieved from a stream, it is validated to ensure that it can be rebuilt as the intended object. Validation may fail if the class definition of the object has changed.
A class whose objects are to be saved must implement the interface Serializable, with no methods, or the Externalizable interface, with two methods. Otherwise, a runtime exception will be thrown:
Exception in thread "main" java.io.NotSerializableException: Bag
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
at java.io.ObjectOutputStream.writeObject(Unknown Source)
at Client.main(Client.java:17)
The first superclass of the class (maybe Object) that is not serializable must have a no-parameter constructor.
The class must be visible at the point of serialization.
The implements Serializable clause acts as a tag indicating the possibility of serializing the objects of the class.
All primitive types are serializable.
Transient fields (with transient modifier) are NOT serialized, (i.e., not saved or restored).
A class that implements Serializable must mark transient fields of classes that do not support serialization (e.g., a file stream).
Because the deserialization process creates new instances of the objects, comparisons based on the "==" operator may no longer be valid.
Main methods for saving objects:
public ObjectOutputStream(OutputStream out) throws IOException
public final void writeObject(Object obj) throws IOException
public void flush() throws IOException
public void close() throws IOException
Main methods for restoring objects:
public ObjectInputStream(InputStream in) throws IOException, SecurityException
public final Object readObject() throws IOException, ClassNotFoundException
public void close() throws IOException
ObjectOutputStream and ObjectInputStream also implement the methods for writing and reading primitive data and Strings from the interfaces DataOutput and DataInput, for example:
writeBoolean(boolean b) <==> boolean readBoolean()
writeChar(char c) <==> char readChar()
writeInt(int i) <==> int readInt()
writeDouble(double d) <==> double readDouble()
No methods or class variables are saved when an object is serialized.
A class knows which methods and static data are defined in it.
Serialization example (client class):
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
public class Client {
public static void main(String ... aaa) throws IOException, ClassNotFoundException {
Bag b = new Bag();
Bag a = null;
ObjectOutputStream save = new ObjectOutputStream(new FileOutputStream("datafile"));
System.out.println("Before serialization:");
System.out.println(b);
save.writeObject(b); // Save object
save.flush(); // Empty output buffer
ObjectInputStream restore = new ObjectInputStream(new FileInputStream("datafile"));
a = (Bag) restore.readObject();
System.out.println("After deserialization:");
System.out.println(a);
}
}
If the Bag class does not implement Serializable:
import java.util.Arrays;
public class Bag {
Fruit[] fruits = new Fruit[3];
public Bag() {
fruits[0] = new Fruit("Orange");
fruits[1] = new Fruit("Apple");
fruits[2] = new Fruit("Pear");
}
public String toString() {
return "Bag of fruits :" + Arrays.toString(fruits);
}
}
We get the following runtime exception:
Before serialization:
Bag of fruits :[Fruit : Orange, Fruit : Apple, Fruit : Pear]
Exception in thread "main" java.io.NotSerializableException: Bag
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
at java.io.ObjectOutputStream.writeObject(Unknown Source)
at Client.main(Client.java:17)
Now make the Bag class implement Serializable, while the Fruit class is still not Serializable:
public class Fruit {
String name = "";
public Fruit(String name) {
this.name = name;
}
public String toString() {
return "Fruit : " + name;
}
}
We get the same exception, but now in class Fruit:
Before serialization:
Bag of fruits :[Fruit : Orange, Fruit : Apple, Fruit : Pear]
Exception in thread "main" java.io.NotSerializableException: Fruit
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
at java.io.ObjectOutputStream.writeArray(Unknown Source)
at java.io.ObjectOutputStream.writeObject0(Unknown Source)
...
We can skip serializing the Fruit objects by making the array of fruits transient:
import java.io.Serializable;
import java.util.Arrays;
public class Bag implements Serializable {
transient Fruit[] fruits = new Fruit[3]; // do not save to disk
public Bag() {
fruits[0] = new Fruit("Orange");
fruits[1] = new Fruit("Apple");
fruits[2] = new Fruit("Pear");
}
public String toString() {
return "Bag of fruits :" + Arrays.toString(fruits);
}
}
Now the program runs without exceptions, but the fruits are not restored into the bag after deserialization:
Before serialization:
Bag of fruits :[Fruit : Orange, Fruit : Apple, Fruit : Pear]
After deserialization:
Bag of fruits :null
Only when both Bag and Fruit are serializable do we get the expected output:
import java.io.Serializable;
import java.util.Arrays;
public class Bag implements Serializable {
Fruit[] fruits = new Fruit[3];
public Bag() {
fruits[0] = new Fruit("Orange");
fruits[1] = new Fruit("Apple");
fruits[2] = new Fruit("Pear");
}
public String toString() {
return "Bag of fruits :" + Arrays.toString(fruits);
}
}
import java.io.Serializable;
public class Fruit implements Serializable {
String name = "";
public Fruit(String name) {
this.name = name;
}
public String toString() {
return "Fruit : " + name;
}
}
The client's output now will be:
Before serialization:
Bag of fruits :[Fruit : Orange, Fruit : Apple, Fruit : Pear]
After deserialization:
Bag of fruits :[Fruit : Orange, Fruit : Apple, Fruit : Pear]
NOTE, static and transient fields are NOT serialized:
import java.io.Serializable;
public class MyClass implements Serializable {
transient int one;
private int two;
static int three;
public MyClass() {
one = 1;
two = 2;
three = 3;
}
public String toString() {
return "one : " + one + ", two: " + two + ", three: " + three;
}
}
This code saves a class instance:
...
MyClass b = new MyClass();
ObjectOutputStream save = new ObjectOutputStream(new FileOutputStream("datafile"));
save.writeObject(b); // Save object
save.flush(); // Empty output buffer
...
Deserialize:
...
MyClass a = null;
ObjectInputStream restore = new ObjectInputStream(new FileInputStream("datafile"));
a = (MyClass) restore.readObject();
System.out.println("After deserialization:");
System.out.println(a);
...
The output:
After deserialization:
one : 0, two: 2, three: 0
As you can see, the transient and static fields came back with default values. Transient fields are simply never written. The static field reads 0 here because deserialization ran in a fresh JVM, where the class was loaded but no constructor had yet assigned it; if you save and restore within the same run, the static field would still hold 3.
Define a bipartite graph. Also give an example of it, and state where this type of graph is used.
A bipartite graph is one whose vertices can be partitioned into two mutually disjoint sets. Edges exist only between two vertices that are not from the same set.
Famous example:
Assigning n jobs to m persons can be modelled with a bipartite graph: an edge exists if job i can be assigned to person p.
This obviously requires a bipartite model, since an edge here cannot exist between one job and another job, or between one person and another person.
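A standard way to verify that a graph really is bipartite is 2-coloring with a breadth-first search. The sketch below is a generic illustration in Java (the class and method names are invented, not taken from any library):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class BipartiteCheck {
    // Returns true if the vertices can be split into two disjoint sets
    // (for example jobs and persons) with edges only across the sets.
    static boolean isBipartite(List<List<Integer>> adj) {
        int n = adj.size();
        int[] color = new int[n]; // 0 = unvisited; 1 and 2 are the two sets
        Deque<Integer> queue = new ArrayDeque<>();
        for (int start = 0; start < n; start++) {
            if (color[start] != 0) continue; // handle disconnected components
            color[start] = 1;
            queue.add(start);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int v : adj.get(u)) {
                    if (color[v] == 0) {
                        color[v] = 3 - color[u]; // put the neighbour in the opposite set
                        queue.add(v);
                    } else if (color[v] == color[u]) {
                        return false; // an edge inside one set: not bipartite
                    }
                }
            }
        }
        return true;
    }
}

For the jobs example, vertices 0..n-1 could be the jobs and vertices n..n+m-1 the persons; any valid assignment graph passes this check by construction.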
I am working with Oracle Service Bus and I want to create a script that will change an attribute in a specific MBean.
I have located the MBean:
com.bea:Name=OperationsConfig,
Location=AdminServer,
Type=com.bea.wli.sb.management.configuration.operations.OperationsConfigMBean
and the attribute I want to change is DomainSLAAlertingEnabled.
Can anybody help me with how I can change an attribute in this MBean using WLST (the WebLogic Scripting Tool)?
How do I navigate to this MBean, and then how do I change it?
DomainSLAAlertingEnabled is a boolean.
Here is a nice tutorial showing how to use WLST to configure various WebLogic MBeans.
Essentially, you need to:
1. Connect to the admin server: e.g. __connectToAdmin(properties)
2. Retrieve the MBean: e.g.
SOAInfraConfigobj = ObjectName(soaconfigmbean+':Location='+locationinfo+',
name='+appname+', type=SoaInfraConfig,Application='+appname)
3. Set the desired attribute:
mbs.setAttribute(SOAInfraConfigobj, Attribute(auditlevel, auditlevelvalue))
Of course, you need to modify the values to your specific MBean and specific attributes; a sketch adapted to the MBean from the question follows.
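As a minimal sketch adapted to the OperationsConfigMBean from the question: the URL and credentials below are placeholders, and it assumes the standard WLST environment, where the mbs variable holds the current MBeanServer connection after connect().

# WLST scripts are Jython (Python syntax); run with: java weblogic.WLST set_sla_alerting.py
from javax.management import ObjectName, Attribute
from java.lang import Boolean

# Placeholder admin URL and credentials: replace with your own.
connect('weblogic', 'welcome1', 't3://localhost:7001')

# Switch to the custom MBean tree, where the com.bea:* OSB MBeans live.
domainCustom()

# The MBean located in the question.
on = ObjectName('com.bea:Name=OperationsConfig,Location=AdminServer,'
                'Type=com.bea.wli.sb.management.configuration.operations.OperationsConfigMBean')

# DomainSLAAlertingEnabled is a boolean attribute.
mbs.setAttribute(on, Attribute('DomainSLAAlertingEnabled', Boolean.TRUE))

disconnect()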
Denis Bobrovnikov - 1 year ago
SQL - How To Order Using Count From Another Table
1. Bloggers
blogger_id: 1, 2, 3

2. Posts
post_from_blogger_id: 1, 1, 1, 2, 2, 3
As you can see, blogger №1 posted more than the others and blogger №3 the least. The question is: how do I build a query that selects all bloggers and sorts them by the number of their posts?
Answer
SELECT Bloggers.*, COUNT(Posts.post_from_blogger_id) AS post_count
FROM Bloggers LEFT JOIN Posts
ON Bloggers.blogger_id = Posts.post_from_blogger_id
GROUP BY Bloggers.blogger_id
ORDER BY post_count DESC
(Note: MySQL has special syntax that lets you GROUP BY without aggregating all selected values; it is intended for exactly this situation.)
Data Normalization with SIMD Vectorization
Data sets often represent collected values that reflect real-world situations. For instance, census data might contain the ages of all residents within a certain township. Another example is when schools aggregate grade averages among classrooms. Often, though, data collections contain components whose magnitudes are so much larger that they tend to overwhelm all other components. When this is the case, analysis of such a skewed data set may not provide usable results or conclusions. For this reason, many data sets are normalized before they are analyzed.
Normalized data can provide more accurate conclusions and correlations once analyzed. This article discusses how to normalize data using both the Intel Math Kernel Library and the Intel SIMD pragma, both of which are part of Intel's Parallel Studio.
Let's start by looking at data that defies accurate analysis until it is normalized. Suppose we have a data set containing the number of people who enjoy jelly beans of different colors. Maybe the data was gathered via an online questionnaire. As you can see in Figure 1, almost everyone likes red the best. But when analyzing the data, the counts for green, blue, and purple are so much smaller in comparison that their statistical significance is low. Figure 2 shows the data once it has been normalized.
Figure 1: The magnitude of the red value is so much greater than green, blue, and purple that the statistical significance of those will be low.
Figure 2: The normalized data gives the lower three values more statistical validity.
Looking at the initial data shows us that red is clearly dominant, but its dominance prevents a true statistical treatment of green, blue, and purple. The solution, allowing an analysis with statistical integrity, is to normalize the data before it is analyzed. We will now look at two ways to accomplish this with Intel's Parallel Studio.
Intel Math Kernel Library
Intel’s Math Kernel Library provides hundreds of mathematical functions to help in calculations, analysis, and statistics. What we will use for our data normalization comes from the Basic Linear Algebra Subprograms (BLAS) section. BLAS includes vector operations, matrix-vector operations, and matrix-matrix operations. All of the BLAS functions are optimized to take advantage of Intel single instruction, multiple data (SIMD) technology. What we will use are two vector operations that act upon vectors of the double type. There are two steps to the process of data normalization for our example: obtain a Euclidean norm from the vector, and then adjust the vector so that its data is normalized.
The Euclidean norm of a vector is the square root of the sum of the squares of its elements. Geometrically, it can be thought of as the vector's length, the distance from its endpoint to the origin.
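In symbols, for a vector x with n components, the norm is:

\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}

For instance, the Euclidean norm of the following double vector can be calculated as shown: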
double dInputData[] = { 5, 6, 7, 8, 9, 10 };
double dEuclideanNorm = sqrt( 5 * 5 + 6 * 6 + 7 * 7 + 8 * 8 +
9 * 9 + 10 * 10);
We will discuss doing this using iterative C++ code in the next section. For now, though, let’s look at how we can get the Euclidean norm without writing any code except a single function call. The following shows how to use the cblas_dnrm2() function to obtain the norm.
double dInputData[] = { 5, 6, 7, 8, 9, 10 };
double dEuclideanNorm = cblas_dnrm2( 6, dInputData, 1 );
Now we need to use the derived norm to normalize the vector data. This requires another simple function call, this time to the cblas_dscal() function, which multiplies every element of the vector by a scalar; to normalize, we therefore pass the reciprocal of the norm. After this call, the data will be normalized as shown.
cblas_dscal( 6, 1.0 / dEuclideanNorm, dInputData, 1 );
// dInputData normalized to:
// { 0.26537244621713763, 0.31844693546056513,
// 0.37152142470399269, 0.42459591394742019,
// 0.47767040319084769, 0.53074489243427525 }
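As a quick sanity check of those numbers outside of C++ and MKL, the same normalization can be reproduced in a few lines of NumPy; this is only a verification aid, not part of the Parallel Studio workflow:

import numpy as np

x = np.array([5, 6, 7, 8, 9, 10], dtype=np.float64)
norm = np.sqrt(np.sum(x * x))  # Euclidean norm; equivalent to np.linalg.norm(x)
print(x / norm)                # matches the normalized values shown above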
Using the SIMD Pragma
There are times when you need more control over how the normalization is vectorized. Such cases occur when you have knowledge about the data set that is important to use when normalizing it. In these cases, you can use the SIMD pragma to optimize vector operations. The following uses the SIMD pragma to let Parallel Studio know that you want to optimize the vector operation using SIMD technology. Note that the reduction directive causes the compiler to give each simultaneous loop iteration a separate copy of dEuclideanNorm, all of which are combined at the end of the loop to avoid race conditions.
double dEuclideanNorm = 0.0;
#pragma simd reduction (+:dEuclideanNorm)
for( int i=0; i<6; i++ )
{
dEuclideanNorm += ( dInputData[i] * dInputData[i] );
}
dEuclideanNorm = sqrt( dEuclideanNorm );
Now that we have the norm, we need to loop through and normalize the data as follows.
#pragma simd
for( int i=0; i<6; i++ )
{
dInputData[i] /= dEuclideanNorm;
}
Conclusion
As you can see, Intel Parallel Studio provides easy, fast, and efficient ways to normalize data. We will take a look at more of the Intel Math Kernel Library in the future since it offers an extensive arsenal of tools to bring your programs to a new level.
Posted on April 7, 2015 by Rick Leinecker, Slashdot Media Contributing Editor
Liam McLennan hackingon.net
This post is a message in a bottle. I cast it into the sea in the hope that it will one day return to me, stuffed to the cork with enlightenment. Yesterday I tweeted,
what is the name of the pattern where you replace a multi-way conditional with an associative array?
I said ‘pattern’ but I meant ‘refactoring’. Anyway, no one replied so I will describe the refactoring here.
Programmers tend to think imperatively, which leads to code such as:
public int GetPopulation(string country)
{
if (country == "Australia")
{
return 22360793;
} else if (country == "China")
{
return 1324655000;
} else if (country == "Switzerland")
{
return 7782900;
}
else
{
throw new Exception("What ain't no country I ever heard of. They speak English in what?");
}
}
which is horrid. We can write a cleaner version, replacing the multi-way conditional with an associative array, treating the conditional as data:
public int GetPopulation(string country)
{
if (!Populations.ContainsKey(country))
throw new Exception("The population of " + country + " could not be found.");
return Populations[country];
}
private Dictionary<string, int> Populations
{
get
{
return new Dictionary<string, int>
{
{"Australia", 22360793},
{"China", 1324655000},
{"Switzerland", 7782900}
};
}
}
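For comparison, here is the same refactoring sketched in Python, where a plain dict plays the role of the associative array (population figures copied from the C# example):

POPULATIONS = {
    "Australia": 22360793,
    "China": 1324655000,
    "Switzerland": 7782900,
}

def get_population(country):
    # The multi-way conditional collapses into a single dictionary lookup.
    if country not in POPULATIONS:
        raise KeyError("The population of " + country + " could not be found.")
    return POPULATIONS[country]

Because the mapping is data rather than control flow, it can be loaded from a file or database without touching the function.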
Does this refactoring already have a name? Otherwise, I propose
Replace multi-way conditional with associative array
Posted on Thursday, May 27, 2010 3:18 AM
How can I count the occurrences of each element in a vector?
Hello!
If I have a column vector, let's imagine T=[105;105;105;106;106;106;107;107;107;107], how can I count the occurrences of each distinct element in T? In this case 105 appears 3 times, 106 appears 3 times, and 107 appears 4 times.
Thanks!
Accepted Answer
CS Researcher on 7 May 2016
Edited: CS Researcher on 7 May 2016
Try this:
a = unique(T);
b = arrayfun(@(x)(x-T),a,'UniformOutput',false);
c = cell2mat(cellfun(@(x)(numel(find(x==0))),b,'UniformOutput',false));
Hope this helps!
3 Comments
Stephen23 on 7 May 2016
Edited: Stephen23 on 7 May 2016
Ahmet Cecen is correct: this is a very bizarre, indirect, and totally inefficient solution. Ahmet's solution is much better, neater, and faster (for 1000 iterations):
Elapsed time is 1.607303 seconds. % this answer
Elapsed time is 0.489061 seconds. % Ahmet's answer
Let's take a look at this code in detail. The first line uses unique, which is a good start (although it ignores the other, much more useful outputs):
a = unique(T);
Then the weirdness starts:
b = arrayfun(@(x)(x-T),a,'UniformOutput',false);
giving a relatively large cell array of numeric matrices: this is a total waste of memory, as b is going to have an effective size of numel(T)*numel(a). Ouch, this could get very large! Use whos to check the size or variables in memory: for this answer:
>> whos
Name Size Bytes Class
T 10x1 80 double
a 3x1 24 double
b 3x1 420 cell <- ouch!
c 3x1 24 double
While it might not cause a problem for small input vectors, this will be a total waste of memory for larger inputs, as it will expand quite quickly.
At this point the numeric matrices have value zero replacing the values of interest.
This is then followed by a slow cellfun call on this large intermediate variable b:
c = cell2mat(cellfun(@(x)(numel(find(x==0))),b,'UniformOutput',false));
which checks where the zeros are in the numeric matrices inside b. Even here things are indirect:
numel(find(x==0))
should really be
nnz(x==0)
which would be much faster and simpler. But then, as Ahmet correctly pointed out, the concept is very indirect anyway: why not simply sum the indices in the first arrayfun call? Why bother converting the points of interest to zero, storing them in a huge matrix, and then counting those zeros by using find?
Here is perhaps what the author really intended to write:
z = arrayfun(@(x)nnz(x==T),unique(T));
which takes half the time of the author's answer (although it is still not as fast as Ahmet's), and simply counts how many values of T match each unique value using one arrayfun call.
Please read Ahmet's much better answer!
More Answers (1)
Ahmet Cecen on 7 May 2016
Edited: Ahmet Cecen on 7 May 2016
[C,ia,ic] = unique(T);
c = hist(ic,1:max(ic));
CountArray = [T(ia) c'];
Practical 3
This practical is worth 10% of your topic grade and is divided into 5 checkpoints (worth 2% each).
The purpose of the checkpoints is to provide you with the skills necessary to complete the practical component of the web development assignment, which constitutes a significant portion of your marks for this topic. Please seek assistance from your tutors on any aspect of the tasks that you do not understand.
Whilst you are encouraged to discuss the concepts covered in this topic with your fellow students, please bear in mind that all work you produce and submit must be your own and that submissions will be subject to similarity checks. As a student, you should ensure you are familiar with the University’s Academic Integrity Policy and the penalties that can result from academic dishonesty and plagiarism. To that end, you may be asked (at any time) to explain your checkpoint solutions and the process you went through to develop them.
Submission and assessment
To submit your checkpoints for marking, upload a zip file of your practical directory (including all subdirectories) to the Practical 3 submission box on FLO by 11pm Friday in week 10. Ensure your zip file contains all of the source PHP, CSS, and JavaScript files for each of the checkpoint tasks. Your tutor will download your submission and mark it according to the specifications outlined in this handout. Your mark will then be uploaded to FLO.
Before you begin…
In order to develop PHP webpages, you will first need to download and install the AMPPs package. AMPPs provides a pre-configured web development stack that incorporates the Apache web server, MySQL DBMS, and PHP interpreter as well as the phpMyAdmin management interface for MySQL. Together, these components support back-end development.
Visit the resources section on FLO and click on the AMPPs package link for your platform. Once downloaded, install the package as you would any other application.
The Module 5 workshop videos cover an introduction to the AMPPs package.
Once AMPPs is installed, start Apache and MySQL and navigate to http://localhost/phpmyadmin to verify the installation.
Flinders University / College of Science and Engineering
Checkpoint 1
For this practical, you will build a simple task list ‘web app’ using PHP.
1. Similar to the previous practical, you should first establish a directory (folder) structure for the files you will be working on in the following checkpoints. Begin by creating a directory for Practical 3 somewhere on your machine.
2. Download p3-checkpoint1-starter.zip from FLO and extract the contents into your Practical 3 directory. You should now have a checkpoint-1 directory inside your Practical 3 directory containing the extracted starter files.
3. Open the checkpoint-1 directory in VS Code. Open the db.sql file and copy the contents. The file contains the SQL statements necessary to create the database you will be working with for the rest of the practical.
Switch to the browser window/tab where you have phpMyAdmin open and click on the SQL tab at the top. Paste the contents of your clipboard (the SQL) into the textbox and click Go in the bottom right corner (below the textbox). You should receive a series of status messages with green ticks indicating the successful execution of the statements.
4. Switch back to VS Code. The index.php file contains the template of a tasks page that you will complete by adding the necessary PHP code so that it retrieves the current list of tasks from the database when loaded. If you inspect the contents of the file, you will note the existence of PHP opening (<?php) and closing (?>) tags that denote the block where PHP can be written. The require_once statement includes the contents of another PHP file, dbconn.inc.php, that will establish a connection to the database, so you should not delete it (you are encouraged to inspect it, though).
5. Within the PHP tags, below the require_once, create a variable called sql and assign it the following query string:
"SELECT id, name FROM Task WHERE completed=0;"
6. PHP includes native support for accessing various DBMSs. For this practical, you will use the procedural MySQLi interface to interact with the MySQL database and create/read/update records.
The mysqli_query function is used to submit a query to the database and retrieve the results. The function returns a result set on successful execution or a false value on failure, which makes it suitable for using within an if statement to ensure that result parsing only occurs when there are results available.
Declare an if statement using the following as the condition:
$result = mysqli_query($conn, $sql)
The $result variable will store the result set on success and will be available within the if statement for parsing. The $conn variable is declared within dbconn.inc.php and holds a connection to the database, and the $sql variable is as defined previously.
7. Within the if statement, declare another if statement and use the mysqli_num_rows function to check that the number of rows returned by the query is at least 1. The mysqli_num_rows function takes a result object as a parameter, which you have available as $result, and returns the number of rows.
8. You are now ready to parse the results and generate the HTML necessary to display them on the page. You can access each of the rows by using the mysqli_fetch_assoc function, which will return an associative array for the next result row or NULL if there are no more rows. As with the mysqli_num_rows function, mysqli_fetch_assoc takes a result object as its parameter.
Because there is likely to be more than one row of data, you will want to use a loop to iterate over each row. Within the if statement defined in step 7, declare a while loop using the mysqli_fetch_assoc function as the condition. The loop will cease when all rows in the result set have been iterated over.
while ($row = mysqli_fetch_assoc($result)) { ... }
Within the loop, use one or more echo statements to output the name of each task. You can access the task name via the row key name; i.e., $row[-name-]. The tasks should be output as an unordered list, so you will need to surround the while loop with echo statements for the list tags and then incorporate the list item tags into your output of each task name within the loop — an example of this is as follows (shown in pseudo code):
echo opening list tag
while ($row = mysqli_fetch_assoc($result)) { echo the task name within list item tags
} echo closing list tag
It is good practice to free up resources after they are finished with, so call mysqli_free_result after the loop: mysqli_free_result($result);
9. Finally, close the connection to the database by calling the mysqli_close function as follows after the outer if statement you defined in step 6. The function takes in a reference to the current connection. mysqli_close($conn);
Navigate to index.php in a browser — remember, the page is hosted on a local web server instance, so the URL will be something like http://localhost/practical/checkpoint-1/index.php — and verify that you see a list of tasks.
——————————————————— End of Checkpoint 1 ———————————————————
Flinders University / College of Science and Engineering
Checkpoint 2
This checkpoint will involve adding a menu to your page and styling it with CSS.
1. Make a copy of your checkpoint-1 directory and name it checkpoint-2.
2. Open the checkpoint-2 directory in VS Code.
3. Uncomment the following two lines in your index.php page
<!-- <?php require_once "inc/menu.inc.php"; ?> -->
<!-- <script src="scripts/script.js" defer></script> -->
The first includes the contents of a menu file that is provided for you within the inc directory. The menu is composed of an unordered list of menu items (anchor elements). You will create the second and third pages in later checkpoints. Take note of the id attribute defined for the list, which you can use when styling it.
The second line references a script that has also been provided for you in the scripts directory. This script contains a JavaScript function that will apply a class called selected to the anchor element in the menu that corresponds to the current page. It does this by inspecting the current URL, via document.location, and checking for a match against each anchor element’s href attribute. As before, you can use the class when styling the menu.
4. Within the styles subdirectory, create a new file called style.css. All CSS you create must be defined in this file. There should be no internal (via style tags) or inline (via a style attribute) CSS used.
5. Add a link element in index.php that refers to the styles/style.css file.
6. Edit index.php and style.css to produce a page that looks similar to the image below.
Note that minor differences are fine as long as your page is a close replica.
Key style details:
• The margin for the entire page is 15px on all sides.
• The body font is Helvetica (fallback to Arial then sans-serif) and 14px in size.
• The heading font size is 16px and the font weight is bold. The heading also has a bottom margin of 10px.
• The paragraph/list item line height for each task in the list is 1.5em.
• Menu:
o Each menu item has a right margin of 15px, a colour of black, and floats to the left of its container.
o When hovered over, menu items change colour to #1565c0.
o A selected menu item has a colour of #1565c0 and a font weight of bold.
o The menu border is a solid 1px line in the colour #ddd (bottom only).
o The menu has a bottom padding of 10px.
o The menu has a bottom margin of 30px.
——————————————————— End of Checkpoint 2 —————————————————
Checkpoint 3
This checkpoint will involve creating a new PHP page to add tasks.
1. Make a copy of your checkpoint-2 directory and name it checkpoint-3.
2. Open your checkpoint-3 directory in VS Code. Create two new files in the root of the checkpoint-3 directory called add.php and add-task.php.
3. Edit add.php and styles/style.css to produce a page that looks like the image below. The provided HTML can be used as a starting point. Note that minor differences are fine as long as your page is a close replica.
• All input elements should have:
o a display property of block;
o a 1px solid border in the colour #babdbe;
o a left/right padding of 6px and a top/bottom padding of 3px;
o a bottom margin of 10px.
• The text input element has a width of 300px and should also define the required attribute;
• The submit button text changes colour to #1565C0 when hovered over.
4. Ensure your text input element has a name attribute as you will need to reference this when retrieving its value. In your form element, define the method and action attributes accordingly:
• Assign the action attribute a value of add-task.php.
• Assign the method attribute a value of POST.
5. Within add-task.php, add PHP code to process the submitted form and insert a new record into the database.
This file will process the form only, so it does not need any HTML and can therefore be a pure PHP file. At the top of the file, declare the PHP opening tag. As the form is submitted using the POST method rather than the GET method, the task name value will not be appended to the URL as a query string. Instead, the value will be stored in the body of the HTTP request. You can access the value via PHP’s ‘superglobal’ $_POST variable, which is an associative array of the current POST data. Key values in this array correspond to input element name attribute values.
You can verify that a value is available for a given key by using PHP’s isset function, and it is generally considered good practice to do so (using an if statement) before attempting to use it; e.g.,
if (isset($_POST["task-name"])) { ... } // This assumes the input element is named 'task-name'
The next step is to include the dbconn.inc.php file to open a connection to the database. This can be achieved in the same way as in index.php; i.e., require_once "inc/dbconn.inc.php";
Next, you will prepare the SQL statement. Declare a variable called sql and assign it the following SQL string:
"INSERT INTO Task(name) VALUES(?);"
The ? is a placeholder for the task name that will be extracted from the $_POST array. Declare another variable called statement and assign it the value returned by the function call mysqli_stmt_init($conn). This function will initialise a new SQL statement using the established database connection stored in $conn. Add the following calls to mysqli_stmt_prepare and mysqli_stmt_bind_param below to prepare the statement and bind the task name to the ? placeholder (change task-name to whatever you named your input element).
mysqli_stmt_prepare($statement, $sql);
mysqli_stmt_bind_param($statement, 's', htmlspecialchars($_POST["task-name"])); // The 's' in mysqli_stmt_bind_param indicates a string parameter
The statement is now ready to execute. You may be wondering why the POST value was not simply concatenated to the SQL string above. Whilst this would work, it is open to SQL injection attacks as the value entered by the user could contain malicious SQL. Preparing a statement in the manner described protects against this as the DBMS is sent the statement structure before any user-provided parameter values are inserted into it. The htmlspecialchars function wrapping $_POST["task-name"] is a PHP function that provides similar protections by replacing special HTML characters to prevent unwanted HTML from being inserted; this is a form of input sanitisation. PHP's filter_var function provides many more options for performing input sanitisation of form fields.
To execute the statement, pass it to the mysqli_stmt_execute function. This function returns a boolean that indicates whether the statement was executed successfully or not (i.e., if the task was successfully inserted into the database). If the return value is true, use the header function to redirect the user back to index.php. If the return value is false, you can output the error that occurred by echoing the value returned by a call to mysqli_error($conn).
header("location: index.php");
Finally, as you did previously in step 9 of Checkpoint 1, close the connection to the database.
6. Verify that you can enter a new task into the add task page and that it appears at the bottom of the task list.
——————————————————— End of Checkpoint 3 ———————————————————
Checkpoint 4
This checkpoint will involve adding functionality to support completing tasks.
1. Make a copy of your checkpoint-3 directory and name it checkpoint-4.
2. Open your checkpoint-4 directory in VS Code. Edit your index.php file and modify the output of each task name (as completed in step 8 of Checkpoint 1) as follows:
• Embed the task name within an HTML anchor element.
• Set the href attribute of the anchor element to complete.php?id=XX where XX is the task’s id. You can get the task id value using the row key id; i.e., $row[-id-].
3. Modify your styles/style.css to style the task anchor elements as follows:
• The task name (now a link) should still appear in black and with no underline.
• When hovered over, the task name should change colour to #1565c0 and apply a line-through decoration
4. Create a new document called complete.php in the root of the checkpoint-4 directory. Edit this page so that it reads the id value from the URL query string (accessible in PHP using the $_GET array, which works similarly to $_POST) and then updates the associated task record in the database to mark the task as completed.
As with the add-task.php page, complete.php does not need to display any HTML and can therefore also be a pure PHP page. The PHP code required to update the database is essentially the same as the previous checkpoint. However, as noted above, you will need to use $_GET to retrieve the id value instead of $_POST (do not forget to modify the mysqli_stmt_bind_param function) and your sql variable value will need to change to the following:
"UPDATE Task SET completed=1, updated=now() WHERE id=?;" Remember to close the database connection.
5. Verify that clicking a task in the task list causes it to disappear off the task list. You may also wish to revisit phpMyAdmin and view the Task table to check that the task you selected has been updated in the database with a completed value of 1. Finally, if you have not already done so, mark the Checkpoint 1 to 4 tasks complete on your list.
——————————————————— End of Checkpoint 4 ———————————————————
Checkpoint 5
The final checkpoint will involve adding a new PHP page to display a history of completed tasks.
1. Make a copy of your checkpoint-4 directory and name it checkpoint-5.
2. Open your checkpoint-5 directory in VS Code. Create a new file called history.php in the root of the checkpoint-5
directory. Edit the page so that it displays a list of completed tasks, which can be retrieved from the database by using the following SQL query:
"SELECT name FROM Task WHERE completed=1 ORDER BY updated DESC;"
You may wish to refer to your index.php page as a template/reference. Note that completed tasks are not expected to be clickable (i.e., anchor elements/links).
3. Modify your styles/style.css file accordingly so that the page looks like the image below. Each completed task should appear in the colour #9e9e9e with a line-through decoration.
Note that minor differences are fine as long as your page is a close replica.
4. Finally, complete the Checkpoint 5 task and verify that it appears at the top of the history.php page.
——————————————————— End of Checkpoint 5 ———————————————————
Extension
Note that this is an optional extension exercise and is not a checkpoint. It does not contribute any marks.
Extend your index.php page so that it displays the number of tasks as illustrated in the images below. The formatting should display the correct grammar for singular and plural task counts (e.g., 1 task or 2 tasks). If there are no tasks in the list, display No tasks instead. The number of active tasks can be retrieved with the following SQL query:
"SELECT count(*) FROM Task WHERE completed=0;"
Output
The top-level output key contains a set of options instructing webpack on how and where it should output your bundles, assets, and anything else you bundle or load with webpack.
output.assetModuleFilename
string = '[hash][ext][query]'
output.filename 相同,不过应用于 Asset Modules
对从数据 URI 替换构建的静态资源,[name], [file], [query], [fragment], [base][path] 为空字符串。
output.asyncChunks
boolean = true
Create async chunks that are loaded on demand.
webpack.config.js
module.exports = {
//...
output: {
//...
asyncChunks: true,
},
};
output.auxiliaryComment
string object
When used in tandem with output.library and output.libraryTarget, this option allows users to insert comments within the export wrapper. To insert the same comment for each libraryTarget type, set auxiliaryComment to a string:
webpack.config.js
module.exports = {
//...
output: {
library: 'someLibName',
libraryTarget: 'umd',
filename: 'someLibName.js',
auxiliaryComment: 'Test Comment',
},
};
which will yield the following:
someLibName.js
(function webpackUniversalModuleDefinition(root, factory) {
// Test Comment
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory(require('lodash'));
// Test Comment
else if (typeof define === 'function' && define.amd)
define(['lodash'], factory);
// Test Comment
else if (typeof exports === 'object')
exports['someLibName'] = factory(require('lodash'));
// Test Comment
else root['someLibName'] = factory(root['_']);
})(this, function (__WEBPACK_EXTERNAL_MODULE_1__) {
// ...
});
For fine-grained control over each libraryTarget comment, pass an object:
webpack.config.js
module.exports = {
//...
output: {
//...
auxiliaryComment: {
root: 'Root Comment',
commonjs: 'CommonJS Comment',
commonjs2: 'CommonJS2 Comment',
amd: 'AMD Comment',
},
},
};
output.charset
boolean = true
Tells webpack to add charset="utf-8" to the HTML <script> tags.
output.chunkFilename
string = '[id].js' function (pathData, assetInfo) => string
This option determines the name of non-initial chunk files. See the output.filename option for details on the possible values.
Note that these filenames need to be generated at runtime to send requests for chunks. Because of this, placeholders like [name] and [chunkhash] need to add a mapping from the chunk id to the placeholder value to the output bundle with the webpack runtime. This increases the size and may invalidate the bundle when the placeholder value for any chunk changes.
By default [id].js is used, or a value inferred from output.filename ([name] is replaced with [id], or [id]. is prepended).
webpack.config.js
module.exports = {
//...
output: {
//...
chunkFilename: '[id].js',
},
};
Usage as a function:
webpack.config.js
module.exports = {
//...
output: {
chunkFilename: (pathData) => {
return pathData.chunk.name === 'main' ? '[name].js' : '[name]/[name].js';
},
},
};
output.chunkFormat
false string: 'array-push' | 'commonjs' | 'module' | <any string>
The format of chunks (formats included by default are 'array-push' (web/WebWorker), 'commonjs' (node.js), 'module' (ESM), but others might be added by plugins).
webpack.config.js
module.exports = {
//...
output: {
//...
chunkFormat: 'commonjs',
},
};
output.chunkLoadTimeout
number = 120000
The number of milliseconds before the chunk request times out; defaults to 120000. This option is supported since webpack 2.6.0.
webpack.config.js
module.exports = {
//...
output: {
//...
chunkLoadTimeout: 30000,
},
};
output.chunkLoadingGlobal
string = 'webpackChunkwebpack'
The global variable used by webpack for loading chunks.
webpack.config.js
module.exports = {
//...
output: {
//...
chunkLoadingGlobal: 'myCustomFunc',
},
};
output.chunkLoading
false string: 'jsonp' | 'import-scripts' | 'require' | 'async-node' | 'import' | <any string>
The method to load chunks (methods included by default are 'jsonp' (web), 'import' (ESM), 'importScripts' (WebWorker), 'require' (sync node.js), 'async-node' (async node.js), but others might be added by plugins).
webpack.config.js
module.exports = {
//...
output: {
//...
chunkLoading: 'async-node',
},
};
output.clean
5.20.0+
boolean { dry?: boolean, keep?: RegExp | string | ((filename: string) => boolean) }
module.exports = {
//...
output: {
clean: true, // Clean the output directory before emit.
},
};
module.exports = {
//...
output: {
clean: {
dry: true, // Log the assets that should be removed instead of deleting them.
},
},
};
module.exports = {
//...
output: {
clean: {
keep: /ignored\/dir\//, // Keep these assets under 'ignored/dir'.
},
},
};
// or
module.exports = {
//...
output: {
clean: {
keep(asset) {
return asset.includes('ignored/dir');
},
},
},
};
You can also use the hook function:
webpack.CleanPlugin.getCompilationHooks(compilation).keep.tap(
'Test',
(asset) => {
if (/ignored\/dir\//.test(asset)) return true;
}
);
output.compareBeforeEmit
boolean = true
Tells webpack to check if the file to be emitted already exists in the output file system and has the same content before writing it.
module.exports = {
//...
output: {
compareBeforeEmit: false,
},
};
output.crossOriginLoading
boolean = false string: 'anonymous' | 'use-credentials'
Tells webpack to enable the cross-origin attribute when loading chunks. Only takes effect when the target is set to 'web', which uses JSONP for loading on-demand chunks by adding script tags.
• 'anonymous' - Enable cross-origin loading without credentials
• 'use-credentials' - Enable cross-origin loading with credentials
output.devtoolFallbackModuleFilenameTemplate
string function (info)
A fallback used when the template string or function above yields duplicates.
See output.devtoolModuleFilenameTemplate.
output.devtoolModuleFilenameTemplate
string = 'webpack://[namespace]/[resource-path]?[loaders]' function (info) => string
This option is only used when devtool uses an option that requires module names.
Customize the names used in each source map's sources array. This can be done by passing a template string or function. For example, when using devtool: 'eval', this is the default:
webpack.config.js
module.exports = {
//...
output: {
devtoolModuleFilenameTemplate:
'webpack://[namespace]/[resource-path]?[loaders]',
},
};
The following substitutions are available in template strings (via webpack's internal ModuleFilenameHelpers):

| Template | Description |
| --- | --- |
| [absolute-resource-path] | The absolute filename |
| [all-loaders] | Automatic and explicit loaders and params up to the name of the first loader |
| [hash] | The hash of the module identifier |
| [id] | The module identifier |
| [loaders] | Explicit loaders and params up to the name of the first loader |
| [resource] | The path used to resolve the file and any query params used on the first loader |
| [resource-path] | The path used to resolve the file without any query params |
| [namespace] | The module namespace. This is usually the library name when building as a library, empty otherwise |
When using a function, the same options are available camel-cased via the info parameter:
module.exports = {
//...
output: {
devtoolModuleFilenameTemplate: (info) => {
return `webpack:///${info.resourcePath}?${info.loaders}`;
},
},
};
If multiple modules would result in the same name, output.devtoolFallbackModuleFilenameTemplate is used instead for these modules.
output.devtoolNamespace
string
This option determines the module namespace used with output.devtoolModuleFilenameTemplate. When not specified, it defaults to the value of output.uniqueName. It is used to prevent source file path collisions in source maps when loading multiple libraries built with webpack.
For example, if you have two libraries with namespaces library1 and library2, both of which have a file ./src/index.js (potentially with different contents), they will expose these files as webpack://library1/./src/index.js and webpack://library2/./src/index.js.
output.enabledChunkLoadingTypes
[string: 'jsonp' | 'import-scripts' | 'require' | 'async-node' | <any string>]
The list of chunk loading types enabled for use by entry points. Will be automatically filled by webpack. Only needed when using a function as an entry option and returning a chunkLoading option from it.
webpack.config.js
module.exports = {
//...
output: {
//...
enabledChunkLoadingTypes: ['jsonp', 'require'],
},
};
output.enabledLibraryTypes
[string]
List of library types enabled for use by entry points.
module.exports = {
//...
output: {
enabledLibraryTypes: ['module'],
},
};
output.enabledWasmLoadingTypes
[string]
List of wasm loading types enabled for use by entry points.
module.exports = {
//...
output: {
enabledWasmLoadingTypes: ['fetch'],
},
};
output.environment
Tell webpack which version of ES features may be used in the generated runtime code.
module.exports = {
output: {
environment: {
// The environment supports arrow functions ('() => { ... }').
arrowFunction: true,
// The environment supports BigInt as literal (123n).
bigIntLiteral: false,
// The environment supports const and let for variable declarations.
const: true,
// The environment supports destructuring ('{ a, b } = obj').
destructuring: true,
// The environment supports an async import() function to import EcmaScript modules.
dynamicImport: false,
// The environment supports 'for of' iteration ('for (const x of array) { ... }').
forOf: true,
// The environment supports ECMAScript Module syntax to import ECMAScript modules (import ... from '...').
module: false,
// The environment supports optional chaining ('obj?.a' or 'obj?.()').
optionalChaining: true,
// The environment supports template literals.
templateLiteral: true,
},
},
};
output.filename
string function (pathData, assetInfo) => string
This option determines the name of each output bundle. The bundle is written to the directory specified by the output.path option.
For a single entry point, filename can be a static name.
webpack.config.js
module.exports = {
//...
output: {
filename: 'bundle.js',
},
};
However, when creating multiple bundles via more than one entry point, code splitting, or various plugins, you should use one of the following substitutions to give each bundle a unique name...
Using the entry name:
webpack.config.js
module.exports = {
//...
output: {
filename: '[name].bundle.js',
},
};
Using the internal chunk id:
webpack.config.js
module.exports = {
//...
output: {
filename: '[id].bundle.js',
},
};
Using hashes generated from the generated content:
webpack.config.js
module.exports = {
//...
output: {
filename: '[contenthash].bundle.js',
},
};
Combining multiple substitutions:
webpack.config.js
module.exports = {
//...
output: {
filename: '[name].[contenthash].bundle.js',
},
};
Using a function to return the filename:
webpack.config.js
module.exports = {
//...
output: {
filename: (pathData) => {
return pathData.chunk.name === 'main' ? '[name].js' : '[name]/[name].js';
},
},
};
Make sure to read the Caching guide for details. It involves more steps than only setting this option.
Note this option is called filename, but you are still allowed to use something like 'js/[name]/bundle.js' to create a folder structure.
Note this option does not affect output files for on-demand-loaded chunks; it only affects output files that are initially loaded. For on-demand-loaded chunk files, use the output.chunkFilename option to control output. Files created by loaders are also not affected; in that case, you would have to try the specific loader's available options.
Template strings
The following substitutions are available in template strings (via webpack's internal TemplatedPathPlugin):

Substitutions available on compilation level:

| Template | Description |
| --- | --- |
| [fullhash] | The full hash of the compilation |
| [hash] | Same, but deprecated |

Substitutions available on chunk level:

| Template | Description |
| --- | --- |
| [id] | The ID of the chunk |
| [name] | The name of the chunk, if set, otherwise the ID of the chunk |
| [chunkhash] | The hash of the chunk, including all elements of the chunk |
| [contenthash] | The hash of the chunk, including only elements of this content type (affected by optimization.realContentHash) |

Substitutions available on module level:

| Template | Description |
| --- | --- |
| [id] | The ID of the module |
| [moduleid] | Same, but deprecated |
| [hash] | The hash of the module |
| [modulehash] | Same, but deprecated |
| [contenthash] | The hash of the content of the module |

Substitutions available on file level:

| Template | Description |
| --- | --- |
| [file] | Filename and path, without query or fragment |
| [query] | Query with leading ? |
| [fragment] | Fragment with leading # |
| [base] | Only filename (including extension), without path |
| [filebase] | Same, but deprecated |
| [path] | Only path, without filename |
| [name] | Only filename without extension or path |
| [ext] | Extension with leading . (not available for output.filename) |

Substitutions available on URL level:

| Template | Description |
| --- | --- |
| [url] | URL |

The length of [hash], [contenthash], or [chunkhash] can be specified using [hash:16] (defaults to 20). Alternatively, specify output.hashDigestLength to configure the length globally.

It is possible to filter out placeholder replacement when you want to use one of the placeholders in the actual filename. For example, to output the file [name].js, you have to escape the [name] placeholder by adding backslashes between the brackets. So [\name\] generates [name] instead of getting replaced with the name.

For example: [\id\] generates [id] instead of getting replaced with the id.

If using a function for this option, the function will be passed an object containing data for the substitutions in the tables above. Substitutions will also be applied to the returned string. The passed object will have the following type (properties available depending on context):
type PathData = {
hash: string;
hashWithLength: (number) => string;
chunk: Chunk | ChunkPathData;
module: Module | ModulePathData;
contentHashType: string;
contentHash: string;
contentHashWithLength: (number) => string;
filename: string;
url: string;
runtime: string | SortableSet<string>;
chunkGraph: ChunkGraph;
};
type ChunkPathData = {
id: string | number;
name: string;
hash: string;
hashWithLength: (number) => string;
contentHash: Record<string, string>;
contentHashWithLength: Record<string, (number) => string>;
};
type ModulePathData = {
id: string | number;
hash: string;
hashWithLength: (number) => string;
};
output.globalObject
string = 'self'
When targeting a library, especially when libraryTarget is 'umd', this option indicates what global object will be used to mount the library. To make UMD builds available on both browsers and Node.js, set output.globalObject to 'this'. Defaults to self for web-like targets.
The return value of your entry point will be assigned to the global object using the value of output.library.name. Depending on the value of the target option, the global object could change respectively, e.g., self, global, or globalThis.
For example:
webpack.config.js
module.exports = {
// ...
output: {
library: 'myLib',
libraryTarget: 'umd',
filename: 'myLib.js',
globalObject: 'this',
},
};
output.hashDigest
string = 'hex'
The encoding to use when generating the hash. All encodings from Node.js' hash.digest are supported. Using 'base64' for filenames might be problematic since it has the character / in its alphabet. Likewise, 'latin1' could contain any character.
output.hashDigestLength
number = 20
The prefix length of the hash digest to use.
output.hashFunction
string = 'md4' function
The hashing algorithm to use. All functions from Node.js' crypto.createHash are supported. Since 4.0.0-alpha2, hashFunction can also be a constructor for a custom hash function. You can provide a non-crypto hash function for performance reasons.
module.exports = {
//...
output: {
hashFunction: require('metrohash').MetroHash64,
},
};
Make sure that the hash function has update and digest methods available.
output.hashSalt
An optional salt to update the hash via Node.js' hash.update.
output.hotUpdateChunkFilename
string = '[id].[fullhash].hot-update.js'
Customize the filenames of hot update chunks. See the output.filename option for details on the possible values.
The only placeholders allowed here are [id] and [fullhash], the default being:
webpack.config.js
module.exports = {
//...
output: {
hotUpdateChunkFilename: '[id].[fullhash].hot-update.js',
},
};
output.hotUpdateGlobal
string
Only used when target is set to 'web', which uses JSONP for loading hot updates.
A JSONP function is used to asynchronously load hot-update chunks.
For details see output.chunkLoadingGlobal.
output.hotUpdateMainFilename
string = '[runtime].[fullhash].hot-update.json' function
Customize the main hot update filename. [fullhash] and [runtime] are available as placeholders.
output.iife
boolean = true
Tells webpack to add an IIFE wrapper around emitted code.
module.exports = {
//...
output: {
iife: true,
},
};
output.importFunctionName
string = 'import'
The name of the native import() function. Can be used for polyfilling, e.g. via dynamic-import-polyfill.
webpack.config.js
module.exports = {
//...
output: {
importFunctionName: '__import__',
},
};
output.library
Output a library exposing the exports of your entry point.
• Type: string | string[] | object
Let's take a look at a simple example.
webpack.config.js
module.exports = {
// …
entry: './src/index.js',
output: {
library: 'MyLibrary',
},
};
Say you're exporting a function like below in your src/index.js entry:
export function hello(name) {
console.log(`hello ${name}`);
}
With that, the variable MyLibrary will be bound with the exports of your entry file, and here's how to consume the webpack-bundled library:
<script src="https://example.org/path/to/my-library.js"></script>
<script>
MyLibrary.hello('webpack');
</script>
In the above example, we passed a single entry file to entry, but webpack can accept multiple entries, for example an array or an object.
1. If you provide an array as the entry, only the last one in the array will be exposed.
module.exports = {
// …
entry: ['./src/a.js', './src/b.js'], // only exports from b.js will be exposed
output: {
library: 'MyLibrary',
},
};
2. If you provide an object as the entry, all entries can be exposed via the array syntax of library:
module.exports = {
// …
entry: {
a: './src/a.js',
b: './src/b.js',
},
output: {
filename: '[name].js',
library: ['MyLibrary', '[name]'], // name is a placeholder here
},
};
Assuming both a.js and b.js export a function hello, here's how to consume the libraries:
<script src="https://example.org/path/to/a.js"></script>
<script src="https://example.org/path/to/b.js"></script>
<script>
MyLibrary.a.hello('webpack');
MyLibrary.b.hello('webpack');
</script>
Check out the example for more.
Note that the above configuration will not work as expected if you're going to configure library options per entry point. Here is how to do it under each of your entries:
module.exports = {
// …
entry: {
main: {
import: './src/index.js',
library: {
// all options under `output.library` can be used here
name: 'MyLibrary',
type: 'umd',
umdNamedDefine: true,
},
},
another: {
import: './src/another.js',
library: {
name: 'AnotherLibrary',
type: 'commonjs2',
},
},
},
};
output.library.name
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
},
},
};
Specify a name for the library.
• Type:
string | string[] | {amd?: string, commonjs?: string, root?: string | string[]}
output.library.type
Configure how the library will be exposed.
• Type: string
Types included by default are 'var', 'module', 'assign', 'assign-properties', 'this', 'window', 'self', 'global', 'commonjs', 'commonjs2', 'commonjs-module', 'commonjs-static', 'amd', 'amd-require', 'umd', 'umd2', 'jsonp', and 'system', but others might be added by plugins.
For the following examples, we'll use _entry_return_ to indicate the value returned by the entry point.
Expose a Variable
These options assign the return value of the entry point (e.g. whatever the entry point exported) to the name provided by output.library.name at whatever scope the bundle was included at.
type: 'var'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'var',
},
},
};
When your library is loaded, the return value of your entry point will be assigned to a variable:
var MyLibrary = _entry_return_;
// In a separate script with `MyLibrary` loaded...
MyLibrary.doSomething();
type: 'assign'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'assign',
},
},
};
这将生成一个隐含的全局变量,它有可能重新分配一个现有的值(请谨慎使用):
MyLibrary = _entry_return_;
Be aware that if MyLibrary isn't defined before your library loads, it will be set in the global scope.
type: 'assign-properties'
5.16.0+
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'assign-properties',
},
},
};
type: 'assign' 相似但是更安全,因为如果 MyLibrary 已经存在的话,它将被重用:
// 仅在当其不存在是创建 MyLibrary
MyLibrary = typeof MyLibrary === 'undefined' ? {} : MyLibrary;
// 然后复制返回值到 MyLibrary
// 与 Object.assign 行为类似
// 例如,你像下面这样在你的入口导出一个 `hello` 函数
export function hello(name) {
console.log(`Hello ${name}`);
}
// 在另外一个已经加载 MyLibrary 的脚本中
// 你可以像这样运行 `hello` 函数
MyLibrary.hello('World');
Expose Via Object Assignment
These options assign the return value of the entry point (e.g. whatever the entry point exported) to an object under the name defined by output.library.name.
type: 'this'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'this',
},
},
};
入口起点的返回值 将会被赋值给 this 对象下的 output.library.name 属性。this 的含义取决于你:
this['MyLibrary'] = _entry_return_;
// 在一个单独的脚本中
this.MyLibrary.doSomething();
MyLibrary.doSomething(); // 如果 `this` 为 window 对象
type: 'window'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'window',
},
},
};
入口起点的返回值 将会被赋值给 window 对象下的 output.library.name
window['MyLibrary'] = _entry_return_;
window.MyLibrary.doSomething();
type: 'global'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'global',
},
},
};
The return value of your entry point will be copied to the global object under the name given by output.library.name. Depending on the target value, the global object could change respectively, e.g., self, global, or globalThis.
global['MyLibrary'] = _entry_return_;
global.MyLibrary.doSomething();
type: 'commonjs'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'commonjs',
},
},
};
The return value of your entry point will be assigned to the exports object under the name given by output.library.name. As the name implies, this is used in CommonJS environments.
exports['MyLibrary'] = _entry_return_;
require('MyLibrary').doSomething();
Module Definition Systems
These options will generate a bundle with a complete header to ensure compatibility with various module systems. The output.library.name option has a different meaning under each output.library.type.
type: 'module'
module.exports = {
// …
experiments: {
outputModule: true,
},
output: {
library: {
// do not specify a `name` here
type: 'module',
},
},
};
Output ES modules.
Note that this feature is still experimental and not fully supported yet, so make sure to enable experiments.outputModule beforehand. You can also track the development progress here.
type: 'commonjs2'
module.exports = {
// …
output: {
library: {
// note there's no `name` here
type: 'commonjs2',
},
},
};
The return value of your entry point will be assigned to module.exports. As the name implies, this is used in Node.js (CommonJS) environments:
module.exports = _entry_return_;
require('MyLibrary').doSomething();
If we specify output.library.name with type: commonjs2, the return value of your entry point will be assigned to module.exports.[output.library.name].
type: 'commonjs-static'
5.66.0+
module.exports = {
// …
output: {
library: {
// note there's no `name` here
type: 'commonjs-static',
},
},
};
Individual exports will be set as properties on module.exports. The 'static' in the name refers to the output being statically analysable, so named exports can be imported into ESM via Node.js:
Input:
export function doSomething() {}
Output:
function doSomething() {}
// …
exports.doSomething = __webpack_exports__.doSomething;
Consumption (CommonJS):
const { doSomething } = require('./output.cjs'); // doSomething => [Function: doSomething]
Consumption (ESM):
import { doSomething } from './output.cjs'; // doSomething => [Function: doSomething]
type: 'amd'
This exposes your library as an AMD module.
AMD modules require the entry chunk (for example, the first script loaded by the <script> tag) to be defined with specific properties, such as define and require, which are typically provided by RequireJS or any compatible loader (such as almond). Otherwise, loading the resulting AMD bundle directly will result in an error such as define is not defined.
With the following configuration...
module.exports = {
//...
output: {
library: {
name: 'MyLibrary',
type: 'amd',
},
},
};
...the generated output will be defined with the name "MyLibrary", i.e.:
define('MyLibrary', [], function () {
return _entry_return_;
});
The bundle can be included using a script tag, and can be invoked like so:
require(['MyLibrary'], function (MyLibrary) {
// Do something with the library...
});
If output.library.name is not defined, the following is generated instead.
define(function () {
return _entry_return_;
});
If loaded directly with a <script> tag, it can only work via a RequireJS-compatible asynchronous module loader through the actual path of the file. In this case, output.path and output.filename may become important for this particular setup if they are exposed directly on the server.
type: 'amd-require'
module.exports = {
//...
output: {
library: {
name: 'MyLibrary',
type: 'amd-require',
},
},
};
This packs your output with an immediately executed AMD require(dependencies, factory) wrapper.
The 'amd-require' type allows for the use of AMD dependencies without needing a separate later invocation. As with the 'amd' type, this depends on the appropriate require function being available in the environment in which the webpack output is loaded.
With this type, the library name can't be used.
type: 'umd'
This exposes your library under all the module definitions, allowing it to work with CommonJS, AMD, and as a global variable. Take a look at the UMD Repository to learn more.
In this case, you need the library.name property to name your module:
module.exports = {
//...
output: {
library: {
name: 'MyLibrary',
type: 'umd',
},
},
};
And finally the output is:
(function webpackUniversalModuleDefinition(root, factory) {
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if (typeof define === 'function' && define.amd) define([], factory);
else if (typeof exports === 'object') exports['MyLibrary'] = factory();
else root['MyLibrary'] = factory();
})(global, function () {
return _entry_return_;
});
Note that, per the object assignment section, omitting library.name will result in all properties returned by the entry point being assigned directly to the root object. Example:
module.exports = {
//...
output: {
libraryTarget: 'umd',
},
};
The output will be:
(function webpackUniversalModuleDefinition(root, factory) {
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if (typeof define === 'function' && define.amd) define([], factory);
else {
var a = factory();
for (var i in a) (typeof exports === 'object' ? exports : root)[i] = a[i];
}
})(global, function () {
return _entry_return_;
});
You may specify an object for library.name for differing names per target:
module.exports = {
//...
output: {
library: {
name: {
root: 'MyLibrary',
amd: 'my-library',
commonjs: 'my-common-library',
},
type: 'umd',
},
},
};
type: 'system'
This exposes your library as a System.register module. This feature was first released in webpack 4.30.0.
System modules require that the global variable System is present in the browser when the webpack bundle is executed. Compiling to the System.register format allows you to System.import('/bundle.js') without additional configuration, and have your webpack bundle loaded into the System module registry.
module.exports = {
//...
output: {
library: {
type: 'system',
},
},
};
Output:
System.register([], function (__WEBPACK_DYNAMIC_EXPORT__, __system_context__) {
return {
execute: function () {
// ...
},
};
});
In addition to setting output.library.type to system, add output.library.name to the configuration; the output bundle will then pass the library name as an argument to System.register:
System.register(
'MyLibrary',
[],
function (__WEBPACK_DYNAMIC_EXPORT__, __system_context__) {
return {
execute: function () {
// ...
},
};
}
);
Other Types
type: 'jsonp'
module.exports = {
// …
output: {
library: {
name: 'MyLibrary',
type: 'jsonp',
},
},
};
This wraps the return value of your entry point in a jsonp wrapper.
MyLibrary(_entry_return_);
The dependencies for your library will be defined by the externals config.
output.library.export
Specify which export should be exposed as a library.
• Type: string | string[]
It is undefined by default, which exports the whole (namespace) object. The examples below demonstrate the effect of this option when using output.library.type: 'var'.
module.exports = {
output: {
library: {
name: 'MyLibrary',
type: 'var',
export: 'default',
},
},
};
The default export of your entry point will be assigned to the library name:
// if your entry has a default export
var MyLibrary = _entry_return_.default;
You can also pass an array to output.library.export; it will be interpreted as a path to a module to be assigned to the library name:
module.exports = {
output: {
library: {
name: 'MyLibrary',
type: 'var',
export: ['default', 'subModule'],
},
},
};
Here is what the library code will look like:
var MyLibrary = _entry_return_.default.subModule;
output.library.auxiliaryComment
Add a comment in the UMD wrapper.
• Type: string | { amd?: string, commonjs?: string, commonjs2?: string, root?: string }
To insert the same comment for each umd type, set auxiliaryComment to a string.
module.exports = {
// …
mode: 'development',
output: {
library: {
name: 'MyLibrary',
type: 'umd',
auxiliaryComment: 'Test Comment',
},
},
};
This will yield the following:
(function webpackUniversalModuleDefinition(root, factory) {
//Test Comment
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
//Test Comment
else if (typeof define === 'function' && define.amd) define([], factory);
//Test Comment
else if (typeof exports === 'object') exports['MyLibrary'] = factory();
//Test Comment
else root['MyLibrary'] = factory();
})(self, function () {
return __entry_return_;
});
For fine-grained control, pass an object:
module.exports = {
// …
mode: 'development',
output: {
library: {
name: 'MyLibrary',
type: 'umd',
auxiliaryComment: {
root: 'Root Comment',
commonjs: 'CommonJS Comment',
commonjs2: 'CommonJS2 Comment',
amd: 'AMD Comment',
},
},
},
};
output.library.umdNamedDefine
boolean
When using output.library.type: 'umd', setting output.library.umdNamedDefine to true will name the AMD module of the UMD build. Otherwise, an anonymous define is used.
module.exports = {
//...
output: {
library: {
name: 'MyLibrary',
type: 'umd',
umdNamedDefine: true,
},
},
};
The AMD module will look like this:
define('MyLibrary', [], factory);
output.libraryExport
string [string]
Configure which module or modules will be exposed via the `libraryTarget`. It is `undefined` by default; the same behavior applies if you set `libraryTarget` to an empty string, e.g. `''`: the entire (namespace) object will be exported. The demo below shows the effect of setting `libraryTarget: 'var'`.
The following configurations are supported:
libraryExport: 'default' - The default export of your entry point will be assigned to the library target:
// if your entry has a default export of `MyDefaultModule`
var MyDefaultModule = _entry_return_.default;
libraryExport: 'MyModule' - The specified module will be assigned to the library target:
var MyModule = _entry_return_.MyModule;
libraryExport: ['MyModule', 'MySubModule'] - The array is interpreted as a module path to be assigned to the library target:
var MySubModule = _entry_return_.MyModule.MySubModule;
With the `libraryExport` configurations specified above, the resulting library could be used like this:
MyDefaultModule.doSomething();
MyModule.doSomething();
MySubModule.doSomething();
output.libraryTarget
string = 'var'
Configure how the library will be exposed. Any of the options below can be used. Note that this option works together with the value assigned to `output.library`. For all of the following examples, assume that `output.library` is configured as `MyLibrary`.
Expose a Variable
These options assign the return value of the entry point (e.g. whatever the entry point exports) to the name provided by `output.library`, at whatever scope the bundle was included at.
libraryTarget: 'var'
When your library is loaded, the return value of your entry point will be assigned to a variable:
var MyLibrary = _entry_return_;
// in a separate script...
MyLibrary.doSomething();
libraryTarget: 'assign'
This will generate an implied global which has the potential to reassign an existing value (use with caution):
MyLibrary = _entry_return_;
Note that if `MyLibrary` isn't defined earlier in your code, your library will be set in the global scope.
libraryTarget: 'assign-properties'
5.16.0+
Copy the return value to the target object if it exists, otherwise create it first:
// create the target object if it doesn't exist
MyLibrary = typeof MyLibrary === 'undefined' ? {} : MyLibrary;
// then copy the return value to MyLibrary
// similarly to what Object.assign does
// for example, you export a `hello` function in your entry
export function hello(name) {
console.log(`Hello ${name}`);
}
// in another script that runs MyLibrary
// you can run the `hello` function like so
MyLibrary.hello('World');
Expose Via Object Assignment
These options assign the return value of the entry point (e.g. whatever the entry point exports) to a specific object under the name defined by `output.library`.
If `output.library` is not assigned a non-empty string, the default behavior is that all properties returned by the entry point will be assigned to the object defined by the particular `output.libraryTarget`, via the following code fragment:
(function (e, a) {
for (var i in a) {
e[i] = a[i];
}
})(output.libraryTarget, _entry_return_);
libraryTarget: 'this'
The return value of your entry point will be assigned to a property of `this` under the name defined by `output.library`. The meaning of `this` is up to you:
this['MyLibrary'] = _entry_return_;
// in a separate script...
this.MyLibrary.doSomething();
MyLibrary.doSomething(); // if `this` is window
libraryTarget: 'window'
The return value of your entry point will be assigned to the `window` object under the property name defined by `output.library`.
window['MyLibrary'] = _entry_return_;
window.MyLibrary.doSomething();
libraryTarget: 'global'
The return value of your entry point will be assigned to the `global` object under the property name defined by `output.library`.
global['MyLibrary'] = _entry_return_;
global.MyLibrary.doSomething();
libraryTarget: 'commonjs'
The return value of your entry point will be assigned to the `exports` object under the property name defined by `output.library`. As the name implies, this is used in CommonJS environments:
exports['MyLibrary'] = _entry_return_;
require('MyLibrary').doSomething();
Module Definition Systems
These options will result in a bundle that comes with a more complete module header to ensure compatibility with various module systems. The `output.library` option will take on a different meaning depending on the `output.libraryTarget` option.
libraryTarget: 'module'
Output an ES module. Make sure to enable `experiments.outputModule` beforehand.
Note that this feature is not fully supported yet; please follow the progress in the tracking issue.
libraryTarget: 'commonjs2'
The return value of your entry point will be assigned to `module.exports`. As the name implies, this is used in CommonJS environments:
module.exports = _entry_return_;
require('MyLibrary').doSomething();
Note that `output.library` cannot be used with this particular `output.libraryTarget`; see the linked issue for details.
libraryTarget: 'amd'
This exposes your library as an AMD module.
AMD modules require the entry chunk (e.g. the first script loaded by a `<script>` tag) to be defined with specific properties such as `define` and `require`, which are typically provided by RequireJS or any compatible module loader (e.g. almond). Otherwise, loading the resulting AMD bundle directly will result in an error such as `define is not defined`.
With the following configuration:
module.exports = {
//...
output: {
library: 'MyLibrary',
libraryTarget: 'amd',
},
};
The generated output will be named "MyLibrary":
define('MyLibrary', [], function () {
return _entry_return_;
});
The bundle can be included as part of a script tag as a module, and invoked like this:
require(['MyLibrary'], function (MyLibrary) {
// Do something with the library...
});
If `output.library` is undefined, the following is generated instead.
define([], function () {
return _entry_return_;
});
This bundle will not work as expected, or possibly not at all (in the case of the almond loader), if loaded directly with a `<script>` tag. It will only work through a RequireJS-compatible asynchronous module loader via the actual path to that file, so in this case `output.path` and `output.filename` may become important for this particular setup if these are exposed directly on the server.
libraryTarget: 'amd-require'
This will wrap your output with an immediately-executed AMD `require(dependencies, factory)` wrapper.
The `'amd-require'` target allows the use of AMD dependencies without needing a separate later invocation. As with the `'amd'` target, this depends on an appropriate `require` function being available in the environment in which the webpack output is loaded.
With this target, the library name is ignored.
libraryTarget: 'umd'
This exposes your library under all module definitions, allowing it to work with CommonJS and AMD, and as a global variable. Take a look at the UMD repository to learn more.
In this case, you need the `library` property to name your module:
module.exports = {
//...
output: {
library: 'MyLibrary',
libraryTarget: 'umd',
},
};
The final output will be:
(function webpackUniversalModuleDefinition(root, factory) {
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if (typeof define === 'function' && define.amd) define([], factory);
else if (typeof exports === 'object') exports['MyLibrary'] = factory();
else root['MyLibrary'] = factory();
})(typeof self !== 'undefined' ? self : this, function () {
return _entry_return_;
});
Note that omitting `library` will result in all properties returned by the entry point being assigned directly to the root object, as documented in the object assignment section. Example:
module.exports = {
//...
output: {
libraryTarget: 'umd',
},
};
The output will be:
(function webpackUniversalModuleDefinition(root, factory) {
if (typeof exports === 'object' && typeof module === 'object')
module.exports = factory();
else if (typeof define === 'function' && define.amd) define([], factory);
else {
var a = factory();
for (var i in a) (typeof exports === 'object' ? exports : root)[i] = a[i];
}
})(typeof self !== 'undefined' ? self : this, function () {
return _entry_return_;
});
Since webpack 3.1.0, you may specify `library` as an object, with a different name per target:
module.exports = {
//...
output: {
library: {
root: 'MyLibrary',
amd: 'my-library',
commonjs: 'my-common-library',
},
libraryTarget: 'umd',
},
};
libraryTarget: 'system'
This exposes your library as a `System.register` module. This feature was first released in webpack 4.30.0.
System modules require that a global variable `System` is present in the browser when the webpack bundle is executed. Compiling to the `System.register` format allows you to use `System.import('/bundle.js')` without additional configuration and loads your webpack bundle into the System module registry.
module.exports = {
//...
output: {
libraryTarget: 'system',
},
};
Output:
System.register([], function (_export) {
return {
setters: [],
execute: function () {
// ...
},
};
});
In addition to setting `output.libraryTarget` to `system`, you can add `output.library` to the configuration; the output bundle will then have the library name as an argument to `System.register`:
System.register('my-library', [], function (_export) {
return {
setters: [],
execute: function () {
// ...
},
};
});
You can access the SystemJS context via `__system_context__`:
// Log the URL of the current System module
console.log(__system_context__.meta.url);
// Import a System module, using the current module's URL as the parentUrl
__system_context__.import('./other-file.js').then((m) => {
console.log(m);
});
Other Targets
libraryTarget: 'jsonp'
This will wrap the return value of your entry point in a jsonp wrapper:
MyLibrary(_entry_return_);
The dependencies for your library will be defined by the `externals` config.
output.module
boolean = false
Output JavaScript files as module type. This feature is still experimental and disabled by default.
When enabled, webpack will internally set `output.iife` to `false`, `output.scriptType` to `'module'`, and `terserOptions.module` to `true`.
If you're using webpack to build a library to be consumed by others, be sure to set `output.libraryTarget` to `'module'` when `output.module` is `true`.
module.exports = {
//...
experiments: {
outputModule: true,
},
output: {
module: true,
},
};
output.path
string = path.join(process.cwd(), 'dist')
The output directory, as an absolute path.
webpack.config.js
const path = require('path');
module.exports = {
//...
output: {
path: path.resolve(__dirname, 'dist/assets'),
},
};
Note that `[fullhash]` in this parameter will be replaced with the hash of the compilation. See the Caching guide for details.
output.pathinfo
boolean=true string: 'verbose'
Tells webpack to include comments in bundles with information about the contained modules. This option defaults to `true` in `development` mode and to `false` in `production` mode. `'verbose'` shows more information, such as exports, runtime requirements, and bailouts.
webpack.config.js
module.exports = {
//...
output: {
pathinfo: true,
},
};
output.publicPath
• Type:
• function
• string
When `target` is set to `web` or `web-worker`, `output.publicPath` defaults to `'auto'`; see the relevant guide for its use cases.
This is an important option when using on-demand loading or loading external resources such as images and files. If an incorrect value is specified, you'll receive 404 errors while loading these resources.
This option specifies the public URL of the output directory when referenced in a browser. A relative URL is resolved relative to the HTML page (or `<base>` tag). Server-relative URLs, protocol-relative URLs, or absolute URLs are also possible and sometimes required, e.g. when hosting assets on a CDN.
The value of this option is prefixed to every URL created by the runtime or loaders. Because of this, the value of this option ends with `/` in most cases.
Rule of thumb: the URL of your `output.path` from the view of the HTML page.
webpack.config.js
const path = require('path');
module.exports = {
//...
output: {
path: path.resolve(__dirname, 'public/assets'),
publicPath: 'https://cdn.example.com/assets/',
},
};
And with this configuration:
webpack.config.js
module.exports = {
//...
output: {
publicPath: '/assets/',
chunkFilename: '[id].chunk.js',
},
};
A request to a chunk will look like `/assets/4.chunk.js`.
A loader outputting HTML might emit something like:
<link href="/assets/spinner.gif" />
Or when loading an image in CSS:
background-image: url(/assets/spinner.gif);
webpack-dev-server also takes a hint from `publicPath`, using it to determine where to serve the output files from.
Note that `[fullhash]` in this parameter will be replaced with the hash of the compilation. See the Caching guide for details.
Examples:
module.exports = {
//...
output: {
// One of the below
publicPath: 'auto', // It automatically determines the public path from either `import.meta.url`, `document.currentScript`, `<script />` or `self.location`.
publicPath: 'https://cdn.example.com/assets/', // CDN (always HTTPS)
publicPath: '//cdn.example.com/assets/', // CDN (same protocol)
publicPath: '/assets/', // server-relative
publicPath: 'assets/', // relative to the HTML page
publicPath: '../assets/', // relative to the HTML page
publicPath: '', // relative to the HTML page (same directory)
},
};
In cases where the eventual `publicPath` of output files isn't known at compile time, it can be left blank and set dynamically at runtime in the entry file using the free variable `__webpack_public_path__`:
__webpack_public_path__ = myRuntimePublicPath;
// the rest of your application entry
See this discussion for more information on `__webpack_public_path__`.
output.scriptType
string: 'module' | 'text/javascript' boolean = false
This option allows loading asynchronous chunks with a custom script type, such as `<script type="module" ...>`.
module.exports = {
//...
output: {
scriptType: 'module',
},
};
output.sourceMapFilename
string = '[file].map[query]'
Only used when `devtool` is set to `'source-map'`, which writes an output file to disk.
The `[name]`, `[id]`, `[hash]` and `[chunkhash]` substitutions from `output.filename` can be used. In addition to those, you can use substitutions listed under Template strings at the Filename level.
output.sourcePrefix
string = ''
Change the prefix of each line in the output bundle.
webpack.config.js
module.exports = {
//...
output: {
sourcePrefix: '\t',
},
};
output.strictModuleErrorHandling
Handle errors during module loading according to the ES Module spec, at a performance cost.
• Type: boolean
• Available: 5.25.0+
module.exports = {
//...
output: {
strictModuleErrorHandling: true,
},
};
output.strictModuleExceptionHandling
boolean = false
Tell webpack to remove a module from the module instance cache (`require.cache`) if it throws an exception when it is `require`d.
It defaults to `false` for performance reasons.
When set to `false`, the module is not removed from the cache, which results in the exception being thrown only on the first `require` call (making it incompatible with node.js).
For instance, consider `module.js`:
throw new Error('error');
With `strictModuleExceptionHandling` set to `false`, only the first `require` throws an exception:
// with strictModuleExceptionHandling = false
require('module'); // <- throws
require('module'); // <- doesn't throw
Instead, with `strictModuleExceptionHandling` set to `true`, all `require`s of this module throw an exception:
// with strictModuleExceptionHandling = true
require('module'); // <- throws
require('module'); // <- also throws
output.trustedTypes
boolean = false string object
5.37.0+
Controls Trusted Types compatibility. When enabled, webpack will detect Trusted Types support and, if supported, use Trusted Types policies to create the script URLs it loads dynamically. Used when the application runs under the `require-trusted-types-for` Content Security Policy directive.
The default is `false` (no compatibility; script URLs are strings).
• When set to `true`, webpack will use `output.uniqueName` as the Trusted Types policy name.
• When set to a non-empty string, its value will be used as the policy name.
• When set to an object, the policy name is taken from the object's `policyName` property.
webpack.config.js
module.exports = {
//...
output: {
//...
trustedTypes: {
policyName: 'my-application#webpack',
},
},
};
output.umdNamedDefine
boolean
When using `libraryTarget: "umd"`, setting `output.umdNamedDefine` to `true` will name the AMD module of the UMD build. Otherwise, an anonymous `define` is used.
module.exports = {
//...
output: {
umdNamedDefine: true,
},
};
output.uniqueName
string
A unique name for this webpack build to avoid conflicts between multiple webpack runtimes in the global environment. It defaults to the `output.library` name or the package name from `package.json` in the context; if neither exists, it defaults to `''`.
`output.uniqueName` will be used to generate unique globals:
webpack.config.js
module.exports = {
// ...
output: {
uniqueName: 'my-package-xyz',
},
};
output.wasmLoading
boolean = false string
This option sets the method for loading WebAssembly modules. The methods available by default are `'fetch'` (web/WebWorker) and `'async-node'` (Node.js); plugins may provide additional methods.
The default value depends on the `target`:
• If `target` is set to `'web'`, `'webworker'`, `'electron-renderer'` or `'node-webkit'`, the default is `'fetch'`.
• If `target` is set to `'node'`, `'async-node'`, `'electron-main'` or `'electron-preload'`, the default is `'async-node'`.
module.exports = {
//...
output: {
wasmLoading: 'fetch',
},
};
output.workerChunkLoading
string: 'require' | 'import-scripts' | 'async-node' | 'import' | 'universal' boolean: false
The new option `workerChunkLoading` controls chunk loading of workers.
webpack.config.js
module.exports = {
//...
output: {
workerChunkLoading: false,
},
};
Koa Tutorial 2: User Registration, Joi Validation, and Returning Success Messages
User Registration
With the user table built in the previous part, we obviously need to register users, so first we need to create an API route.
Create a user.js file under the /app/router/v1 directory in the project root.
user.js:
const Router = require("koa-router");
const router = new Router({ prefix: "/v1/user" });
const { User } = require(`${process.cwd()}/models/user`);
// register a user
router.post("/register", async (ctx, next) => {
});
module.exports = router;
Note: remember to remove the require("./models/user") line added to app.js in the previous part; it is no longer needed there, since we require the model here.
Suppose the client sends the following data (ctx.body):
{
email: "[email protected]",
password1: "123456",
password2: "123456",
nickname: "木灵鱼儿"
}
We now need to validate these parameters.
Parameter Validation
We can read the data the client sent through ctx.body. Before validating, we need to install Joi:
yarn add joi --dev
How does Joi validate data?
First we create a validation schema A with Joi.object(), then call schema A's validateAsync method and pass in ctx.body; Joi then validates the parameters against our rules and throws an Error object when validation fails.
On success, it returns the validated parameters. Note that the parameters can be transformed during validation: for example, if a parameter has leading or trailing whitespace, Joi can strip it before we store the value in the database.
By default Joi creates its own Error object, which would eventually be caught by our global error handler, but that Error does not carry the fields we need. So we need to make Joi throw our custom error object instead; fortunately, this behavior is controllable.
Creating an Error Object for Validation Failures
http-error.js:
// base error object
class HttpError extends Error {
constructor(msg = "Server error", errorCode = 10000, status = 400) {
super();
this.errorCode = errorCode;
this.status = status;
this.msg = msg;
}
};
// error object for failed validation
class ValidateError extends HttpError {
constructor(msg, errorCode) {
super();
this.errorCode = errorCode || 10001;
this.status = 200;
this.msg = msg || "Parameter validation failed!";
}
};
module.exports = {
HttpError,
ValidateError,
};
ValidateError is the error object we will use when parameter validation fails; its status is 200, so the request itself is successful.
Creating the User Validation Method
Under the utils directory, create a validate directory to hold our validation methods, and create a user.js file inside it.
user.js:
const Joi = require("joi");
const { ValidateError } = require(`${process.cwd()}/core/http-error`);
const { User } = require(`${process.cwd()}/models/user`);
// check whether the same email already exists in the database
const isExistSameEmail = async (email) => {
const user = await User.findOne({
where: { email }
});
// findOne returns null when nothing is found
if (user) throw new ValidateError("Email already exists!");
return email; // the return value can be transformed here
};
// register an account
const registerValidate = (data) => {
const joiObject = Joi.object({
// the domain after @ must have at least 2 segments, e.g. example.com
email: Joi.string().email({ minDomainSegments: 2 })
.required().external(isExistSameEmail, "check for a duplicate email")
.error(new ValidateError("Invalid email!")),
// password is required, 6-32 characters
password1: Joi.string().trim(true).min(6).max(32).required()
.error(new ValidateError("Password must be 6-32 characters!")),
// confirm the password
password2: Joi.string().valid(Joi.ref("password1"))
.error(new ValidateError("The two passwords must match!")),
// nickname, at least 2 characters, no whitespace
nickname: Joi.string().trim(true).pattern(new RegExp("^[^\\s]+$")).min(2)
.error(new ValidateError("Nickname must be at least 2 characters and contain no spaces!")),
});
return joiObject.validateAsync(data);
};
module.exports = {
registerValidate
}
For detailed usage, see the Joi documentation at https://joi.dev/api/?v=17.4.0 (version 17 is the latest at the time of writing).
trim(true) strips leading and trailing whitespace from the parameter.
external defines a custom asynchronous validation, so it is scheduled to run last; if you transform the parameter inside external, the other validators won't see the change. Because it is asynchronous, we can make async calls inside it, such as the database query above, which throws an error if the same email already exists.
external takes two arguments: the function to invoke and a description string.
For simpler validations that can be done synchronously, we can use custom instead. Its usage is much like external, also a function plus a description, except the function must not use asynchronous methods.
Running the Validation
user.js:
const Router = require("koa-router");
const router = new Router({ prefix: "/v1/user" });
const { User } = require(`${process.cwd()}/models/user`);
const { registerValidate } = require(`${process.cwd()}/utils/validate/user`);
// register a user
router.post("/register", async (ctx, next) => {
// validate
const { email, password1: password, nickname } = await registerValidate(ctx.request.body);
// validation passed
console.log("validation passed")
});
module.exports = router;
When execution gets past this point, validation has passed and we can write the content to the database; writing requires the corresponding model, which we already require at the top of the file.
const Router = require("koa-router");
const router = new Router({ prefix: "/v1/user" });
const { User } = require(`${process.cwd()}/models/user`);
const { registerValidate } = require(`${process.cwd()}/utils/validate/user`);
// register a user (note: require User only once; a duplicate declaration would throw)
router.post("/register", async (ctx, next) => {
// validate
const { email, password1: password, nickname } = await registerValidate(ctx.request.body);
// validation passed; insert into the database
await User.create({ email, password, nickname });
});
module.exports = router;
After writing the data, we still need to return a success message to the front end.
Returning Success Messages
Many people might write the response like this:
ctx.body = {
msg: xxxx,
....
}
Writing this for every endpoint is tiresome: the object shape is always the same, and so are some of the returned status values.
Instead, we can create an 'error' object that represents success, so that it is caught by the global error handler and returned to the front end.
And so:
http-error.js:
// base error object
class HttpError extends Error {
constructor(msg = "Server error", errorCode = 10000, status = 400) {
super();
this.errorCode = errorCode;
this.status = status;
this.msg = msg;
}
};
// error object for failed validation
class ValidateError extends HttpError {
constructor(msg, errorCode) {
super();
this.errorCode = errorCode || 10001;
this.status = 200;
this.msg = msg || "Parameter validation failed!";
}
};
// 'error' object for success responses
class Success extends HttpError {
constructor(msg, errorCode) {
super();
this.errorCode = errorCode || 2000;
this.status = 201;
this.msg = msg || "Success!"; // the original copied the validation-failure message here by mistake
}
}
module.exports = {
HttpError,
ValidateError,
Success
};
Use it in the route:
const Router = require("koa-router");
const router = new Router({ prefix: "/v1/user" });
const { User } = require(`${process.cwd()}/models/user`);
const { registerValidate } = require(`${process.cwd()}/utils/validate/user`);
const { Success } = require(`${process.cwd()}/core/http-error`);
// register a user
router.post("/register", async (ctx, next) => {
// validate
const { email, password1: password, nickname } = await registerValidate(ctx.request.body);
// validation passed; insert into the database
await User.create({ email, password, nickname });
throw new Success("Account created successfully!");
});
module.exports = router;
In my earlier question at this link, I asked how to use minicom in Linux to communicate between two XBees in API mode, and got an answer. When I tried the same with XCTU on Windows on one side and AVR + XBee on the other side, I was able to communicate between the two XBees.
On the AVR side, I had written this code to receive the characters and store them in a string.
#include <yuktix.h>
#include <debug.h>
#include <uart2560.h>
#include <util/delay.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <debug.h>
/*for sensor readings*/
#include <AVRI2C.h>
#include <spi.h>
//char Id;
//char cmd;
static char ch;
static char *frame;
static uint8_t index = 0;
int main(){
init_processor();
uart_init(0,B9600);
uart_init(1,B9600);
uart_init(2,B9600);
for(int j = 0; j < 1000 ; j++)
_delay_ms(1);
uart_puts(0, " \r \nstarting Xbee Test \r \n");
while(1){
while(uart_available(2) > 0){
KDEBUG("Inside uart_available");
KDEBUG("\n");
ch = uart_getc(2);
uart_putc(0,ch);
KDEBUG("Saving it in frame \n");
*frame++ = ch;
KDEBUG("saved in frame \n");
//if(ch == '\\');
}
KDEBUG("COming out of While loop");
*frame = '\0';
KDEBUG(frame);
}
}
As per the answer to my last question, I am receiving chars on the AVR side. Every XBee API-mode frame starts with 7E, which is equivalent to '~', and I am receiving it, but some of the chars are unreadable while the data I send in the API frame is completely readable. My questions are:
1. How can I store the received API frame in a string? My code doesn't seem to work.
2. Should I convert the string into hex via some binary-to-hex code and then parse it? I need to know which address the data is coming from, what kind of frame it is, and the number of bytes in the frame, so that after excluding the source address I can extract the data.
3. Am I missing anything? Data on the AVR side will always be binary, but how can I process it?
• You define a char pointer called "frame", but don't initialize it to point anywhere, nor do you reserve any storage to save your received data. – Peter Bennett Aug 22 '15 at 15:47
• Another way could be to initialize a char array frame[100]; and then store frame[index++] = ch;? This was also not working. I am not good with coding. Can you help? – srj0408 Aug 24 '15 at 12:59
1) You need to initialize frame as an array whose length is the maximum frame length you may receive. Right now you are only making a pointer that points to one place, then writing bytes after that location in RAM, which may cause errors. If you didn't see errors, you only got lucky!!
2) Numbers are numbers: 1 is the same as (0b00000001) in binary and the same as (0x01) in hex. You only need to compare however you want. For example, to compare the start byte you could do one of the following:
if(frame[0] == 0x7E){
// Here you are comparing with hexadecimal
}
if(frame[0] == 126){
// Here you are comparing with decimal
}
if(frame[0] == 0b01111110){
// Here you are comparing with binary
}
Note that all of the conditions above produce the same machine code on AVR; you are only choosing how to represent the number to yourself.
I also have some notes on that code:
1) Why do you add the static keyword to ch, *frame, and index? It has no meaning at all here.
2) frame is better declared as unsigned char or uint8_t, since plain char is signed. Initialize it as an array, like uint8_t frame[10]; a combined sketch follows below.
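Putting the two notes together, a minimal sketch of the receive loop could look like the following. This is illustrative rather than tested: FRAME_MAX is an arbitrary placeholder, and uart_available/uart_getc are the helpers from the question's own code.
#include <stdint.h>

#define FRAME_MAX 64 /* hypothetical maximum frame length */

static uint8_t frame[FRAME_MAX]; /* real storage instead of an uninitialized pointer */
static uint8_t idx;

void poll_xbee(void)
{
    while (uart_available(2) > 0) {
        uint8_t ch = uart_getc(2);
        if (ch == 0x7E) /* every API frame starts with 0x7E ('~') */
            idx = 0;    /* restart the buffer on a new frame */
        if (idx < FRAME_MAX)
            frame[idx++] = ch; /* store the raw binary byte as-is */
    }
}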
Creating a XILO-Based Zap
Prior to following these steps, ensure you have been invited to use XILO within Zapier. If you are not sure whether you have been invited, please reach out to [email protected]
1. Click 'Create Zap'
2. Set up your Trigger
-Enter "XILO" in the Search apps field and select XILO (1.0.3)
-Select "New Form V2 Submission" under Trigger event and click continue
-Enter your XILO ID. This 6-digit number is found at the top right of your screen when logged into XILO
-Select the form you wish to transfer lead information from
-Click Continue
3. Test Trigger
4. Once the trigger has been tested, click Continue. If the test fails, click 'Skip Test'. In most cases of a failed test, data is still being pulled into Zapier and can be seen in the next step.
5. Set up the action that should occur when this form is completed. Here you will need to search for the app which you are looking to transfer data into and follow the remaining steps to map that information into your selected apps account. If you are unable to see the form fields in this step, reach out to us at [email protected]
6. Once the action setup is complete, turn on the Zap so that it is triggered on your next lead submission on that form
Bubble Sort, Upgraded: Quicksort
Quicksort
The quicksort algorithm was designed by Turing Award winner Tony Hoare and is listed among the ten greatest algorithms of the 20th century. The reason we call quicksort an upgrade of bubble sort, the sort we have always considered the slowest, is that both belong to the exchange-sort family: both sort by repeatedly comparing elements and swapping them.
The basic idea of quicksort is this: one pass of partitioning splits the elements to be sorted into two independent parts, such that every element in one part is smaller than every element in the other; the two parts are then recursively split by the same rule until the whole sequence is in order. From this guiding idea we can write the main quicksort function:
void quickSort(int arr[], int low, int high)
{
if (low < high) { // recursion stops when low >= high
int pivot;
pivot = partition(arr, low, high); // call the partition function to split the sequence in two; pivot is the pivot element's index
quickSort(arr, low, pivot - 1); // recursively partition the left part
quickSort(arr, pivot + 1, high); // recursively partition the right part
}
}
Written first from this macro view, the function is not very complex. The basic approach is to use a partition function to split the sequence to be sorted in two, take the returned pivot index, then recursively partition the subsequence to the left of the pivot, and do the same on the right. The key is implementing the partition function:
// partition function
int partition(int arr[], int low, int high)
{
int temp;
int pivotKey;
pivotKey = arr[low]; // start with the element at low as the pivot
while (low < high) {
while (low < high && arr[high] >= pivotKey) {
high--;
}
temp = arr[low];
arr[low] = arr[high];
arr[high] = temp;
// at this point, the element at index high is pivotKey (the pivot)
// now we need to partition from the low end
while (low < high && arr[low] <= pivotKey) {
low++;
}
temp = arr[low];
arr[low] = arr[high];
arr[high] = temp;
// afterwards, pivotKey (the pivot) is back at the element at index low
}
return low;
}
Reading the partition function carefully, it is not very complex either. The key to the swapping is to compare the pivot element with the other elements one by one and exchange positions. Suppose the sequence to be sorted is {50, 10, 20, 60, 40, 30, 70}; at the start, pivotKey is set to the element at index low, i.e. 50, as shown below:
(figure: quicksort partition, initial state)
Next, the first part of the swap code in the partition function is executed:
while (low < high && arr[high] >= pivotKey) {
high--;
}
temp = arr[low];
arr[low] = arr[high];
arr[high] = temp;
// at this point, the element at index high is pivotKey (the pivot)
After this runs, the high index points at pivotKey, as shown:
(figure: quicksort partition, after the first swap)
Now we need to compare and partition from the low index, i.e. execute the second part of the partition code:
while (low < high && arr[low] <= pivotKey) {
low++;
}
temp = arr[low];
arr[low] = arr[high];
arr[high] = temp;
// afterwards, pivotKey (the pivot) is back at the element at index low
After this partition step, pivotKey is back at the element pointed to by low, as shown:
(figure: quicksort partition, after the second swap)
The outer while loop repeats in this way until low >= high, and the whole partitioning pass is complete.
Time and Space Complexity
In the best case, when the partition function happens to pick a middle value as the pivot every time, the time complexity is O(nlogn); in the worst case, when the sequence to be sorted is already in ascending or descending order, it is O(n^2). Taken together, the average time complexity is O(nlogn).
Common time complexities from low order to high order: O(1) < O(logn) < O(n) < O(nlogn) < O(n^2)
As for space complexity, the cost is mainly the recursive call stack; the average-case recursion stack is O(logn).
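To see where the O(nlogn) comes from: with balanced partitions the running time satisfies the recurrence T(n) = 2T(n/2) + O(n), which unrolls into roughly logn levels with O(n) work per level, giving O(nlogn). In the worst case each partition peels off only a single element, so T(n) = T(n-1) + O(n), which sums to O(n^2); and since the recursion stack depth matches the number of levels, the average stack usage is O(logn).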
Optimizing Quicksort
1: Optimize the partition function
For the partition function, if the pivot pivotKey we pick each time tends toward the middle value, the complexity tends toward the best case. The crux of the problem is therefore picking that pivot. The usual approach is median-of-three (with a suitably larger sample if the sequence is large): take the median of the leftmost, rightmost, and middle elements of the sequence as the pivot (three random elements would also work, but note that generating random numbers has its own cost, so left, middle, and right are the usual choice). This at least guarantees that the pivot is neither the smallest nor the largest value of the whole sequence. Using median-of-three as an example, the code is as follows:
int partition(int arr[], int low, int high)
{
int pivotKey;
int mid = low + (high - low) / 2;
if (arr[low] > arr[high]) {
swap(arr, low, high);
}
if (arr[mid] > arr[high]) {
swap(arr, mid, high);
}
if (arr[mid] > arr[low]) {
swap(arr, mid, low);
}
pivotKey = arr[low]; // start with the element at low as the pivot
while (low < high) {
while (low < high && arr[high] >= pivotKey) {
high--;
}
swap(arr, low, high);
// at this point, the element at index high is pivotKey (the pivot)
// now we need to partition from the low end
while (low < high && arr[low] <= pivotKey) {
low++;
}
swap(arr, low, high);
// afterwards, pivotKey (the pivot) is back at the element at index low
}
return low;
}
Because the code swaps elements in several places, a swap helper function has been extracted; its implementation is simple, so I won't paste it separately here (it does appear in the complete listing further down). In the example above, the sequence being sorted is small (7 elements in total), so pivotKey is chosen with median-of-three. If the sequence had 70 elements, we could use median-of-nine or an even larger sample to find a better pivotKey, but the sample should not be too large: even with median-of-nine we must first sort those 9 numbers, which is itself the cost of another sort.
2: Eliminate unnecessary element swaps
Look at the outer while loop in the partition function:
while (low < high) {
while (low < high && arr[high] >= pivotKey) {
high--;
}
swap(arr, low, high);
// at this point, the element at index high is pivotKey (the pivot)
// now we need to partition from the low end
while (low < high && arr[low] <= pivotKey) {
low++;
}
swap(arr, low, high);
// afterwards, pivotKey (the pivot) is back at the element at index low
}
The loop performs two element swaps, but we do not actually need to swap at all: after each pass the pivot lands back at index low anyway, so we only need to assign pivotKey back to index low once the loop ends, and the swap code in between can be replaced with plain assignments. The loop body then becomes:
while (low < high) {
while (low < high && arr[high] >= pivotKey) {
high--;
}
arr[low] = arr[high]; // changed to a direct assignment
while (low < high && arr[low] <= pivotKey) {
low++;
}
arr[high] = arr[low]; // changed to a direct assignment
}
arr[low] = pivotKey; // when the while loop ends, low and high meet; assign the pivot value to index low
Because the code now does far fewer data moves, performance improves as well.
3: Optimize the recursion
Recursion affects performance to some degree: every recursive call costs stack space, and the more parameters a function has, the higher the cost. Optimizing the recursion means optimizing the outer quickSort function. Here is quickSort before and after the optimization:
// before optimization
void quickSort(int arr[], int low, int high)
{
if (low < high) { // recursion stops when low >= high
int pivot;
pivot = partition(arr, low, high);
quickSort(arr, low, pivot - 1);
quickSort(arr, pivot + 1, high);
}
}
// after optimization
void quickSort1(int arr[], int low, int high)
{
while (low < high) { // changed to a while loop
int pivot;
pivot = partition(arr, low, high);
quickSort(arr, low, pivot - 1);
low = pivot + 1; // tail call: turn the double recursion into iteration
}
}
With this change, the original quickSort(arr, pivot + 1, high) call happens after the recursion on the low subsequence has completed, so iteration is used to reduce the stack depth and thereby improve performance.
At this point the optimizations specific to quicksort are essentially done. Here is the code as a whole:
void swap (int arr[], int i, int j)
{
if (i != j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
}
int partition(int arr[], int low, int high)
{
int pivotKey;
int mid = low + (high - low) / 2;
if (arr[low] > arr[high]) {
swap(arr, low, high);
}
if (arr[mid] > arr[high]) {
swap(arr, mid, high);
}
if (arr[mid] > arr[low]) {
swap(arr, mid, low);
}
pivotKey = arr[low]; // start with the element at low as the pivot
while (low < high) {
while (low < high && arr[high] >= pivotKey) {
high--;
}
arr[low] = arr[high];
while (low < high && arr[low] <= pivotKey) {
low++;
}
arr[high] = arr[low];
}
arr[low] = pivotKey; // low and high meet; assign the pivot value to index low
return low;
}
void quickSort(int arr[], int low, int high)
{
while (low < high) { // iteration stops when low >= high
int pivot;
pivot = partition(arr, low, high);
quickSort(arr, low, pivot - 1);
low = pivot + 1; // tail call: turn the double recursion into iteration
}
}
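To try the finished code, a small driver like the one below can be used. The driver is an addition of mine rather than part of the original post; it assumes the three functions above are in the same file.
#include <stdio.h>

int main(void)
{
    int arr[] = {50, 10, 20, 60, 40, 30, 70}; /* the sample sequence from the walkthrough */
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1); /* sort the whole array in place */
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]); /* expected: 10 20 30 40 50 60 70 */
    printf("\n");
    return 0;
}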
Finally, mix several sorting strategies to improve efficiency when there are few elements
The quicksort algorithm performs well on sequences with a large element count, but when the sequence is small, quicksort is actually less efficient than straight insertion sort. So we can pick a threshold (an element count) as a cut-off and choose a different sorting algorithm on each side of it, which requires reworking the quickSort function. The code below contains some pseudocode:
void quickSort2(int arr[], int low, int high)
{
if ((high - low) > MAX_Length_INSERT) { // MAX_Length_INSERT is some threshold
while (low < high) { // iteration stops when low >= high
int pivot;
pivot = partition(arr, low, high);
quickSort(arr, low, pivot - 1);
low = pivot + 1; // tail call: turn the double recursion into iteration
}
} else {
insertSort(arr + low, high - low + 1); // use straight insertion sort on the subrange starting at low
}
}
MAX_Length_INSERT is the cut-off threshold (some sources suggest 7, others 50): when the element count is greater than this value, quicksort is used; otherwise, straight insertion sort is used. This combines the strengths of the two sorting algorithms to improve overall sorting efficiency and performance.
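Since insertSort is left as pseudocode above, here is one straight insertion sort consistent with that call signature. It is a sketch added for completeness, not code from the original post:
/* straight insertion sort over arr[0..n-1] */
void insertSort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i]; /* the element to insert */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]; /* shift larger elements one slot right */
            j--;
        }
        arr[j + 1] = key; /* drop key into its position */
    }
}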
(End)
Can we have a Word?
Even though the adage is "A Picture is Worth a Thousand Words", sometimes it is useful to supplement a Visio drawing with text. For simple reports, Visio has a Reports feature that will produce tables of information, but occasionally I want something more. For example, Visio does not have a feature to compare what is different between two drawings, so I have created a routine that will generate a very verbose Word document (think Print ShapeSheet on steroids) that contains minute details of the drawing(s) and then used Word's Compare Document feature to highlight the changes.
In the past I have written directly to the Word object model from Visio, but this tended to be painfully slow and would occasionally die (or be comatose). This was to be expected because the verbose Print ShapeSheet could end up being over two hundred pages long.
The next option was to just write a plain text file, but I missed the readability of having a formatted document, even if the formatting was just headers and page breaks. So I would intersperse the text with markers that would indicate the headers and where the page breaks belonged. I could then let Word open the text file and then run a macro to format the headers, add page breaks and remove extraneous lines. I did find that I needed to save the Word document before running the macro because running the macro against a raw text file was extremely slow.
At the first Microsoft Visio summit, my fellow MVP Chris Roth gave a demonstration of how little XML code you needed to create a Visio drawing. I tried something similar with Word XML, but was never able to get an acceptable minimal set of XML tags. So creating an XML file that Word could consume was not a viable solution.
With Word 2007 and Excel 2007 came a new file format, a zipped directory of XML files. The OOXML format looked promising, but there were still way too many hoops to jump through to get a simple Word document.
Now with the release of the OOXML SDK 2.0, the actual creation of a Word document is relatively simple. So if you need a way to create a Word document from Visio, take a look at:
How to: Create a Word Processing Document by Providing a Filename http://msdn.microsoft.com/en-us/library/ff478190.aspx
How do you move the rootfs to flash?
Question asked by skstrobel on Jan 25, 2013
Latest reply on Jan 27, 2013 by Aaronwu
The Blackfin uClinux projects I have worked on have always loaded the kernel and rootfs into an initramfs or similar, then run everything from RAM. This has at least two drawbacks: it takes about 7 seconds for U-Boot to copy the combined image from SPI flash into RAM on startup (longer than we would like), and it uses RAM to store even files that aren't being used. So I am looking at moving the rootfs to JFFS2 on the SPI flash. It isn't very clear to me what steps are needed to make that work. It seems like they would include:
• Use U-Boot to store the kernel image in flash (vmImage? vmlinux? some other file?).
• Use U-Boot to store the JFFS2 rootfs to flash (rootfs.initframfs.gz?)
• Modify the target's board file to split the MTD partition into two pieces, one for the kernel and one for the rootfs (see the sketch after this list).
• Change the kernel args supplied by U-Boot to tell the kernel where to find the rootfs.
• Anything else?
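For the board-file step, the split usually comes down to editing the board's mtd_partition table. The sketch below is only illustrative; the partition names, sizes, and offsets are hypothetical, not taken from a real Blackfin board file:
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

/* Hypothetical layout for a 4 MB SPI flash */
static struct mtd_partition my_spi_flash_partitions[] = {
    {
        .name = "bootloader(spi)",
        .size = 0x00040000, /* 256 KB for U-Boot */
        .offset = 0,
        .mask_flags = MTD_WRITEABLE, /* keep the bootloader read-only */
    },
    {
        .name = "linux kernel(spi)",
        .size = 0x00200000, /* 2 MB for the kernel image */
        .offset = MTDPART_OFS_APPEND,
    },
    {
        .name = "rootfs(spi)", /* JFFS2 root filesystem */
        .size = MTDPART_SIZ_FULL, /* whatever is left */
        .offset = MTDPART_OFS_APPEND,
    },
};
If the numbering works out as usual, the matching kernel argument would presumably be something like root=/dev/mtdblock2 rootfstype=jffs2, but that is an assumption on my part, not a verified recipe.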
Looking through the online docs about fast booting from SPI flash, I noticed that the rc file sets up ramfs for /var and tmpfs for /tmp. Could someone point me to an explanation of why they use different filesystems?
Thanks for any pointers.
Steve