Binding a DDoS Native Protection Instance When Purchasing an Enhanced-Protection EIP
Last updated: 2024.01.26 20:49:21 | First published: 2024.01.26 20:49:21
When purchasing a public IP (EIP) instance, you can bind the EIP resource to an existing DDoS Native Protection instance to quickly add protection to it. This document describes how to purchase a DDoS Native Protection instance and bind a public IP to it through API calls.
Step 1: Purchase a Native Protection instance
Refer to PurchaseInstanceInOneStep (purchase a Native Protection instance) to purchase a DDoS Native Protection instance that matches your actual needs. Before purchasing, make sure your account balance is sufficient.
For example, the following request creates a DDoS Native Protection Enterprise edition instance in the North China 2 (Beijing) region.
POST https://open.volcengineapi.com/?Action=PurchaseInstanceInOneStep&Version=2022-01-01
X-Date: 20230711T035055Z
Authorization: HMAC-SHA256 Credential=AKLTNGU0OGEyMT***/20230711/cn-beijing/origin_defence/request, SignedHeaders=content-type;x-content-sha256;x-date, Signature=da4fac562d00f41ad8***
{
"DefBand": 300,
"IpNum": 10,
"BizBand": 100,
"ElaBand": 500,
"PurchaseNum": 1,
"MonthNum": 12,
"AutoRenew": "on",
"AccessMode": 2,
"BandwidthType": 1,
"Region": "cn-beijing",
"Spec": "Enterprise"
}
Step 2: Query the Native Protection instance ID
Because an EIP is bound through the ID of a DDoS Native Protection instance, you first need to obtain the ID of the target instance. You can call DescInstanceList (query the Native Protection instance list) to query the protection instances you have already created.
Request example
GET https://open.volcengineapi.com/?Action=DescInstanceList&Version=2022-01-01
X-Date: 20230711T035055Z
Authorization: HMAC-SHA256 Credential=AKLTNGU0OGEyMT***/20230711/cn-beijing/origin_defence/request, SignedHeaders=content-type;x-content-sha256;x-date, Signature=da4fac562d00f41ad8***
Response example
From the following response example, you can see that the instance named ddos-test1 has an ID of 20.
{
"ResponseMetadata": {
"RequestId": "202311162124068D795C2273F41FA0****",
"Action": "DescInstanceList",
"Version": "2022-01-01",
"Service": "origin_defence",
"Region": "cn-beijing"
},
"Result": {
"InsList": [
{
"Id": 20,
"AttackCount": 0,
"AutoRenew": true,
"DefBand": 300,
"ElaBand": 500,
"UsedIpNum": 2,
"IpNum": 10,
"BizBand": 50,
"Name": "ddos-test1",
"PeakAttackBand": 0,
"Status": 1,
"StartAt": 1681462614,
"StopAt": 1702569599,
"Region": "cn-beijing",
"Spec": "Enterprise",
"ElaBandStatus": 1,
"IsDemoted": false
}
]
},
"PageInfo": {
"TotalCount": 2,
"PageSize": 10,
"CurrentPage": 1,
"Count": 2
}
}
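If you are scripting this step, the instance ID can be extracted from the DescInstanceList response programmatically. Below is a minimal Python sketch, with a trimmed copy of the response hard-coded for illustration:

```python
def find_instance_id(response: dict, name: str):
    """Return the Id of the protection instance with the given Name, or None."""
    for ins in response.get("Result", {}).get("InsList", []):
        if ins.get("Name") == name:
            return ins.get("Id")
    return None


# Trimmed version of the DescInstanceList response shown above.
response = {
    "Result": {
        "InsList": [
            {"Id": 20, "Name": "ddos-test1", "Region": "cn-beijing"}
        ]
    }
}

print(find_instance_id(response, "ddos-test1"))  # -> 20
```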
Step 3: Bind the DDoS Native Protection instance when purchasing an EIP instance
If you want to bind the instance with ID 20 from Step 2 when purchasing an EIP instance, simply set the SecurityProtectionInstanceId parameter to 20 when applying for the public IP. For more details on purchasing EIPs, see the EIP API reference for AllocateEipAddress.
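As a sketch, the request body for AllocateEipAddress might look like the following. Only SecurityProtectionInstanceId is documented here; the other field is a hypothetical placeholder, so check the EIP API reference for the real parameter names:

```python
import json

# Sketch of an AllocateEipAddress request body. Only
# SecurityProtectionInstanceId comes from this document; "Name" is a
# hypothetical placeholder.
body = {
    "Name": "eip-with-native-protection",  # placeholder
    "SecurityProtectionInstanceId": 20,    # instance ID obtained in Step 2
}
print(json.dumps(body, indent=2))
```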
Step 4 (optional): View the attack event list for the protection instance
After the EIP has been bound to the Native Protection instance for some time, you can refer to DescAttackEvent (query the attack event list) to view attack event information for the related assets.
For example, the following request queries the attack event list for the protection instance with ID 44.
Request example
POST https://open.volcengineapi.com/?Action=DescAttackEvent&Version=2022-01-01
X-Date: 20230711T035055Z
Authorization: HMAC-SHA256 Credential=AKLTNGU0OGEyMT***/20230711/cn-beijing/origin_defence/request, SignedHeaders=content-type;x-content-sha256;x-date, Signature=da4fac562d00f41ad8***
{
"CurrPage": 1,
"PageSize": 10,
"InsId": 44,
"BeginTime": 1693560540,
"EndTime": 1698744600,
"IpList": [
"1.1.1.1"
]
}
Response example
{
"ResponseMetadata": {
"RequestId": "202311161734246BEE1DD4ED914596****",
"Action": "DescAttackEvent",
"Version": "2022-01-01",
"Service": "origin_defence",
"Region": "cn-beijing"
},
"Result": {
"AttackedIpNum": 1,
"TotalIpNum": 3,
"AttackCount": 3,
"Attacks": [
{
"Id": 606,
"AttackType": "SYN Flood",
"Ip": "1.1.1.1",
"Status": 2,
"PeakDropKbps": 80817,
"PeakDropPps": 152646,
"Duration": 4269,
"StartTime": 1693920180,
"EndTime": 1693924449
},
{
"Id": 604,
"AttackType": "SYN Flood",
"Ip": "1.1.1.1",
"Status": 2,
"PeakDropKbps": 32796,
"PeakDropPps": 61950,
"Duration": 1869,
"StartTime": 1693915020,
"EndTime": 1693916890
},
{
"Id": 603,
"AttackType": "SYN Flood",
"Ip": "1.1.1.1",
"Status": 2,
"PeakDropKbps": 32998,
"PeakDropPps": 62005,
"Duration": 14319,
"StartTime": 1693832970,
"EndTime": 1693847290
}
]
},
"PageInfo": {
"TotalCount": 3,
"PageSize": 10,
"CurrentPage": 1,
"Count": 3
}
}
Upgrading VyOS from Version 1.3 (Equuleus) to Version 1.4 (Sagitta): A Walkthrough of Issues and Resolutions
Linux May 20, 2023
I've been using VyOS 1.3 for some time now in my homelab, and it has been working great overall. Recently, though, I encountered some issues with BGP peer groups that required me to reboot VyOS on almost every configuration change. I found the related bug report: the issue is caused by the legacy code used in version 1.3 and has been fixed by the newer replacement Python code in version 1.4.
While new software versions usually bring advancements and improvements, sometimes they can introduce unanticipated changes that may cause issues with an existing configuration. This blog post documents my personal experiences and observations from the upgrade process, providing insights and solutions to issues encountered.
Interface-Route removed
The first issue I came across during the upgrade was the disappearance of static interface-routes. I previously used these to route my VPN traffic to my Wireguard interface.
After the upgrade, I noticed that my Wireguard VPN tunnel wasn't working and I found that my routes were missing.
It turned out that the syntax was changed and no migration to the newer syntax happened during the upgrade process.
The old syntax that I had looked like this:
interface-route 10.0.10.0/24 {
next-hop-interface wg0 {
}
}
This was altered in VyOS 1.4 (Sagitta) to the following new syntax:
route 10.0.10.0/24 {
interface wg0 {
}
}
Once I manually re-added my routes using the new syntax, the interface-route issue was successfully resolved.
I appreciate the change overall, as it feels more logical to me, but a proper migration would have been nice.
QoS / Traffic Shaping
While delving further into the upgraded system, I noticed that my DSL-OUT traffic policy was not functioning correctly. It seemed like the mere existence of the policy was reducing the bandwidth to about 1/20 of its original capacity. However, the LAN-OUT seemed to be unaffected and operating normally.
I had to make the following changes to the policies to make them work correctly again:
shaper DSL-OUT {
bandwidth 40mbit
default {
bandwidth 40mbit // Explicitly set, no longer a percentage
// burst 4k // Removed, default of 15k used instead to improve connection speed
codel-quantum 300
queue-limit 1000
queue-type fq-codel
// target 30ms // Removed, default of 5ms used to reduce throughput spikes
}
}
shaper LAN-OUT {
bandwidth 190mbit
default {
bandwidth 190mbit // Explicitly set, no longer a percentage
// burst 4k // Removed, default of 15k used instead to improve connection speed
codel-quantum 570
queue-limit 1000
queue-type fq-codel
// target 30ms // Removed, default of 5ms used to reduce throughput spikes
}
}
During this process, I encountered several issues:
1. The bandwidth 100% command I used previously triggered this error: WARNING: Interface speed cannot be determined (assuming 10 Mbit/s). Thus I'm now setting the bandwidth explicitly in the default block.
2. The burst 4k configuration was limiting the connection speed. The default is 15k and seems to work better with the new version. (Also recommended for my speed in the VyOS docs)
3. The target 30ms configuration also caused peculiar throughput spikes. The default setting in VyOS 1.4 is 5ms, which seems to work better. (VyOS recommends raising it to a maximum of 15ms only for speeds below 3 Mbit/s.)
After adjusting these parameters, the QoS policies started functioning correctly.
LAN Interface missing
On my second VyOS installation on my Hetzner Server, VyOS seemed to have forgotten the configuration of my LAN interface.
I re-added it exactly like in the old config and it worked again.
Dynamic DNS
The last issue I encountered was with the Dynamic DNS service, which stopped updating after the upgrade. Here's what the old configuration looked like:
interface pppoe0 {
service cloudflare {
host-name dyn.jsteuernagel.de
login [email protected]
password xxx
protocol cloudflare
zone jsteuernagel.de
}
use-web {
skip "Current IP Address: "
url http://checkip.dyndns.com/
}
}
To fix this issue, I had to remove the "use-web" part, as it seems to be unnecessary in VyOS 1.4.
I had previously used it due to a bug in VyOS 1.3, where it couldn't pick up the IP address of the pppoe0 interface directly.
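For reference, here's what the working configuration looks like in VyOS 1.4 — the same as before, just without the use-web block (a sketch based on my old config above; your zone and credentials will obviously differ):

```
interface pppoe0 {
    service cloudflare {
        host-name dyn.jsteuernagel.de
        login [email protected]
        password xxx
        protocol cloudflare
        zone jsteuernagel.de
    }
}
```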
Conclusion
While the upgrade process to VyOS 1.4 was somewhat challenging due to the above-mentioned issues, troubleshooting and resolving them helped deepen my understanding of the changes in the new version. It was a learning experience that I hope others can benefit from. As with all network changes, remember to have a backup of the existing configuration, as otherwise spotting the things that didn't migrate properly wouldn't be as easy.
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from fontTools.cu2qu import curve_to_quadratic
from fontTools.pens.basePen import AbstractPen, decomposeSuperBezierSegment
from fontTools.pens.reverseContourPen import ReverseContourPen
from fontTools.pens.pointPen import BasePointToSegmentPen
from fontTools.pens.pointPen import ReverseContourPointPen
class Cu2QuPen(AbstractPen):
""" A filter pen to convert cubic bezier curves to quadratic b-splines
using the FontTools SegmentPen protocol.
Args:
other_pen: another SegmentPen used to draw the transformed outline.
max_err: maximum approximation error in font units. For optimal results,
if you know the UPEM of the font, we recommend setting this to a
value equal, or close to UPEM / 1000.
reverse_direction: flip the contours' direction but keep starting point.
stats: a dictionary counting the point numbers of quadratic segments.
ignore_single_points: don't emit contours containing only a single point
NOTE: The "ignore_single_points" argument is deprecated since v1.3.0,
which dropped RoboFab support. It's no longer needed to special-case
UFO2-style anchors (aka "named points") when using ufoLib >= 2.0,
as these are no longer drawn onto pens as single-point contours,
but are handled separately as anchors.
"""
def __init__(self, other_pen, max_err, reverse_direction=False,
stats=None, ignore_single_points=False):
if reverse_direction:
self.pen = ReverseContourPen(other_pen)
else:
self.pen = other_pen
self.max_err = max_err
self.stats = stats
if ignore_single_points:
import warnings
warnings.warn("ignore_single_points is deprecated and "
"will be removed in future versions",
UserWarning, stacklevel=2)
self.ignore_single_points = ignore_single_points
self.start_pt = None
self.current_pt = None
def _check_contour_is_open(self):
if self.current_pt is None:
raise AssertionError("moveTo is required")
def _check_contour_is_closed(self):
if self.current_pt is not None:
raise AssertionError("closePath or endPath is required")
def _add_moveTo(self):
if self.start_pt is not None:
self.pen.moveTo(self.start_pt)
self.start_pt = None
def moveTo(self, pt):
self._check_contour_is_closed()
self.start_pt = self.current_pt = pt
if not self.ignore_single_points:
self._add_moveTo()
def lineTo(self, pt):
self._check_contour_is_open()
self._add_moveTo()
self.pen.lineTo(pt)
self.current_pt = pt
def qCurveTo(self, *points):
self._check_contour_is_open()
n = len(points)
if n == 1:
self.lineTo(points[0])
elif n > 1:
self._add_moveTo()
self.pen.qCurveTo(*points)
self.current_pt = points[-1]
else:
raise AssertionError("illegal qcurve segment point count: %d" % n)
def _curve_to_quadratic(self, pt1, pt2, pt3):
curve = (self.current_pt, pt1, pt2, pt3)
quadratic = curve_to_quadratic(curve, self.max_err)
if self.stats is not None:
n = str(len(quadratic) - 2)
self.stats[n] = self.stats.get(n, 0) + 1
self.qCurveTo(*quadratic[1:])
def curveTo(self, *points):
self._check_contour_is_open()
n = len(points)
if n == 3:
# this is the most common case, so we special-case it
self._curve_to_quadratic(*points)
elif n > 3:
for segment in decomposeSuperBezierSegment(points):
self._curve_to_quadratic(*segment)
elif n == 2:
self.qCurveTo(*points)
elif n == 1:
self.lineTo(points[0])
else:
raise AssertionError("illegal curve segment point count: %d" % n)
def closePath(self):
self._check_contour_is_open()
if self.start_pt is None:
# if 'start_pt' is _not_ None, we are ignoring single-point paths
self.pen.closePath()
self.current_pt = self.start_pt = None
def endPath(self):
self._check_contour_is_open()
if self.start_pt is None:
self.pen.endPath()
self.current_pt = self.start_pt = None
def addComponent(self, glyphName, transformation):
self._check_contour_is_closed()
self.pen.addComponent(glyphName, transformation)
class Cu2QuPointPen(BasePointToSegmentPen):
""" A filter pen to convert cubic bezier curves to quadratic b-splines
using the RoboFab PointPen protocol.
Args:
other_point_pen: another PointPen used to draw the transformed outline.
max_err: maximum approximation error in font units. For optimal results,
if you know the UPEM of the font, we recommend setting this to a
value equal, or close to UPEM / 1000.
reverse_direction: reverse the winding direction of all contours.
stats: a dictionary counting the point numbers of quadratic segments.
"""
def __init__(self, other_point_pen, max_err, reverse_direction=False,
stats=None):
BasePointToSegmentPen.__init__(self)
if reverse_direction:
self.pen = ReverseContourPointPen(other_point_pen)
else:
self.pen = other_point_pen
self.max_err = max_err
self.stats = stats
def _flushContour(self, segments):
assert len(segments) >= 1
closed = segments[0][0] != "move"
new_segments = []
prev_points = segments[-1][1]
prev_on_curve = prev_points[-1][0]
for segment_type, points in segments:
if segment_type == 'curve':
for sub_points in self._split_super_bezier_segments(points):
on_curve, smooth, name, kwargs = sub_points[-1]
bcp1, bcp2 = sub_points[0][0], sub_points[1][0]
cubic = [prev_on_curve, bcp1, bcp2, on_curve]
quad = curve_to_quadratic(cubic, self.max_err)
if self.stats is not None:
n = str(len(quad) - 2)
self.stats[n] = self.stats.get(n, 0) + 1
new_points = [(pt, False, None, {}) for pt in quad[1:-1]]
new_points.append((on_curve, smooth, name, kwargs))
new_segments.append(["qcurve", new_points])
prev_on_curve = sub_points[-1][0]
else:
new_segments.append([segment_type, points])
prev_on_curve = points[-1][0]
if closed:
# the BasePointToSegmentPen.endPath method that calls _flushContour
# rotates the point list of closed contours so that they end with
# the first on-curve point. We restore the original starting point.
new_segments = new_segments[-1:] + new_segments[:-1]
self._drawPoints(new_segments)
def _split_super_bezier_segments(self, points):
sub_segments = []
# n is the number of control points
n = len(points) - 1
if n == 2:
# a simple bezier curve segment
sub_segments.append(points)
elif n > 2:
# a "super" bezier; decompose it
on_curve, smooth, name, kwargs = points[-1]
num_sub_segments = n - 1
for i, sub_points in enumerate(decomposeSuperBezierSegment([
pt for pt, _, _, _ in points])):
new_segment = []
for point in sub_points[:-1]:
new_segment.append((point, False, None, {}))
if i == (num_sub_segments - 1):
# the last on-curve keeps its original attributes
new_segment.append((on_curve, smooth, name, kwargs))
else:
# on-curves of sub-segments are always "smooth"
new_segment.append((sub_points[-1], True, None, {}))
sub_segments.append(new_segment)
else:
raise AssertionError(
"expected 2 control points, found: %d" % n)
return sub_segments
def _drawPoints(self, segments):
pen = self.pen
pen.beginPath()
last_offcurves = []
for i, (segment_type, points) in enumerate(segments):
if segment_type in ("move", "line"):
assert len(points) == 1, (
"illegal line segment point count: %d" % len(points))
pt, smooth, name, kwargs = points[0]
pen.addPoint(pt, segment_type, smooth, name, **kwargs)
elif segment_type == "qcurve":
assert len(points) >= 2, (
"illegal qcurve segment point count: %d" % len(points))
offcurves = points[:-1]
if offcurves:
if i == 0:
# any off-curve points preceding the first on-curve
# will be appended at the end of the contour
last_offcurves = offcurves
else:
for (pt, smooth, name, kwargs) in offcurves:
pen.addPoint(pt, None, smooth, name, **kwargs)
pt, smooth, name, kwargs = points[-1]
if pt is None:
# special quadratic contour with no on-curve points:
# we need to skip the "None" point. See also the Pen
# protocol's qCurveTo() method and fontTools.pens.basePen
pass
else:
pen.addPoint(pt, segment_type, smooth, name, **kwargs)
else:
# 'curve' segments must have been converted to 'qcurve' by now
raise AssertionError(
"unexpected segment type: %r" % segment_type)
for (pt, smooth, name, kwargs) in last_offcurves:
pen.addPoint(pt, None, smooth, name, **kwargs)
pen.endPath()
def addComponent(self, baseGlyphName, transformation):
assert self.currentPath is None
self.pen.addComponent(baseGlyphName, transformation)
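To make the stats bookkeeping used by both pens concrete: each converted curve is counted by the number of off-curve points in the resulting quadratic spline (len(quadratic) - 2, keyed as a string). A standalone sketch of that counting, independent of fontTools:

```python
def record_stats(stats: dict, quadratic: list) -> None:
    """Count a converted segment by its number of off-curve points,
    mirroring the bookkeeping in Cu2QuPen._curve_to_quadratic."""
    n = str(len(quadratic) - 2)
    stats[n] = stats.get(n, 0) + 1


stats = {}
# e.g. three converted curves: two with 3 off-curve points, one with 2
for quad in [[0] * 5, [0] * 5, [0] * 4]:
    record_stats(stats, quad)
print(stats)  # -> {'3': 2, '2': 1}
```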
Does making money with torrents and ppd still work?
Discussion in 'Torrents' started by niceguy36, Jun 13, 2014.
1. niceguy36
niceguy36 Junior Member
Joined:
Jun 13, 2014
Messages:
183
Likes Received:
94
I used torrent password method a year ago, Does this still work?
2. beasty513
beasty513 Junior Member
Joined:
Jan 27, 2014
Messages:
119
Likes Received:
43
Location:
In Gordon Gekko's office...
Home Page:
The answer is: it depends on what torrent site you upload to.
Some of them do not allow password-protected files.
3. CPAManager
CPAManager Newbie
Joined:
Jun 17, 2014
Messages:
5
Likes Received:
0
Yeah, it's hard on most torrent sites. I tried it a long time ago, and my torrents almost got instantly removed. Try to find a list of torrent sites, try each one of them, and see what happens.
4. EbayMeanNowIGetSmart
EbayMeanNowIGetSmart Newbie
Joined:
Jun 23, 2014
Messages:
21
Likes Received:
4
xbox media center (xbmc) is blowing up on ebay, everybody is streaming ppv movies and shows for free, it is only going to get bigger and bigger
if I was gonna try what you guys are talking about then I would not do movies
pc programs or video games might be better, I only go to piratebay or kickass, I remember the good old days of Demonoid
isohunt and mininova went from having some valid downloads to 100% just the torrent password method
only beginner torrent hunters would ever click on anything over there
I can say that a long time ago seemed like torrent password methods could slip in undetected for awhile in valid torrent sites but the past few years I never see it anymore unless I try the other 99% of torrent sites that only computer illiterate people would ever go to
it is not a mix of torrent password method and valid torrents anymore it is just either a real torrent site or a fake torrent web site with nothing but fake torrents like torrent password method
no more mix just one or the other
AbstractResource Members
Spring.NET 1.3.0 RC1 for .NET 2.0 API Reference
AbstractResource overview
Protected Static (Shared) Fields
DefaultBasePathPlaceHolder The default special character that denotes the base (home, or root) path.
Protected Static (Shared) Methods
GetResourceNameWithoutProtocol Strips any protocol name from the supplied resourceName.
Public Instance Properties
BasePathPlaceHolder The special character that denotes the base (home, or root) path.
Description Returns a description for this resource.
Exists Does this resource actually exist in physical form?
File Returns a FileInfo handle for this resource.
InputStream Returns a Stream for this resource.
IsOpen Does this resource represent a handle with an open stream?
Protocol Returns the protocol associated with this resource (if any).
Uri Returns the Uri handle for this resource.
Public Instance Methods
CreateRelative Creates a new resource that is relative to this resource based on the supplied resourceName.
Equals Determines whether the specified Object is equal to the current Object.
GetHashCode Serves as a hash function for a particular type, suitable for use in hashing algorithms and data structures like a hash table.
GetType (inherited from Object)Gets the Type of the current instance.
ToString This implementation returns the Description of this resource.
Protected Instance Constructors
AbstractResource Overloaded. Initializes a new instance of the AbstractResource class.
Protected Instance Properties
PathSeparatorChars Gets those characters that are valid path separators for the resource type.
ResourcePath Gets the current path of the resource.
RootLocation Gets the root location of the resource.
SupportsRelativeResources Does this IResource support relative resource retrieval?
Protected Instance Methods
CreateResourceInstance Factory Method. Create a new instance of the current resource type using the given resourceName
Finalize (inherited from Object)Allows an Object to attempt to free resources and perform other cleanup operations before the Object is reclaimed by garbage collection.
GetResourceLoader The ResourceLoader to be used for resolving relative resources
IsRelativeResource Is the supplied resourceName relative?
MemberwiseClone (inherited from Object)Creates a shallow copy of the current Object.
ResolveBasePathPlaceHolder Resolves the presence of the basePathPlaceHolder value in the supplied resourceName into a path.
ResolveResourceNameWithoutProtocol Resolves the supplied resourceName to its value sans any leading protocol.
See Also
AbstractResource Class | Spring.Core.IO Namespace | IResource
bouzuya.hatenablog.com
Bouzuya's Diary
Solved A, B, C, and D of ABC045
I solved problems A, B, C, and D of ABC045 (AtCoder Beginner Contest 045).
use std::collections::HashMap;
use proconio::input;
fn main() {
input! {
h: usize,
w: usize,
n: usize,
ab: [(i64, i64); n],
};
let mut map = HashMap::new();
for (a, b) in ab.iter().copied() {
for y in (a - 1).max(2)..=(a + 1).min(h as i64 - 1) {
for x in (b - 1).max(2)..=(b + 1).min(w as i64 - 1) {
*map.entry((y, x)).or_insert(0) += 1;
}
}
}
let mut ans = vec![0_usize; 10];
for count in map.values().copied() {
ans[count] += 1;
}
ans[0] = (h - 2) * (w - 2) - map.len();
for a in ans {
println!("{}", a);
}
}
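Problem D asks, for each j from 0 to 9, how many of the (H-2)x(W-2) possible 3x3 subgrids contain exactly j painted cells. For readers who don't use Rust, here is the same counting idea as a Python sketch (a direct port of the logic in the Rust solution above; the sample values in the comment are from what I believe is the first sample of the problem):

```python
from collections import defaultdict


def solve(h: int, w: int, ab: list) -> list:
    """For j = 0..9, count the 3x3 subgrids containing exactly j painted
    cells -- a direct port of the Rust solution above. Subgrids are keyed
    by their center cell (y, x), with 2 <= y <= h-1 and 2 <= x <= w-1."""
    counts = defaultdict(int)
    for a, b in ab:
        # every 3x3 window whose center (y, x) covers the painted cell (a, b)
        for y in range(max(a - 1, 2), min(a + 1, h - 1) + 1):
            for x in range(max(b - 1, 2), min(b + 1, w - 1) + 1):
                counts[(y, x)] += 1
    ans = [0] * 10
    for c in counts.values():
        ans[c] += 1
    # subgrids that contain no painted cell at all never appear in `counts`
    ans[0] = (h - 2) * (w - 2) - len(counts)
    return ans


# Sample: H=4, W=5, painted cells below -> expected 0 0 0 2 4 0 0 0 0 0
print(solve(4, 5, [(1, 1), (1, 4), (1, 5), (2, 3), (3, 1), (3, 2), (3, 4), (4, 4)]))
```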
bouzuya/tsukota: I drew an icon for the app.
Today's commits.
In this post I will describe how to signal for your application to stop accepting new work and wait for any ongoing work to complete.
Graceful shutdown is useful for applications that, when possible, should not stop abruptly. For example a web server should attempt to finish serving any ongoing requests before shutting down.
Tokio has a great article on Graceful Shutdown with a low-level design; I assume you’re familiar with it & show how to build a high-level construct that leverages the building blocks from the article.
The Recipe
Graceful shutdown can be implemented using two of Tokio’s synchronization primitives:
1. tokio::sync::broadcast to signal shutdown
2. tokio::sync::mpsc to wait for all tasks to complete
As we will see, the Drop mechanics for these types will be useful for coordinating shutdown within a program.
The gist is that the program will know when to signal shutdown when it receives a message on the broadcast channel & it will know when all tasks have completed when all tokio::sync::mpsc::Senders have been dropped.
Drop
When objects go out of scope in Rust, they are dropped automatically.
It is possible to implement the Drop trait for a type to customize the behavior when it is dropped.
The Tokio channels are designed to close when either the sending or receiving sides are completely dropped (e.g., when the last Sender or Receiver is dropped).
Using Drop for Coordination
Now that we know that the Tokio channels will close when they are dropped, we can leverage them to coordinate shutdown.
Let’s look at both of the channels we will be using to see how they behave.
tokio::sync::broadcast
The broadcast channel is used to tell all ongoing tasks that shutdown has started.
From the tokio::sync::broadcast docs:
When all Sender handles have been dropped, no new values may be sent. At this point, the channel is “closed”. Once a receiver has received all values retained by the channel, the next call to recv will return with RecvError::Closed.
Every task will be provided with a receiver that it can use to check if shutdown has started.
When shutdown has started, the send half of the channel will be dropped, causing the tokio::sync::broadcast::Receiver to return Err(RecvError::Closed) when it is polled.
Clients will use this to determine if they should continue processing work or if they should finish.
tokio::sync::mpsc
The mpsc channel is used to wait for all ongoing tasks to complete.
From the tokio::sync::mpsc::Receiver::recv docs:
This method returns None if the channel has been closed and there are no remaining messages in the channel’s buffer. The channel is closed when all senders have been dropped, or when close is called.
Every task will be provided with a tokio::sync::mpsc::Sender that it can use to implicitly signal completion.
When all tasks are done, they will drop their tokio::sync::mpsc::Sender which will cause the tokio::sync::mpsc::Receiver to return None when it is polled.
High Level Design
We will hide the complexity of the low-level Graceful Shutdown implementation behind a high-level API.
Doing so will make it cleaner to integrate this functionality into a program since the method signatures will be simpler and clients who use this will have less to keep track of.
To accomplish this, we wrap the low-level logic inside two types:
pub struct ShutdownController { ... }
pub struct ShutdownMonitor { ... }
The program will create a ShutdownController instance when it starts up. It will be used to create listeners, signal shutdown, and wait for completion.
Clients will accept a ShutdownMonitor instance when they are created & use it to determine if they should continue processing work.
Example Usage
use shutdown_async::ShutdownController;
use tokio::task::spawn;
#[tokio::main]
async fn main() {
let shutdown = ShutdownController::new();
let t = spawn({
let mut monitor = shutdown.subscribe();
assert!(!monitor.is_shutdown());
async move {
monitor.recv().await;
assert!(monitor.is_shutdown());
}
});
shutdown.shutdown().await;
}
Signaling Shutdown
The implementation follows directly from the Graceful Shutdown article.
pub struct ShutdownController {
/// Used to tell all [`ShutdownMonitor`] instances that shutdown has started.
notify_shutdown: broadcast::Sender<()>,
/// Implicitly used to determine when all [`ShutdownMonitor`] instances have been dropped.
task_tracker: mpsc::Sender<()>,
/// Used to determine when all tasks have finished. Calling `recv()` on this channel
/// will return when all of the send halves of the `task_tracker` channel have been dropped.
task_waiter: mpsc::Receiver<()>,
}
impl ShutdownController {
pub fn new() -> Self {
let (notify_shutdown, _) = broadcast::channel::<()>(1);
let (task_tracker, task_waiter) = mpsc::channel::<()>(1);
Self {
notify_shutdown,
task_tracker,
task_waiter,
}
}
pub async fn shutdown(mut self) {
// Notify all tasks that shutdown has started
drop(self.notify_shutdown);
// Destroy our mpsc::Sender so that the mpsc::Receiver::recv() will return immediately
// once all tasks have completed (i.e. dropped their mpsc::Sender)
drop(self.task_tracker);
// Wait for all tasks to finish
let _ = self.task_waiter.recv().await;
}
pub fn subscribe(&self) -> ShutdownMonitor {
ShutdownMonitor::new(self.notify_shutdown.subscribe(), self.task_tracker.clone())
}
}
Waiting for Completion
Again, the implementation follows directly from the Graceful Shutdown article.
pub struct ShutdownMonitor {
/// `true` if the shutdown signal has been received
shutdown_received: bool,
/// The receive half of the channel used to listen for shutdown.
shutdown_notifier: broadcast::Receiver<()>,
/// Implicitly used to help [`ShutdownController`] understand when the program
/// has completed shutdown.
_task_tracker: mpsc::Sender<()>,
}
impl ShutdownMonitor {
fn new(
shutdown_notifier: broadcast::Receiver<()>,
_task_tracker: mpsc::Sender<()>,
) -> ShutdownMonitor {
ShutdownMonitor {
shutdown_received: false,
shutdown_notifier,
_task_tracker,
}
}
pub fn is_shutdown(&self) -> bool {
self.shutdown_received
}
pub async fn recv(&mut self) {
// If the shutdown signal has already been received, then return
// immediately.
if self.shutdown_received {
return;
}
// Cannot receive a "lag error" as only one value is ever sent.
let _ = self.shutdown_notifier.recv().await;
// Remember that the signal has been received.
self.shutdown_received = true;
}
}
Conclusion
We did not introduce any new concepts in this article. Instead, we simply wrapped the low-level Graceful Shutdown implementation in a high-level API.
See shutdown_async::ShutdownController on docs.rs to view the documentation for the API.
I personally found the high-level approach much easier to grok than the low-level approach. I hope you do too!
Multiplication Table Worksheet 12
Math, especially multiplication, forms the cornerstone of countless academic disciplines and real-world applications. Yet, for many learners, mastering multiplication can pose a challenge. To address this challenge, teachers and parents have embraced an effective tool: the Multiplication Table Worksheet 12.
Introduction to Multiplication Table Worksheet 12
This page has printable multiplication tables. It includes tables that are completely filled in, partly filled in, and blank, as well as games to help your students master basic facts with 12 as a factor. Under Multi-Digit Multiplication (2 Digits Times 1 Digit), you have a large selection of 2-digit by 1-digit multiplication.
It is quite likely that there are students who have mastered all of the multiplication facts up to the 12 times tables. In case they want or need an extra challenge, this section includes multiplication facts worksheets above 12, with the expectation that students will use mental math or recall to calculate the answers.
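As a quick illustration, the answer key for a simple times-table worksheet like these can be generated with a few lines of Python (a standalone sketch, not tied to any particular worksheet product):

```python
def times_table(n: int, upto: int = 12) -> list[str]:
    """Return the lines of a simple n-times-table answer key, from 1 x n
    up to upto x n."""
    return [f"{i} x {n} = {i * n}" for i in range(1, upto + 1)]


if __name__ == "__main__":
    for line in times_table(12):
        print(line)  # "1 x 12 = 12" ... "12 x 12 = 144"
```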
Significance of Multiplication Practice
Understanding multiplication is pivotal, laying a solid foundation for advanced mathematical concepts. Multiplication Table Worksheet 12 offers structured and targeted practice, cultivating a deeper understanding of this fundamental arithmetic operation.
Evolution of Multiplication Table Worksheet 12
Grade 5 multiplication worksheets: multiply by 10, 100, or 1,000 with missing factors; multiplying in parts (distributive property); multiply 1-digit by 3-digit numbers mentally; multiply in columns up to 2x4 digits and 3x3 digits; mixed 4-operations word problems.
For the 12 times table worksheet you can choose between three different sorts of exercise In the first exercise you have to draw a line from the sum to the correct answer In the second exercise you have to enter the missing number to complete the sum correctly In the third exercise you have to answer the sums which have been shuffled
From traditional pen-and-paper exercises to digitized interactive formats, Multiplication Table Worksheet 12 has evolved, catering to diverse learning styles and preferences.
Types of Multiplication Table Worksheet 12
Basic Multiplication Sheets
Basic exercises focusing on multiplication tables, helping learners build a strong math foundation.
Word Problem Worksheets
Real-life scenarios incorporated into problems, enhancing critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding quick mental math.
Benefits of Using Multiplication Table Worksheet 12
MULTIPLICATION TABLE WORKSHEET Learningenglish esl
Here you will find all the times tables exercises on worksheets. For instance, there are tables worksheets for 3rd grade that you can print here: multiplication table worksheets, 1 times table worksheets, 2 times table worksheets, and 11 and 12 times tables. You can also use the worksheet generator to create your own multiplication facts worksheets.
Download free 12 times table worksheets. The 12 times table worksheet PDF is an awesome tool that encourages kids to efficiently develop solid learning skills. Excellent knowledge of multiplying-by-12 activities enables kids to get along with any tricky multiplication task beyond x12. As a result, this worksheet offers well-designed practice.
Improved Mathematical Abilities
Consistent practice hones multiplication proficiency, improving overall math capabilities.
Enhanced Problem-Solving Abilities
Word problems in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning paces, fostering a comfortable and flexible learning environment.
How to Create Engaging Multiplication Table Worksheet 12
Incorporating Visuals and Colors
Vivid visuals and colors catch attention, making worksheets visually appealing and engaging.
Including Real-Life Situations
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Various Skill Levels
Customizing worksheets based on varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps
Online platforms offer diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners
Visual aids and diagrams support comprehension for students inclined toward visual learning.
Auditory Learners
Spoken multiplication problems or mnemonics cater to students who grasp concepts through auditory means.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repetitive exercises and varied problem layouts maintains interest and comprehension.
Giving Useful Feedback
Feedback helps identify areas for improvement, encouraging ongoing progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Obstacles
Dull drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Math
Negative perceptions around math can hinder progress; creating a positive learning atmosphere is essential.
Impact of Multiplication Table Worksheet 12 on Academic Performance
Studies and Research Findings
Research shows a positive relationship between consistent worksheet use and improved math performance.
Conclusion
Multiplication Table Worksheet 12 resources are versatile tools, promoting mathematical proficiency in learners while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only improve multiplication skills but also foster critical thinking and problem-solving abilities.
Multiplication Facts Worksheets Math Drills
https://www.math-drills.com/multiplication.php
Free 12 times table worksheets at Timestables Multiplication Tables
https://www.timestables.com/12-times-table-worksheets.html
These free 12 times table worksheets provide you with an excellent tool to practice and memorise the tables. The 12 times table is probably the hardest multiplication table to memorise. However, there are several tips to help you learn this table quicker. Let's take a look at some of the sums: 1 x 12 = 12; alternatively, this is 1 x 10 + 1 x 2.
FAQs (Frequently Asked Questions).
Are Multiplication Table Worksheet 12 resources appropriate for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them versatile for various learners.
How frequently should students practice using Multiplication Table Worksheet 12?
Consistent practice is crucial. Regular sessions, preferably a few times a week, can produce considerable improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with diverse learning methods for comprehensive skill growth.
Are there online platforms offering free Multiplication Table Worksheet 12?
Yes, many educational websites offer free access to a wide range of Multiplication Table Worksheet 12 resources.
How can parents support their children's multiplication practice at home?
Encouraging regular practice, providing support, and creating a positive learning environment are helpful actions.
How to create a Jamstack pet store app using Stripe, Gatsbyjs, and Netlify functions
Tapas Adhikary
Published on Jan 13, 2021
13 min read
Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. One of the aspects of Jamstack is, it is practically serverless. To put it more clearly, we do not maintain any server-side applications. Rather, sites use existing services (like email, media, payment platform, search, and so on).
Did you know that 70% to 80% of the features that once required a custom back-end can now be built entirely without one? In this article, we will learn to build a Jamstack e-commerce application that includes the Gatsby framework, Netlify serverless functions, Stripe payments, and Netlify hosting.
What are we building today?
I love Cats 🐈. We will build a pet store app called Happy Paws for our customers to purchase some adorable Cats. Customers can buy cats by adding their details to the cart 🛒 and then finally checkout by completing the payment process 💳.
Here is a quick glimpse of the app we intend to build(This is my first ever youtube video with voice. 😍)
TL;DR
In case you want to look into the code or try out the demo in advance, please find them here,
Please note, Stripe is NOT available in all countries. Please check if Stripe is available in your country. The Demo setup uses a test Stripe account created from the India region. Hence, it is guaranteed to work when accessed from India, and I hope it works elsewhere. However, that doesn't stop you from following the rest of the tutorial.
Create the Project Structure
We will use a Gatsby starter to create the initial project structure. First, we need to install the Gatsby CLI globally. Open a command prompt and run this command.
npm install -g gatsby-cli
After this, use this command to create a gatsby project structure,
gatsby new happy-paws https://github.com/gatsbyjs/gatsby-starter-default
Once done, you will see a project folder called happy-paws has been created. Try these commands next,
cd happy-paws
gatsby develop
You should be able to access the interface using http://localhost:8000/
Gatsby Default Starter.png
Setup Netlify Functions
To set up netlify functions, stop the gatsby develop command if running. Install the netlify-cli tool to run these functions locally.
npm install -g netlify-cli
Create a file called netlify.toml at the root of the project folder with the following content,
[build]
functions = "functions"
[[redirects]]
from = "/api/*"
to = "/.netlify/functions/:splat"
status = 200
The above file will tell the Netlify tool to pick up the functions from the functions folder at the build time. By default, netlify functions will be available as an API and accessible using a URL prefix, /.netlify/functions. This may not be very user friendly. Hence we want to use a redirect URL as, /api/*. It means, a URL like /.netlify/functions/getProducts can now be accessed like, /api/getProducts.
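To see what this rewrite does in practice, here is a rough sketch of the mapping as a plain function (an illustration only, not Netlify's actual implementation):

```javascript
// Rough sketch of the redirect rule: any path under /api/ is rewritten
// to /.netlify/functions/ with the same suffix (the :splat).
function applyRedirect(path) {
  const match = path.match(/^\/api\/(.+)$/);
  return match ? "/.netlify/functions/" + match[1] : path;
}

console.log(applyRedirect("/api/get-products"));
// -> /.netlify/functions/get-products
```

So a request to /api/get-products is served by the get-products function, while any other path is left untouched.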
Next, create a folder called functions at the root of the project folder and create a data folder inside it. Create a file called products.json inside the data folder with the following content.
[
{
"sku": "001",
"name": "Brownie",
"description": "She is adorable, child like. The cover photo is by Dorota Dylka from Unsplash.",
"image": {
"url": "https://res.cloudinary.com/atapas/image/upload/v1604912361/cats/dorota-dylka-_VX-6amHgDY-unsplash_th9hg9.jpg",
"key": "brownie.jpg"
},
"amount": 2200,
"currency": "USD"
},
{
"sku": "002",
"name": "Flur",
"description": "Flur is a Queen. The cover photo is by Milada Vigerova from Unsplash.",
"image": {
"url": "https://res.cloudinary.com/atapas/image/upload/v1604829841/cats/milada-vigerova-7E9qvMOsZEM-unsplash_etgmbe.jpg",
"key": "flur.jpg"
},
"amount": 2000,
"currency": "USD"
}
]
Here we have added information about two pet cats. You can add as many as you want. Each of the cats is a product for us to sell. It contains information like SKU (a unique identifier common for product inventory management), name, description, image, amount, and currency. Note that the amount is expressed in the smallest currency unit (cents, following the Stripe convention), so 2200 means $22.00.
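Because the amount field follows Stripe's smallest-currency-unit convention (an assumption based on how these values are later passed to Stripe's line items), a small hypothetical helper can turn it into a display price:

```javascript
// Sketch: format a Stripe-style amount (cents) for display.
// formatAmount is a made-up helper name for illustration.
function formatAmount(amount, currency) {
  return (amount / 100).toFixed(2) + " " + currency;
}

console.log(formatAmount(2200, "USD")); // 22.00 USD
```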
Next, create a file called, get-products.js inside the functions folder with the following content,
const products = require('./data/products.json');
exports.handler = async () => {
return {
statusCode: 200,
body: JSON.stringify(products),
};
};
This is our first Netlify Serverless function. It is importing the products from the products.json file and returning a JSON response. This function will be available as API and accessible using /api/get-products.
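Since the handler is just an async function, you can exercise it directly in Node before deploying. Here is a minimal sketch with inline data standing in for products.json:

```javascript
// Minimal sketch: invoking a Netlify-style handler directly.
// The inline products array stands in for requiring ./data/products.json.
const products = [{ sku: "001", name: "Brownie" }];

const handler = async () => ({
  statusCode: 200,
  body: JSON.stringify(products),
});

handler().then((response) => {
  console.log(response.statusCode); // 200
  console.log(JSON.parse(response.body)[0].name); // Brownie
});
```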
Execute these commands from the root of the project to access this function,
netlify login
This will open a browser tab to help you create an account with Netlify and log in using the credentials.
netlify dev
To run netlify locally on port 8888 by default. Now the API will be accessible at http://localhost:8888/api/get-products. Open a browser and try this URL.
localhost_8888_api_get-products.png
The beauty of it is that the Gatsby UI is also available on the http://localhost:8888 URL. We will not access the user interface on the 8000 port; rather, we will use the 8888 port to access both the user interface and the APIs.
Fetch products into the UI
Let us now fetch these products(cats) into the UI. Use this command from the root of the project folder to install a few dependencies first(you can use the npm install command as well),
yarn add axios dotenv react-feather
Now create a file called, products.js inside src/components with the following content,
import React, { useState, useEffect } from 'react';
import axios from "axios";
import { ShoppingCart } from 'react-feather';
import Image from './image';
import './products.css';
const Products = () => {
const [products, setProducts] = useState([]);
const [loaded, setLoaded] = useState(false);
const [cart, setCart] = useState([]);
useEffect(() => {
axios("/api/get-products").then(result => {
if (result.status !== 200) {
console.error("Error loading shopnotes");
console.error(result);
return;
}
setProducts(result.data);
setLoaded(true);
});
}, []);
const addToCart = sku => {
// Code to come here
}
const buyOne = sku => {
// Code to come here
}
const checkOut = () => {
// Code to come here
}
return (
<>
<div className="cart" onClick={() => checkOut()}>
<div className="cart-icon">
<ShoppingCart
className="img"
size={64}
color="#ff8c00"
/>
</div>
<div className="cart-badge">{cart.length}</div>
</div>
{
loaded ? (
<div className="products">
{products.map((product, index) => (
<div className="product" key={`${product.sku}-image`}>
<Image fileName={product.image.key}
style={{ width: '100%' }}
alt={product.name} />
<h2>{product.name}</h2>
<p className="description">{product.description}</p>
{/* Stripe amounts are in the smallest currency unit, so divide by 100 for display */}
<p className="price">Price: <b>${(product.amount / 100).toFixed(2)}</b></p>
<button onClick={() => buyOne(product.sku)}>Buy Now</button>
{' '}
<button onClick={() => addToCart(product.sku)}>Add to Cart</button>
</div>
))
}
</div>
) :
(
<h2>Loading...</h2>
)
}
</>
)
};
export default Products;
Note, we are using the axios library to make an API call to fetch all the products. On fetching all the products, we loop through and add the information like image, description, amount, etc. Please note, we have kept three empty methods. We will add code for them a little later.
Add a file called products.css inside the src/components folder with the following content,
header {
background: #ff8c00;
padding: 1rem 2.5vw;
font-size: 35px;
}
header a {
color: white;
font-weight: 800;
text-decoration: none;
}
main {
margin: 2rem 2rem 2rem 2rem;
width: 90vw;
}
.products {
display: grid;
gap: 2rem;
grid-template-columns: repeat(3, 1fr);
margin-top: 3rem;
}
.product img {
max-width: 100%;
}
.product button {
background: #ff8c00;
border: none;
border-radius: 0.25rem;
color: white;
font-size: 1.25rem;
font-weight: 800;
line-height: 1.25rem;
padding: 0.25rem;
cursor: pointer;
}
.cart {
position: absolute;
display: block;
width: 48px;
height: 48px;
top: 100px;
right: 40px;
cursor: pointer;
}
.cart-badge {
position: absolute;
top: -11px;
right: -13px;
background-color: #FF6600;
color: #ffffff;
font-size: 14px;
font-weight: bold;
padding: 5px 14px;
border-radius: 19px;
}
Now, replace the content of the file, index.js with the following content,
import React from "react";
import Layout from "../components/layout";
import SEO from "../components/seo";
import Products from '../components/products';
const IndexPage = () => (
<Layout>
<SEO title="Happy Paws" />
<h1>Hey there 👋</h1>
<p>Welcome to the Happy Paws cat store. Get a Cat 🐈 and feel awesome.</p>
<small>
This is in test mode. That means you can check out using <a href="https://stripe.com/docs/testing#cards" target="_blank" rel="noreferrer">any of the test card numbers.</a>
</small>
<Products />
</Layout>
)
export default IndexPage;
At this stage, start the netlify dev if it is not running already. Access the interface using http://localhost:8888/. You should see the page like this,
Initial_product_info.png
It seems we have some problems with the Cat images. However, all other details of each of the cat products seem to be fine. To fix that, add two cat images of your choice under the src/images folder. The images' names should be the same as the image key mentioned in the functions/data/products.json file. In our case, the names are brownie.jpg and flur.jpg.
Edit the src/components/Image.js file and replace the content with the following,
import React from 'react'
import { graphql, useStaticQuery } from 'gatsby'
import Img from 'gatsby-image';
const Image = ({ fileName, alt, style }) => {
const { allImageSharp } = useStaticQuery(graphql`
query {
allImageSharp {
nodes {
fluid(maxWidth: 1600) {
originalName
...GatsbyImageSharpFluid_withWebp
}
}
}
}
`)
const fluid = allImageSharp.nodes.find(n => n.fluid.originalName === fileName)
.fluid
return (
<figure>
<Img fluid={fluid} alt={alt} style={style} />
</figure>
)
}
export default Image;
Here we are using Gatsby’s sharp plugin to prebuild the images. Now rerun the netlify dev command and access the user interface to see the correct images.
with_cat.png
A few more things, open the src/components/Header.js file and replace the content with this,
import { Link } from "gatsby"
import PropTypes from "prop-types"
import React from "react"
const Header = ({ siteTitle }) => (
<header>
<Link to="/">
{siteTitle}
</Link>
</header>
)
Header.propTypes = {
siteTitle: PropTypes.string,
}
Header.defaultProps = {
siteTitle: ``,
}
export default Header
Now the header should look much better like,
header-initial.png
But we want to change that default header text to something meaningful. Open the file gatsby-config.js and edit the title and description of the siteMetadata object as follows:
siteMetadata: {
title: `Happy Paws - Cats love you!`,
description: `Cat store is the one point solution for your Cat`,
},
This will restart the Gatsby server. Once the server is up, you should see the header text changed to,
header-final.png
Next, let us do the required set up for the Netlify and Stripe integration.
Setup Stripe
Browse to the functions folder and initialize a node project,
npm init -y
This will create a file called package.json. Install dependencies using the command,
yarn add stripe dotenv
This command will install stripe and dotenv library, which is required to manage the environment variables locally.
Get your Stripe test credentials
• Log into Stripe at https://dashboard.stripe.com/login
• Make sure the “Viewing test data” switch is toggled on
• Click “Developers” in the left-hand menu
• Click “API keys”.
• Copy both the publishable key and secret key from the “Standard keys” panel
Stripe_API_Keys.png
Create a file called .env at the root of the project with the following content,
STRIPE_PUBLISHABLE_KEY=YOUR_STRIPE_PUBLISHABLE_KEY
STRIPE_SECRET_KEY=YOUR_STRIPE_SECRET_KEY
Note to replace the YOUR_STRIPE_PUBLISHABLE_KEY and YOUR_STRIPE_SECRET_KEY with the actual values got from the Stripe dashboard, respectively.
Create a Checkout Function
Next is to create a checkout function using Netlify serverless functions and Stripe. Create a file called create-checkout.js with the following content under the functions folder.
require("dotenv").config();
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
const inventory = require('./data/products.json');
const getSelectedProducts = skus => {
let selected = [];
skus.forEach(sku => {
const found = inventory.find((p) => p.sku === sku);
if (found) {
selected.push(found);
}
});
return selected;
}
const getLineItems = products => {
return products.map(
obj => ({
name: obj.name,
description: obj.description,
images:[obj.image.url],
amount: obj.amount,
currency: obj.currency,
quantity: 1
}));
}
exports.handler = async (event) => {
const { skus } = JSON.parse(event.body);
const products = getSelectedProducts(skus);
const validatedQuantity = 1;
const lineItems = getLineItems(products);
console.log(products);
console.log(lineItems);
const session = await stripe.checkout.sessions.create({
payment_method_types: ['card'],
billing_address_collection: 'auto',
shipping_address_collection: {
allowed_countries: ['US', 'CA', 'IN'],
},
success_url: `${process.env.URL}/success`,
cancel_url: process.env.URL,
line_items: lineItems,
});
return {
statusCode: 200,
body: JSON.stringify({
sessionId: session.id,
publishableKey: process.env.STRIPE_PUBLISHABLE_KEY,
}),
};
};
Note here we are expecting a payload with the selected product's SKU information. Upon getting that, we will take out other relevant information of the selected products from the inventory, i.e., products.json file. Next, we create the line item object and pass it to the stripe API for creating a Stripe session. We also specify to delegate to a page called success.html once the payment is successful.
UI Changes for Checkout
The last thing we need to do now is to call the new serverless function from the UI. First, we need to install the stripe library for clients. Execute this command from the root of the project folder,
yarn add @stripe/stripe-js
Create a folder called utils under the src folder. Create a file named stripejs.js under src/utils with the following content,
import { loadStripe } from '@stripe/stripe-js';
let stripePromise;
const getStripe = (publishKey) => {
if (!stripePromise) {
stripePromise = loadStripe(publishKey);
}
return stripePromise;
}
export default getStripe;
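The module above memoizes the loadStripe call so the Stripe script is only loaded once. The same pattern in isolation looks like this (getOnce and the factory objects are made-up names for illustration):

```javascript
// Generic sketch of the memoization used by getStripe: the factory runs
// only on the first call, and every later call returns the cached value.
let cached;
function getOnce(factory) {
  if (!cached) {
    cached = factory();
  }
  return cached;
}

const first = getOnce(() => ({ id: 1 }));
const second = getOnce(() => ({ id: 2 })); // factory ignored, cache returned

console.log(first === second); // true
```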
This is to get the stripe instance globally at the client-side using a singleton method. Now open the products.js file under src/components to make the following changes,
Import the getStripe function from 'utils/stripejs'.
Time to add code for the functions addToCart, buyOne, and checkOut, as we left them empty before.
const addToCart = sku => {
setCart([...cart, sku]);
}
const buyOne = sku => {
const skus = [];
skus.push(sku);
const payload = {
skus: skus
};
performPurchase(payload);
}
const checkOut = () => {
console.log('Checking out...');
const payload = {
skus: cart
};
performPurchase(payload);
console.log('Check out has been done!');
}
Last, add the function performPurchase, which will actually make the API call when the Buy Now or Checkout buttons are clicked.
const performPurchase = async payload => {
const response = await axios.post('/api/create-checkout', payload);
console.log('response', response);
const stripe = await getStripe(response.data.publishableKey);
const { error } = await stripe.redirectToCheckout({
sessionId: response.data.sessionId,
});
if (error) {
console.error(error);
}
}
Now restart netlify dev and open the app in the browser, http://localhost:8888
You can start the purchase by clicking on the Buy Now button or add the products to the cart and click on the cart icon at the top right of the page. Now the stripe session will start, and the payment page will show up,
payment_details.png
Provide the details and click on the Pay button. Please note, you can get the test card information from here. The payment should be successful, and you are supposed to land on a success page as we have configured previously. But we have not created a success page yet. Let’s create one.
Create a file called success.js under the src/pages folder with the following content,
import React from 'react';
import Layout from "../components/layout"
import SEO from "../components/seo"
const Success = () => {
return (
<Layout>
<SEO title="Cat Store - Success" />
<h1>Yo, Thank You!</h1>
<img src="https://media.giphy.com/media/b7ubqaIl48xS8/giphy.gif" alt="dancing cat"/>
</Layout>
)
}
export default Success;
Complete the payment to see this success page in action after a successful payment,
success.png
Great, we have the Jamstack pet store app running using the Netlify serverless functions, Stripe Payment API, and Gatsby framework. But it is running locally. Let us deploy it using Netlify Hosting to access it publicly.
Deploy and Host on Netlify CDN
First, commit and push all the code to your GitHub repository. Login to your netlify account from the browser and click on the ‘New site from Git’ button. Select the option GitHub from the next page,
netlify_1.png
Search and select your GitHub repository to deploy and host,
netlify_2.png
Finally, provide the build options as shown below and click on the ‘Deploy Site’ button.
netlify_3.png
That’s all, and you should have the site live with the app.
Congratulations 🎉 !!! You have successfully built a Jamstack pet shop application with Netlify Serverless functions, Stripe APIs, Gatsby framework, and deployed it on Netlify CDN.
Before we end...
Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow. Please like/share this article so that it reaches others as well.
Do not forget to check out my previous articles on Jamstack,
Editor widget with simple toolbar
Description
This sample shows how to edit multiple feature layers using the Editor widget. The editor widget is highly customizable, allowing developers to add and remove functionality depending on their editing use case. In this sample, you can modify existing hydrologic features or create new ones.
The editor widget has an optional toolbar that lets the developer define additional details they want to include in the editing application. In this sample the freehand polygon and polyline drawing tools are added to provide end users the ability to sketch features as if they were drawing them with a pencil. The reshape tool is also added so end-users can easily adjust the shape of existing features.
Note that a custom template picker is defined in the settings object. This is useful when you want to modify the default appearance of the template picker. In this case the number of columns is set and the template picker determines the appropriate number of rows.
This sample enables snapping, added at version 2.3, using the map.enableSnapping() method. Snapping can be disabled using map.disableSnapping().
Code
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="viewport" content="initial-scale=1, maximum-scale=1,user-scalable=no">
<title>Edit rivers and waterbodies</title>
<link rel="stylesheet" href="https://js.arcgis.com/3.43/dijit/themes/claro/claro.css">
<link rel="stylesheet" href="https://js.arcgis.com/3.43/esri/css/esri.css">
<style>
html,body{height:100%;width:100%;margin:0;overflow:hidden;}
#map{
padding:0;
}
#header{
font-size: 1.1em;
font-family: sans-serif;
padding-left: 1em;
padding-top:4px;
color:#660000;
}
.templatePicker {
border: none;
}
.dj_ie .infowindow .window .top .right .user .content { position: relative; }
.dj_ie .simpleInfoWindow .content { position: relative; }
</style>
<script src="https://js.arcgis.com/3.43/"></script>
<script>
var map;
require([
"esri/map",
"esri/tasks/GeometryService",
"esri/layers/ArcGISTiledMapServiceLayer",
"esri/layers/FeatureLayer",
"esri/Color",
"esri/symbols/SimpleMarkerSymbol",
"esri/symbols/SimpleLineSymbol",
"esri/dijit/editing/Editor",
"esri/dijit/editing/TemplatePicker",
"esri/config",
"dojo/i18n!esri/nls/jsapi",
"dojo/_base/array", "dojo/parser", "dojo/keys",
"dijit/layout/BorderContainer", "dijit/layout/ContentPane",
"dojo/domReady!"
], function(
Map, GeometryService,
ArcGISTiledMapServiceLayer, FeatureLayer,
Color, SimpleMarkerSymbol, SimpleLineSymbol,
Editor, TemplatePicker,
esriConfig, jsapiBundle,
arrayUtils, parser, keys
) {
parser.parse();
// snapping is enabled for this sample - change the tooltip to reflect this
jsapiBundle.toolbars.draw.start = jsapiBundle.toolbars.draw.start + "<br>Press <b>ALT</b> to enable snapping";
//This service is for development and testing purposes only. We recommend that you create your own geometry service for use within your applications.
esriConfig.defaults.geometryService = new GeometryService("https://utility.arcgisonline.com/ArcGIS/rest/services/Geometry/GeometryServer");
map = new Map("map", {
basemap: "satellite",
center: [-96.541, 38.351],
zoom: 14,
slider: false
});
map.on("layers-add-result", initEditor);
//add boundaries and place names
var labels = new ArcGISTiledMapServiceLayer("https://server.arcgisonline.com/ArcGIS/rest/services/Reference/World_Boundaries_and_Places/MapServer");
map.addLayer(labels);
var responsePoints = new FeatureLayer("https://sampleserver6.arcgisonline.com/arcgis/rest/services/Wildfire/FeatureServer/0", {
mode: FeatureLayer.MODE_ONDEMAND,
outFields: ['*']
});
var responsePolys = new FeatureLayer("https://sampleserver6.arcgisonline.com/arcgis/rest/services/Wildfire/FeatureServer/2", {
mode: FeatureLayer.MODE_ONDEMAND,
outFields: ['*']
});
map.addLayers([responsePolys, responsePoints]);
function initEditor(evt) {
var templateLayers = arrayUtils.map(evt.layers, function(result){
return result.layer;
});
var templatePicker = new TemplatePicker({
featureLayers: templateLayers,
grouping: true,
rows: "auto",
columns: 3
}, "templateDiv");
templatePicker.startup();
var layers = arrayUtils.map(evt.layers, function(result) {
return { featureLayer: result.layer };
});
var settings = {
map: map,
templatePicker: templatePicker,
layerInfos: layers,
toolbarVisible: true,
createOptions: {
polylineDrawTools:[ Editor.CREATE_TOOL_FREEHAND_POLYLINE ],
polygonDrawTools: [ Editor.CREATE_TOOL_FREEHAND_POLYGON,
Editor.CREATE_TOOL_CIRCLE,
Editor.CREATE_TOOL_TRIANGLE,
Editor.CREATE_TOOL_RECTANGLE
]
},
toolbarOptions: {
reshapeVisible: true
}
};
var params = { settings: settings };
var myEditor = new Editor(params, 'editorDiv');
//define snapping options
var symbol = new SimpleMarkerSymbol(
SimpleMarkerSymbol.STYLE_CROSS,
15,
new SimpleLineSymbol(
SimpleLineSymbol.STYLE_SOLID,
new Color([255, 0, 0, 0.5]),
5
),
null
);
map.enableSnapping({
snapPointSymbol: symbol,
tolerance: 20,
snapKey: keys.ALT
});
myEditor.startup();
}
});
</script>
</head>
<body class="claro">
<div id="main" data-dojo-type="dijit/layout/BorderContainer" data-dojo-props="design:'headline'" style="width:100%;height:100%;">
<div data-dojo-type="dijit/layout/ContentPane" id="header" data-dojo-props="region:'top'">
Edit Hydrography
</div>
<div data-dojo-type="dijit/layout/ContentPane" data-dojo-props="region:'left'" style="width: 300px;overflow:auto;">
<div id="templateDiv"></div>
<div id="editorDiv"></div>
</div>
<div data-dojo-type="dijit/layout/ContentPane" id="map" data-dojo-props="region:'center'"></div>
</div>
</body>
</html>
Understanding Callback Function
When we say callback, we usually mean an asynchronous call: it does not produce its result immediately, but takes some time to complete.
The callback mechanism is usually used for expensive operations, e.g. downloading a file or performing I/O.
Generally, a callback in JavaScript is a function that is passed to another function (call it anotherFunc) as a parameter and is executed inside anotherFunc. This usage is also known as the callback pattern.
Here is common callback code you might see in jQuery:
$("#buttonId").click(function(){
alert("Button is clicked");
});
As you can see, we pass a function into the click method, and that function is executed when the button is clicked. The above example is a typical use of a callback function and is widely used in jQuery.
Another typical example of a callback is an Ajax call to the server with the $.get() or $.post() method:
$("#buttonId").click(function(){
$.get("any.jsp", function(data, status){
// The function(data, status) is callback function here.
alert("Data: " + data + "\nStatus: " + status);
});
});
//Here is the Syntax for doing post call.
$.post(URL,data,callback);
Creating your own callback function
<!DOCTYPE html>
<html>
<head>
<title>Callback function</title>
</head>
<body>
<div id="divAdd" onclick="getInformation.add(10, 10);"
style="height:10px; display:block;"> Add Click </div>
</body>
<script>
var getInformation = {
"names": function () {
alert("get the names");
},
"add" : function(x, y){
alert(x+y);
}
};
/**
The names and add methods are used as callback functions.
The names function is called at page load,
and add is called on click of the div element.
*/
window.addEventListener("load", getInformation.names, false);
</script>
</html>
// Let's define an object with properties fullName and a method setFullName.
// We will later pass the method as a callback function to another function.
var formatName = {
fullName: "",
// set fullName by appending firstName and lastName.
setFullName: function (firstName, lastName) {
// this refers to this object, so this.fullName sets its fullName property.
this.fullName = firstName + " " + lastName;
}
}
function getNameInfo(firstName, lastName, callback, callbackObj) {
// Do your application logic here.
//Here we are doing a call to the function being passed
//as a parameter. Also it's a good practice to always check
//for type before calling any callback method.
if (typeof callback === "function")
callback.apply(callbackObj, [firstName, lastName]);
}
getNameInfo("Deepak", "Singh", formatName.setFullName, formatName);
console.log(formatName.fullName); // Deepak Singh
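To make the role of the callbackObj argument concrete, here is a minimal sketch (the counter/run names are made up for illustration). Without apply (or call), `this` inside the callback would not point at the object that owns the method:

```javascript
// Hypothetical example: why passing a "this" object along with the callback matters.
var counter = {
  total: 0,
  add: function (n) {
    // `this` is whatever apply() binds it to.
    this.total += n;
  }
};

function run(callback, callbackObj, value) {
  // Good practice: check the type before calling the callback.
  if (typeof callback === "function") {
    // apply() binds `this` inside add() to callbackObj (here: counter).
    // Calling callback(value) directly would lose that binding.
    callback.apply(callbackObj, [value]);
  }
}

run(counter.add, counter, 5);
console.log(counter.total); // 5
```

If run invoked callback(value) without apply, `this` inside add would not be counter (in strict mode it would be undefined), so counter.total would never be updated.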
Hope this is useful and has given you an understanding of callback functions.
The Sum of Infinitely Many Consecutive Numbers
It is sometimes claimed that mathematics contains strange theorems, and these get "proved" one way or another. Here I share a theorem that can serve as a nice example of this: 1+2+3+4+5+... = -1/12. The video gives an explanation of why this is supposed to hold; the odd part is that the sum of the ever-growing positive consecutive integers is shown to equal a negative number like -1/12. When the steps are examined carefully, the result is reached using three different series and operations that look mathematically correct.
Premise: S1 = 1-1+1-1+1-1+1-1+... = 1/2
S2 = 1-2+3-4+5-6+... = ?
S = 1+2+3+4+5+... = ?
If we take twice the premise S2, writing the two copies one under the other (shifted by one place) and adding,
2S2 = 1-1+1-1+... = S1, which gives the sum 1/2.
So S2 = 1/4.
Examining the expression S-S2, writing the series one under the other and subtracting,
we obtain a sum of the form 0+4+0+8+0+12+0+... . Factoring 4 out of 4+8+12+..., the sum 4(1+2+3+...) equals 4S, so the following falls out:
4S = S-S2; substituting the value S2 = 1/4 found earlier,
4S = S-1/4
3S = -1/4
S = -1/12
So S = 1+2+3+4+5+... = -1/12.
One problem that stands out in this method of proof is the sum obtained by shifting the series one place sideways when computing 2S2. When the terms are paired one by one, they face each other all the way to infinity; shifting one place before adding really means the upper infinite series has one fewer term position than the lower one, and this should make us look at such handling of countable infinity with suspicion. The premise S1 = 1/2 is not correct either.
A related explanation is quoted below:
"Recently a very strange result has been making the rounds. It says that when you add up all the natural numbers
1+2+3+4+... then the answer to this sum is -1/12. The idea featured in a Numberphile video (see below), which claims to prove the result and also says that it's used all over the place in physics. People found the idea so astounding that it even made it into the New York Times. So what does this all mean?
The maths
First of all, the infinite sum of all the natural number is not equal to -1/12. You can easily convince yourself of this by tapping into your calculator the partial sums
$S_1 = 1$
$S_2 = 1+2 = 3$
$S_3 = 1+2+3=6$
$S_4 = 1+2+3+4=10$
$...$
$S_ n = 1+2+3+4+ ... +n,$
and so on. The $S_ n$ get larger and larger the larger $n$ gets, that is, the more natural numbers you include. In fact, you can make $S_ n$ as large as you like by choosing $n$ large enough. For example, for $n=1000$ you get
\[ S_ n = 500,500, \]
and for $n = 100,000$ you get
\[ S_ n = 5,000,050,000. \]
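The partial sums quoted here follow a simple closed form, which makes the divergence explicit (a standard identity, consistent with the two values above):

```latex
S_n = 1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2},
\qquad
S_{1000} = \frac{1000 \cdot 1001}{2} = 500{,}500,
\qquad
S_{100{,}000} = 5{,}000{,}050{,}000 .
```

Since $n(n+1)/2$ grows without bound, $S_n$ eventually exceeds any fixed value.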
This is why mathematicians say that the sum
\[ 1+2+3+4+ ... \]
diverges to infinity. Or, to put it more loosely, that the sum is equal to infinity.
So where does the -1/12 come from? The wrong result actually appeared in the work of the famous Indian mathematician Srinivasa Ramanujan in 1913. But Ramanujan knew what he was doing and had a reason for writing it down. He had been working on what is called the Euler zeta function. To understand what that is, first consider the infinite sum
$S = 1+1/4+1/9+1/16+ ... .$
You might recognise this as the sum you get when you take each natural number, square it, and then take the reciprocal:
$S = 1+1/2^2+1/3^2+1/4^2... .$
Now this sum does not diverge. If you take the sequence of partial sums as we did above,
$S_1 = 1$
$S_2 = 1+1/2^2 = 5/4=1.25$
$S_3 = 1+1/2^2+1/3^2 = 49/36 = 1.361...$
$...$
$S_ n =1+1/2^2+1/3^2 + ... + 1/n^2,$
then the results you get get arbitrarily close, without ever exceeding, the number $\pi ^2/6 = 1.644934... .$ Mathematicians say the sum converges to $\pi ^2/6$, or more loosely, that it equals $\pi ^2/6.$ Now what happens when instead of raising those natural numbers in the denominator to the power of 2, you raise it to some other power $x$? It turns out that the corresponding sum
\[ S(x) = 1+1/2^ x+1/3^ x+1/4^ x... \]
converges to a finite value as long as the power $x$ is a number greater than $1$. For every $x > 1$, the expression $S(x)$ has a well-defined, finite value. $S(x)$ is what’s called a function, and it’s called the Euler zeta function after the prolific 18th century mathematician Leonhard Euler. So far, so good. But what happens when you plug in a value of $x$ that is less than 1? For example, what if you plug in $x=-1$? Let’s see.
\[ S(-1) = 1+1/2^{-1}+1/3^{-1}+1/4^{-1}... \]
\[ = 1+2+3+4+ ... . \]
So you recover our original sum, which, as we know, diverges. The same is true for any other values of $x$ less than or equal to 1: the sum diverges.
Extending the Euler zeta function
As it stands the Euler zeta function S(x) is defined for real numbers x that are greater than 1. The real numbers are part of a larger family of numbers called the complex numbers. And while the real numbers correspond to all the points along an infinitely long line, the complex numbers correspond to all the points on a plane, containing the real number line. That plane is called the complex plane. Just as you can define functions that take real numbers as input you can define functions that take complex numbers as input.
One amazing thing about functions of complex numbers is that if you know the function sufficiently well for some set of inputs, then (up to some technical details) you can know the value of the function everywhere else on the complex plane. This method of extending the definition of a function is known as analytic continuation. The Euler zeta function is defined for real numbers greater than 1. Since real numbers are also complex numbers, we can regard it as a complex function and then apply analytic continuation to get a new function, defined on the whole plane but agreeing with the Euler zeta function for real numbers greater than 1. That's the Riemann zeta function. But there is also another thing you can do. Using some high-powered mathematics (known as complex analysis, see the box) there is a way of extending the definition of the Euler zeta function to numbers $x$less than or equal to 1 in a way that gives you finite values. In other words, there is a way of defining a new function, call it $\zeta (x),$ so that for $x>1$
$\zeta (x) = S(x) = 1+1/2^ x+1/3^ x+1/4^ x...,$
and for $x\leq 1$ the function $\zeta (x)$ has well-defined, finite values. This method of extension is called analytic continuation and the new function you get is called the Riemann zeta function, after the 19th century mathematician Bernhard Riemann. (Making this new function give you finite values for $x \leq 1$ involves cleverly subtracting another divergent sum, so that the infinity from the first divergent sum minus the infinity from the second divergent sum gives you something finite.) OK. So now we have a function $\zeta (x)$ that agrees with Euler’s zeta function $S(x)$ when you plug in values $x>1$. When you plug in values $x \leq 1$, the zeta function gives you a finite output. What value do you get when you plug $x=-1$ into the zeta function? You’ve guessed it:
\[ \zeta (-1)=-1/12. \]
If you now make the mistake of believing that $\zeta (x) = S(x)$ for $x=-1$, then you get the (wrong) expression
\[ S(-1) = 1+2+3+4+ ... = \zeta (-1) = -1/12. \]
This is one way of making sense of Ramanujan’s mysterious expression.
The trick
So how did the people in the Numberphile video "prove" that the natural numbers all add up to -1/12? The real answer is that they didn’t. Watching the video is like watching a magician and trying to spot them slipping the rabbit into the hat. Step one of the "proof" tries to persuade you of something rather silly, namely that the infinite sum
\[ 1-1+1-1+1-.... \]
is equal to $1/2.$
The video doesn’t dwell long on this and seems to imply it’s obvious. But let’s look at it a little closer to see if it makes sense at all. Suppose that the sum $1-1+1-1+1-1....$ has a finite value and call it $Z$. Adding $Z$ to itself you get the infinite sum
\[ Z+Z = 1-1+1-1+1-....+1-1+1-1+1-... . \]
But this is just the original sum, implying
\[ Z+Z=2Z = Z. \]
Since $Z=1/2,$ it follows that $1/2=1,$ which is nonsense. So the assertion that the infinite sum $1-1+1-1+1-....$ can be taken to equal 1/2 is not correct. In fact, you can derive all sorts of results messing around with infinite sums that diverge. It’s a trick!
The physics
But how did this curious, wrong result make it into a physics textbook, as shown in the video? Here is where things really get interesting. Suppose you take two conducting metallic plates and arrange them in a vacuum so that they are parallel to each other. According to classical physics, there shouldn't be any net force acting between the two plates. (Figure: illustration of the Casimir effect.)
But classical physics doesn't reckon with the weird effects you see when you look at the world at very small scales. To do that, you need quantum physics, which tells us many very strange things. One of them is that the vacuum isn't empty, but seething with activity. So-called virtual particles pop in and out of existence all the time. This activity gives a so-called zero point energy: the lowest energy something can have is never zero. When you try to calculate the total energy density between the two plates using the mathematics of quantum physics, you get the infinite sum
\[ 1 + 8 + 27 + 64 +... . \]
This infinite sum is also what you get when you plug the value $x=-3$ into the Euler zeta function:
$S(-3) = 1 + 1/2^{-3} + 1/3^{-3} + 1/4^{-3} + ... = 1+ 8 + 27 + 64 +... .$
That’s unfortunate, because the sum diverges (it does so even quicker than $S(-1)$), which would imply an infinite energy density. That’s obviously nonsense. But what if you cheekily assume that the infinite sum equals the Riemann zeta function, rather than the Euler zeta function, evaluated at $x=-3$? Well, then you get a finite energy density. That means there should be an attractive force between the metallic plates, which also seems ludicrous, since classical physics suggests there should be no force. But here’s the surprise. When physicists made the experiment they found that the force did exist — and it corresponded to an energy density exactly equal to $\zeta (-3)$! This surprising physical result is known as the Casimir effect, after the Dutch physicist Hendrik Casimir. Take a moment to take this in. Quantum physics says the energy density should be
\[ S(-3) = 1+8+27+64+... . \]
That’s nonsense, but experiments show that if you (wrongly) regard this sum as the zeta function $\zeta (x)$ evaluated at $x=-3$, you get the correct answer. So it seems that nature has followed the ideas we explained above. It extended the Euler zeta function to include values for $x$ that are less than 1, by cleverly subtracting infinity, and so came up with a finite value. That’s remarkable! The reason why we see $\zeta (-1)$ and $S(-1)$ in the Numberphile video and the physics textbook, rather than $\zeta (-3)$ and $S(-3),$ is that when you imagine the Casimir effect as happening in one dimension (along a line rather than in 3D), the energy density you calculate is $\zeta (-1)$ rather than $\zeta (-3)$. So why did the Numberphile people publicise this strange "result"? They certainly know about the analytic continuation that makes the function well-defined, but that was something that was a little too technical for their video. Knowing they had the analytic continuation method, which would make the final result OK, hidden in their back pocket, they went ahead with their sleight of hand. In doing so they got over a million hits and had the world talking about zeta functions and mathematics. For this they should be congratulated. The mathematics of zeta functions is fantastic and what we described here is just the start of a long list of amazing mathematical properties. In bringing mathematics and physics to the public we always have to make choices about what we leave out and what we explain. Where to draw that line is something we all have to leave to our consciences.
PerlMonks
perlfunc:redo
by gods
on Aug 24, 1999 at 22:42 UTC
redo
See the current Perl documentation for redo.
Here is our local, out-dated (pre-5.6) version:
redo - start this loop iteration over again
redo LABEL
redo
The redo command restarts the loop block without evaluating the conditional again. The continue block, if any, is not executed. If the LABEL is omitted, the command refers to the innermost enclosing loop. This command is normally used by programs that want to lie to themselves about what was just input:
# a simpleminded Pascal comment stripper
# (warning: assumes no { or } in strings)
LINE: while (<STDIN>) {
while (s|({.*}.*){.*}|$1 |) {}
s|{.*}| |;
if (s|{.*| |) {
$front = $_;
while (<STDIN>) {
if (/}/) { # end of comment?
s|^|$front\{|;
redo LINE;
}
}
}
print;
}
See also continue for an illustration of how last, next, and redo work.
PHP guestbook CRUD (create, read, update, delete) operations: a PHP example
Project structure:
(screenshot)
Add page (note: we focus on the operations here; no real effort went into the page styling, thanks for your understanding):
(screenshot)
List page:
(screenshot)
Edit page:
(screenshot)
SQL needed for the project:
The code is as follows:
create database form;
use form;
CREATE TABLE `message` (
`id` tinyint(1) NOT NULL auto_increment,
`user` varchar(25) NOT NULL,
`title` varchar(50) NOT NULL,
`content` tinytext NOT NULL,
`lastdate` date NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk AUTO_INCREMENT=1 ;
conn.php
The code is as follows:
<?php
$conn = @ mysql_connect("localhost", "root", "") or die("Database connection error");
mysql_select_db("form", $conn);
mysql_query("set names 'gbk'");
?>
add.php
The code is as follows:
<?php
include 'conn.php';
if($_POST['submit']){
$sql="INSERT INTO message(id,user,title,content,lastdate) VALUES (NULL, '$_POST[user]', '$_POST[title]', '$_POST[content]', now())";
mysql_query($sql);
//page redirect, implemented with JavaScript
$url = "list.php";
echo "<script>window.location.href='$url';</script>";
}
?>
User:
Title:
Content:
list.php
The code is as follows:
<?php
include 'conn.php';
?>
<?php
echo " Continue adding ";
?>
<?php $sql="select * from message order by id"; $query=mysql_query($sql); while ($row=mysql_fetch_array($query)){ ?>
Title: User: Edit | Delete
Content:
Posted on:
<?php } ?>
delete.php
The code is as follows:
<?php
include 'conn.php';
$id = $_GET['id'];
$query="delete from message where id=".$id;
mysql_query($query);
?>
<?php
//page redirect, implemented with JavaScript
$url = "list.php";
echo "<script>window.location.href='$url';</script>";
?>
preEdit.php
The code is as follows:
<?php
include 'conn.php';
$id=$_GET['id'];
$query="SELECT * FROM message WHERE id =".$id;
$result=mysql_query($query);
while ($rs=mysql_fetch_array($result)){
?>
User:
Title:
Content: <?=$rs[content]?>
<?php } ?>
postEdit.php
The code is as follows:
<?php
include 'conn.php';
$query="update message set user='$_POST[user]',title='$_POST[title]',content='$_POST[content]' where id='$_POST[id]'";
mysql_query($query);
?>
<?php
//page redirect, implemented with JavaScript
$url = "list.php";
echo "<script>window.location.href='$url';</script>";
?>
Movetree Syntax
Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
MoveTree uses the following syntax:
movetree {/start | /continue | /check} /s SrcDSA /d DstDSA /sdn SrcDN /ddn DstDN [/u [Domain\]UserName /p Password] [/verbose]
Parameters
/start
Starts a MoveTree operation. This command includes a /check operation by default. MoveTree tries to continue the operation until it is complete. If there is a network fault or if the destination domain controller becomes unavailable, then MoveTree pauses the operation. If an operation has been paused, it can be continued by using the /continue command.
/continue
Continues the execution of a previously paused or failed MoveTree operation. This allows the MoveTree operation to continue even if a network fault or a domain controller error has interrupted the initial operation. Specifying /sdn SrcDN is optional for this command.
/check
Performs a test run of the MoveTree operation checking the whole tree without moving any objects. This enables the administrator to determine if there is sufficient disk space on the destination server, if there are any conflicts with object names or if there are any objects that could not be moved (such as Domain Local or Global groups). The administrator can then take remedial action before performing the actual move.
The /check command returns an error if any of the following conditions are met:
• The user does not have the necessary permissions to create objects in the destination container.
• The destination server does not have sufficient disk space to continue the operation.
• A relative distinguished name conflict exists on the destination server.
• There is a samAccountName conflict for any object that would be moved.
• Any objects cannot be moved because they are built-in accounts, or they are either a Domain Local or a Global group.
• Any computer objects would be moved. To move computer accounts and join the computers to the domain, use Netdom, a Windows Support Tool.
/s SrcDSA
Specifies the fully qualified primary DNS name of the source server in the domain from which the objects are being moved (for example, server1.marketing.microsoft.com). Required for all MoveTree commands.
/d DstDSA
Specifies the fully qualified primary DNS name of the destination server in the domain to which the objects are being moved (for example, server2.sales.microsoft.com). Required for all MoveTree commands.
/sdn SrcDN
Specifies the distinguished name of the source subtree (the object being moved) (for example, OU=Promotions,DC=Marketing,DC=Microsoft,DC=Com). Required for the /start, /startnocheck, and /check commands; optional for /continue.
/ddn DstDN
Specifies the distinguished name of the destination subtree (to which the object is being moved) (for example, OU=Promotions,DC=Sales,DC=Microsoft,DC=Com). Required for all MoveTree commands.
/u [Domain\]UserName /p Password
Runs MoveTree under the credentials of an alternate user name and password. Optionally, Domain can be specified as well. If these optional arguments are not provided, MoveTree uses the credentials of the currently logged-on user.
/verbose
Runs MoveTree in verbose mode, which displays more details about the operation as it runs. Optional.
/? or /help
Displays this information on a command-line syntax screen.
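As an illustration only (the server names and OUs reuse the examples from the parameter descriptions above; the credentials are placeholders), a dry run followed by the actual move might look like:

```
movetree /check /s server1.marketing.microsoft.com /d server2.sales.microsoft.com /sdn OU=Promotions,DC=Marketing,DC=Microsoft,DC=Com /ddn OU=Promotions,DC=Sales,DC=Microsoft,DC=Com /u Marketing\Administrator /p Password123 /verbose

movetree /start /s server1.marketing.microsoft.com /d server2.sales.microsoft.com /sdn OU=Promotions,DC=Marketing,DC=Microsoft,DC=Com /ddn OU=Promotions,DC=Sales,DC=Microsoft,DC=Com /u Marketing\Administrator /p Password123
```

If the /start operation pauses because of a network fault, rerunning with /continue (and the same destination arguments) resumes it.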
Seamless Data Integration: AWS Glue Crawlers with Snowflake
Introduction
AWS Glue Crawlers are a popular way to scan data in the background for Data Lake customers who need to discover petabytes of data, so you can concentrate on using the data to make more data-driven decisions. If you have data in a data warehouse like Snowflake, you might also want to be able to discover it there and combine it with data from Data Lakes to gain insights. With the addition of Snowflake support in AWS Glue Crawlers, it is now more straightforward to keep up with Snowflake schema updates and extract valuable insights.
The coding options let developers build in their favorite languages while retaining tight control over integration processes and structures. The difficulty has been that hand-coded options are frequently more intricate and expensive to maintain.
Developers can now design and maintain their data preparation and loading processes more quickly, with generated code that is flexible, reusable, and portable, without having to acquire, set up, or maintain infrastructure when connecting Snowflake to AWS, now that AWS Glue Crawlers support Snowflake.
Steps to Connect Snowflake using AWS Glue Crawler
Step 1 – Create an AWS Glue connection to Snowflake.
The following screenshot shows the configuration used to create a connection to the Snowflake cluster.
(screenshot)
The initial steps to create an AWS Glue Crawler are the same as usual. The following are the changes needed to integrate with Snowflake.
You can establish and schedule an AWS Glue Crawler with a JDBC URL and credentials from AWS Secrets Manager to crawl a Snowflake database. Specify the configuration option, whether you want the Crawler to crawl the entire database or just the tables you want by adding the schema or table path and excluding patterns. The AWS Glue Data Catalog’s Snowflake tables, external tables, views, and materialized views are just a few of the data the Crawler examines and catalogs with each pass. The Crawler extracts data from Snowflake columns with non-Hive compatible types, like geography or geometry, and makes it accessible in the Data Catalog.
Step 2 – Choose Add a JDBC data source
(screenshot)
Once the Crawler is created and run, we can go to advanced properties and table properties. We can see the highlighted portion where the classification is a snowflake, and the typeOfData is view.
(screenshot)
Any data warehousing project must include the extraction, transformation, and load (ETL) process. Customers also benefit from the alternate extraction, load, and transformation (ELT) method, where data processing is pushed to the database, thanks to advancements in cloud data warehouse designs.
The argument over whether to employ a hand-coded method or one of the several ETL or ELT data integration tools remains with either strategy. While both have benefits, and some may opt for a “one or the other” strategy, many organizations use hand coding and a data integration tool.
Benefits of AWS Glue with Snowflake
Now that Spark clusters, servers, and the usual continuous maintenance for these systems are no longer necessary, Snowflake users can easily manage their programmatic data integration operations. Snowflake’s data warehouse as a service is readily integrated with AWS Glue’s fully controlled environment. Customers can now manage their data intake and transformation pipelines with greater ease and flexibility due to these two technologies working together.
Customers who use AWS Glue and Snowflake gain access to the query pushdown feature of Snowflake, which automatically pushes Spark workloads that have been converted to SQL into Snowflake. Customers don’t need to worry about improving Spark performance; they can concentrate on building their code and instrumenting their pipelines. Customers may benefit from optimal ELT processing that is affordable, simple to use, and easy to maintain with the help of AWS Glue and Snowflake.
Conclusion
Getting started and managing your programmatic data integration procedures with AWS Glue and Snowflake is simple. AWS Glue can be used alone or in conjunction with a data integration solution without significantly increasing overhead. This method optimizes time and cost for genuine ELT processing with native query pushdown through the Snowflake Spark connector. Customers get a fully managed, optimized platform with AWS Glue and Snowflake to handle various data integration needs.
About CloudThat
CloudThat is also the official AWS (Amazon Web Services) Advanced Consulting Partner and Training partner and Microsoft gold partner, helping people develop knowledge of the cloud and help their businesses aim for higher goals using best in industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.
Drop a query if you have any questions regarding AWS Glue, Snowflake and I will get back to you quickly.
To get started, go through our Consultancy page and Managed Services Package that is CloudThat’s offerings.
FAQs
1. What does it mean that AWS Glue Crawlers now support Snowflake?
ANS: – AWS Glue Crawlers can now discover and catalog metadata about data stored in Snowflake. This makes building ETL pipelines that move data between Snowflake and other data stores easier.
2. What are the benefits of using AWS Glue Crawlers with Snowflake?
ANS: – The benefits of using AWS Glue Crawlers with Snowflake include the faster discovery of data schema, simplified ETL pipeline creation, and better data governance.
3. Can I use AWS Glue Crawlers to move data between Snowflake and other data stores?
ANS: – Yes, you can use AWS Glue Crawlers to move data between Snowflake and other data stores. Once the Crawler has created the metadata tables, you can use AWS Glue Jobs to create ETL pipelines that move data between Snowflake and other data stores.
Known implementations: twisted.names.dns._OPTHeader, twisted.names.dns._OPTVariableOption, twisted.names.dns.Charstr, twisted.names.dns.Name, twisted.names.dns.Query, twisted.names.dns.Record_A, twisted.names.dns.Record_A6, twisted.names.dns.Record_AAAA, twisted.names.dns.Record_AFSDB, twisted.names.dns.Record_HINFO, twisted.names.dns.Record_MINFO, twisted.names.dns.Record_MX, twisted.names.dns.Record_NAPTR, twisted.names.dns.Record_NULL, twisted.names.dns.Record_RP, twisted.names.dns.Record_SOA, twisted.names.dns.Record_SRV, twisted.names.dns.Record_TXT, twisted.names.dns.Record_WKS, twisted.names.dns.RRHeader, twisted.names.dns.SimpleRecord, twisted.names.dns.UnknownRecord
Interface for something which can be encoded to and decoded from a file object.
Method encode Write a representation of this object to the given file object.
Method decode Reconstruct an object from data read from the given file object.
def encode(strio, compDict=None): (source)
Write a representation of this object to the given file object.
Parameters:
strio — The stream to which to write bytes (type: File-like object)
compDict — A dictionary of backreference addresses that have already been written to this stream and that may be used for compression (type: dict or None)
def decode(strio, length=None): (source)
Reconstruct an object from data read from the given file object.
Parameters:
strio — The stream from which bytes may be read (type: File-like object)
length — The number of bytes in this RDATA field. Most implementations can ignore this value. Only in the case of records similar to TXT, where the total length is in no way encoded in the data, is it necessary (type: int or None)
API Documentation for Twisted, generated by pydoctor at 2017-09-23 19:45:03.
Original topic:
Firmware Update Removed Notification Sounds?
(Topic created: 11-20-2021 08:40 PM)
WaitWhatNow
Astronaut
Galaxy S21
I went to change the notification sound for my Discord app and the option has vanished. Clicking on "Allow sounds and vibrations" will select the radio button but nothing more. Both Text and Phone support have told me that the option has been removed and that I need a 3rd party app to control it now. To be clear, my Signal app is still the sound I set it to for notifications, so it didn't default everything back, it just removed access. My girlfriend's phone is the same way.
WaitWhatNow
Astronaut
Galaxy S21
Found the answer, apparently you go into notification categories and select it from there, thanks
I think that talking about ais523 without acknowledging their contributions to esoteric language design is skipping over their most important work. I like computability theory, so their languages Three Star Programmer and The Waterfall Model are essential contributions, in my view.
This is the kind of inspired insanity I can get behind.
I played with this the last time it was posted. The thing that I hadn’t realised makes an efficient Fizzbuzz implementation hard is that 3 is not a factor of 10. If the game were to say Fizz on multiples of 2, rather than 3, then you could fill a buffer with the correct Fizz / Buzz / FizzBuzz permutations for 10, 100, 1000, or whatever power of 10 you wanted. You’d then write this to the terminal and then go and increment the next highest digit. There’s no division anywhere near your fast path, it’s purely memory + I/O.
Because Fizz is on multiples of 3, you can generate this pattern for any multiple of 15, but 15 is not a power of 10 or an integer factor of any power of 10, so you have to handle the corners. This probably means that a solution involving actually calculating the sequence (rather than precomputing it and just updating the indexes) is likely to be optimal.
This makes me think that it would be an interesting interview question to ask a candidate how changing the rules from 3 and 5 to 2 and 5 would change how they would approach building a high-performance FizzBuzz implementation.
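The precomputation idea above can be sketched in Python. With Fizz on multiples of 2 and Buzz on multiples of 5, divisibility depends only on the last decimal digit, so a 10-entry pattern covers every number and the fast path does no division (the function names are my own):

```python
# Precompute the Fizz/Buzz words for the ten possible last digits.
def make_pattern():
    pattern = []
    for d in range(10):
        words = ""
        if d % 2 == 0:
            words += "Fizz"
        if d % 5 == 0:
            words += "Buzz"
        pattern.append(words)
    return pattern

# FizzBuzz under the modified 2-and-5 rules: just index by the last digit.
def fizzbuzz_2_5(n):
    pattern = make_pattern()
    out = []
    for i in range(1, n + 1):
        w = pattern[i % 10]
        out.append(w if w else str(i))
    return out
```

With the standard 3-and-5 rules the pattern period is 15, which does not divide any power of 10, hence the corner-handling the comment describes.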
DARTS/Astro Query System User Guide
[ English | Japanese ]
How to use "SQL Search" and "ADQL Search"
In SQL Search and ADQL Search, you can query the database directly by entering SQL statements that specify the search conditions. You can create your own search conditions and/or sort the results in any order.
Basic syntax of SQL / ADQL
In Darts/Astro Query System, we adopt PostgreSQL for the RDBMS, as well as PgSphere as an extension. You can use any functions defined in these systems.
The ADQL standard, which is an extension of the SQL92 standard, is defined in an IVOA Recommendation. In ADQL Search, the input ADQL statements are converted into SQL statements, which are interpreted with PostgreSQL and PgSphere.
Here is the basic syntax of SQL and ADQL, where "{value1|value2}" means either of "value1" or "value2", and "[phrase]" means that the "phrase" may be omitted:
Syntax of SQL powered by PostgreSQL: SELECT {*|column1,column2,column3,...} FROM table_name [ WHERE conditions ] [ ORDER BY column1, column4, ... ] [ LIMIT {number|ALL} [ OFFSET number ] ] ;
Syntax of ADQL: SELECT [ TOP number ] {*|column1,column2,column3,...} FROM table_name [ WHERE conditions ] [ ORDER BY column1, column4, ... ] ;
SELECT clause
From the tables specified in the FROM clause, the database items (columns) to retrieve are specified in the SELECT clause, separated by commas. This is a required item. To specify all the items in the tables, use "*". Please refer to the following for the database items in all the tables.
FROM clause
Specify tables to search within. This is a required item.
WHERE clause
Specify search conditions. You can combine two or more search conditions using AND or OR, where AND has a higher priority than OR. If you want to give a higher priority to OR, use parentheses, ( ).
ORDER BY clause
Specify the database items by which the search results are ordered. The output is displayed in ascending order if ASC is put after the database item, or in descending order if DESC is put. The default is ascending order when ASC is omitted. If two or more database items separated by "," are specified, ties on the first item are broken by the second item, then the third, and so on.
LIMIT / TOP clause
LIMIT / TOP specifies the maximum number of search results, as well as the number of rows to skip. While this is an optional item, we recommend specifying a reasonable upper limit, because the browser can become very slow when a large amount of data is accidentally hit.
These specifications are not clearly established in standard SQL, so the implementation varies between database systems. In PostgreSQL, LIMIT is used, while TOP is adopted in ADQL. Accordingly, we use LIMIT in SQL Search and TOP in ADQL Search.
The clause in SQL Search is as follows;
• LIMIT {number|ALL} [ OFFSET {number} ]
OFFSET specifies the number of lines to skip, where "OFFSET 0" is identical to omitting OFFSET. When OFFSET is omitted, the number of rows specified with LIMIT is returned from the beginning. If both OFFSET and LIMIT are specified, the first lines up to the number specified with OFFSET are skipped, and results are output up to the number specified by LIMIT.
The clause in ADQL Search is as follows;
• TOP number
This returns the results from the beginning, up to the number specified.
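The LIMIT/OFFSET behaviour described above can be tried out with any SQL engine. Here is a small sketch using SQLite from Python (the DARTS backend is PostgreSQL, but these clauses behave the same way; the `obs` table and its values are made up):

```python
import sqlite3

# Build a throwaway table with ids 1..10.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (id INTEGER)")
conn.executemany("INSERT INTO obs VALUES (?)", [(i,) for i in range(1, 11)])

# Skip the first 3 rows, then return at most 4.
rows = conn.execute(
    "SELECT id FROM obs ORDER BY id LIMIT 4 OFFSET 3"
).fetchall()
```

`rows` comes back as [(4,), (5,), (6,), (7,)]: the first three rows are skipped and at most four are returned.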
Ending character " ; "
The ending character " ; " marks the end of an SQL statement. In the DARTS/Astro Query System, the ending character may be omitted, but you may not specify two or more SQL statements separated by " ; ". Please input a single SQL statement at a time.
Also, please refer to SQL tutorial and SQL general reference in DARTS/AKARI CAS page.
Examples and explanations of SQL / ADQL
In SQL Search and ADQL Search, we provide examples of typical query statements in SQL and ADQL. Clicking each of the "Call example", an exemplary SQL statement is put into the "SQL Query" or "ADQL Query" text box, and explanation of the statement is shown. You can edit the statement in the text box.
Search types Explanation and example
Ranges Search You can use the following operators to compare values:
• = ... equal to
• != ... not equal to
• > ... greater than
• < ... less than
• >= ... greater than or equal to
• <= ... less than or equal to
And, Or When you combine various conditions, you can use AND or OR:
• aaa AND bbb : condition aaa and bbb
• aaa OR bbb : condition aaa or bbb
Because AND has a higher priority than OR, if you want to give a higher priority to OR, use parentheses, ().
Multiple match When searching the data matching one of various values or not-matching any of the values, IN clause is convenient:
• column_x IN (aaa, bbb) : column_x value is either aaa or bbb.
• column_x NOT IN (aaa, bbb) : column_x value is neither aaa nor bbb.
Above statements are equivalent to the following:
• column_x = aaa OR column_x = bbb : column_x value is aaa or column_x value is bbb.
• column_x != aaa AND column_x != bbb : column_x value is not aaa and column_x value is not bbb
Pattern match For pattern matching, you may use LIKE operator:
• column_x LIKE 'a%' : character string starting with the character "a".
• column_x LIKE 'a_' : character string "a + any single character".
You can use wild cards:
• % ... Any letters of any number
• _ ... Any single letter
For example, if you input "10%", you can search data which "begin with 10". Also, if you input "M__", you can search data which "begin with M followed by any two characters", e.g. 'M31'.
If you want to treat '%' or '_' as a normal letter, please put "\" before the character, e.g. '10\%', 'STAR\_A'.
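The '%' and '_' wildcards behave the same way in most SQL engines. A quick sketch with SQLite from Python (the `objects` table and its values are made up):

```python
import sqlite3

# A tiny table of object names to match against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (name TEXT)")
conn.executemany("INSERT INTO objects VALUES (?)",
                 [("M31",), ("M101",), ("NGC224",)])

# 'M__' matches "M" followed by exactly two characters.
two_char = [r[0] for r in conn.execute(
    "SELECT name FROM objects WHERE name LIKE 'M__' ORDER BY name")]

# 'M%' matches anything starting with "M".
starts_m = [r[0] for r in conn.execute(
    "SELECT name FROM objects WHERE name LIKE 'M%' ORDER BY name")]
```

Here `two_char` contains only "M31", while `starts_m` contains both "M101" and "M31".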
Radial Search
In radial search, it is convenient to use functions of PgSphere to define geometrical shape or region on the celestial sphere:
• Point ... spoint(coordinateL, coordinateB)
• Line ... spath(spoint(), spoint(), ...)
• Circle ... scircle(spoint(), radius)
• Rectangle ... spoly(spoint(), spoint(), ...)
Following operators are available to compare geometrical shapes or regions:
• aaa @ bbb = '1' ... aaa is included in bbb
• aaa !@ bbb = '1' ... aaa is not included in bbb
• aaa && bbb = '1' ... aaa and bbb overlap each other
• aaa !&& bbb = '1' ... aaa and bbb do not overlap
You may use any coordinate systems adopted in our database. For example, if you want to search around the Galactic center, it is convenient to use Galactic coordinates ("_galactic_lon" or "_galactic_lat" at the ends of column names).
There are similar geometrical functions in ADQL Search:
• Point ... POINT('',coordinateL, coordinateB)
• Circle ... CIRCLE(POINT(), radius)
• Rectangle ... POLYGON(POINT(), POINT(), ...)
Following operators are available:
• CONTAINS(aaa, bbb) = 1 ... aaa is included in bbb
• CONTAINS(aaa, bbb) = 0 ... aaa is not included in bbb
• INTERSECTS(aaa,bbb) = 1 ... aaa and bbb overlap each other
• INTERSECTS(aaa, bbb) = 0 ... aaa and bbb do not overlap
Multiple Radial Search You can combine multiple radial search conditions using OR or AND operators.
Rectangular Search
You may carry out rectangular search using operators, >, <, >=, <=, etc.:
• column_x >= aaa AND column_x <= bbb :column_x value is larger than aaa and less than bbb.
• column_x < aaa OR column_x > bbb : column_x value is smaller than aaa or larger than bbb.
Equivalent expressions can be written using the BETWEEN syntax:
• column_x BETWEEN a AND b : data item column_x is between a and b.
• column_x NOT BETWEEN a AND b : data item column_x is not between a and b.
When you use the BETWEEN syntax, please be careful about whether the boundaries are inclusive or not.
To search for regions that overlap a specified rectangular region, such as observation fields, it is convenient to use the PgSphere functions, as in Radial Search. Please refer to the Radial Search section above.
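Conceptually, the radial (cone) search performed by scircle / CONTAINS is a test on the angular separation between two sky positions. A plain-Python sketch of that test using the haversine formula (illustrative only; in practice the work is done inside PgSphere on the database side):

```python
import math

def angular_sep(lon1, lat1, lon2, lat2):
    """Great-circle separation in degrees between two sky positions."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    # Haversine formula, numerically stable for small separations.
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def in_circle(lon, lat, center_lon, center_lat, radius_deg):
    """True if (lon, lat) falls inside the search cone."""
    return angular_sep(lon, lat, center_lon, center_lat) <= radius_deg
```

For example, a point 0.1 degrees away passes a 0.5-degree cone test, while a point 2 degrees away does not.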
Formats of the search results
You can choose either html or CSV for the output format of the search results. Please specify from the "Format" pull-down menu.
Showing the SQL statement used
If you click "Show the used SQL", the SQL statement used in the search is displayed as a pop-up. This SQL sentence may be copied into the "SQL query" text box of SQL Search.
Downloading the data
If you click "Download Wget script", you can obtain a Wget script to download the data your selected.
Last Modified: 22 April 2021
Date Tags python
On November 11th I'll be participating in the Southern California Regional ACM programming competition. This is my second time competing, as well as Adam's. One of our practice problems involved finding whether a wall blocks the path between two points. At the time, the only way I could think of doing this involved solving for the intersection, and then checking to make sure the intersection point is contained within the domain and range of both segments.
Upon looking at the solution I noticed whoever wrote it had a method called signedarea which I was determined to figure out as the solution was much more elegant than mine. After searching this morning I came across a simpler way (and this) to determine if two line segments intersect.
The solution involves determining if three points are listed in a counterclockwise order. So say you have three points A, B and C. If the slope of the line AB is less than the slope of the line AC then the three points are listed in a counterclockwise order.
This is equivalent to:
def ccw(A,B,C):
return (C.y-A.y)*(B.x-A.x) > (B.y-A.y)*(C.x-A.x)
You might be wondering how does this help? Think of two line segments AB, and CD. These intersect if and only if points A and B are separated by segment CD and points C and D are separated by segment AB. If points A and B are separated by segment CD then ACD and BCD should have opposite orientation meaning either ACD or BCD is counterclockwise but not both. Therefore calculating if two line segments AB and CD intersect is as follows:
def intersect(A,B,C,D):
return ccw(A,C,D) != ccw(B,C,D) and ccw(A,B,C) != ccw(A,B,D)
I’ve created a python script which demonstrates this algorithm.
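For a quick test of the functions above, assuming a simple Point type with x/y attributes (the post doesn't define one, so a namedtuple stands in):

```python
from collections import namedtuple

# Stand-in point type with the x/y attributes the functions expect.
Point = namedtuple("Point", ["x", "y"])

def ccw(A, B, C):
    return (C.y - A.y) * (B.x - A.x) > (B.y - A.y) * (C.x - A.x)

def intersect(A, B, C, D):
    return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, C) != ccw(A, B, D)

# The two diagonals of the unit square cross each other...
crossing = intersect(Point(0, 0), Point(1, 1), Point(0, 1), Point(1, 0))
# ...while two parallel horizontal segments do not.
parallel = intersect(Point(0, 0), Point(1, 0), Point(0, 1), Point(1, 1))
```

Here `crossing` is True and `parallel` is False.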
Luuk Wuijster - 1 month ago
PHP Question
Ajax doesn't work anymore, while it did before without changing anything
Okay,
I made a "news" site where you can post articles. And you can see them at the news page. I have a flag system, so you can flag it when you need to. When you click the flag button, a javascript confirm pops up asking you if you really want to flag that message. When you click confirm the id variable and article variable gets post to a php script that emails me with a link to delete it and the article.
But it doesn't work anymore after I changed my VPS from Ubuntu 16.04 to Debian 8.
It doesn't pass the data to the php script. And I just don't know why.
This is my javascript code:
<script>
function Confirm(id){
if (confirm("Weet je zeker dat je dit bericht wilt vlaggen?")){
var vid = '#' + id ;
var bericht = $(vid).html();
$.ajax({
type: "POST",
url: "flag.php",
data: {"bericht": bericht, "id": id },
success: function () {
location.href="flag.php";
}
});
}
}
</script>
And this is my PHP:
<?php
var_dump($_POST);
// this is just to test if the POST is empty or not. (its empty)
$id = $_POST["id"];
$link = "http://h2624478.stratoserver.net/site/News/del.php?id=".$id."&&secure=S93ja432481Sjefdan23JS23asa923jWISQ128S212g";
$bericht = $_POST["bericht"];
$to = "[email protected]";
$subject = "Flagged message";
$headers = "From: [email protected]";
$compleet = "Bericht: \r\n \r\n " . $bericht . "\r\n \r\n" . $link;
if(!empty($bericht)) {
if (mail($to, $subject, $compleet, $headers)) {
echo 'Het bericht is geflaged, en word beken.';
} else {
echo 'Er is wat mis gegaan, probeer opnieuw.';
}
} else {
echo 'error 2';
}
//header('Location: http://h2624478.stratoserver.net/site/News/');
?>
If you want to see the site in action:
http://h2624478.stratoserver.net/site/News/
It's all in Dutch, but you should be able to understand it (at least I hope so).
Answer
Adding dataType: 'json' to your jQuery ajax would solve the problem.
Try modifying the error function in jQuery to this
error: function (jqXHR, exception) {
var msg = '';
if (jqXHR.status === 0) {
msg = 'Not connect.\n Verify Network.';
} else if (jqXHR.status == 404) {
msg = 'Requested page not found. [404]';
} else if (jqXHR.status == 500) {
msg = 'Internal Server Error [500].';
} else if (exception === 'parsererror') {
msg = 'Requested JSON parse failed.';
} else if (exception === 'timeout') {
msg = 'Time out error.';
} else if (exception === 'abort') {
msg = 'Ajax request aborted.';
} else {
msg = 'Uncaught Error.\n' + jqXHR.responseText;
}
$('#post').html(msg);
}
This functions shall help you to debug the problem.
You may want to send the response in JSON format.
e.g.
if (mail($to, $subject, $compleet, $headers)) {
echo json_encode(array('success'=>true,'msg'=>'Het bericht is geflaged, en word beken.'));
} else {
echo json_encode(array('success'=>false,'msg'=>'Er is wat mis gegaan, probeer opnieuw.'));
}
Jira integration
Authentication
Bugsee supports both Basic Authentication (login/password) and OAuth when integrating with Jira. Basic is much easier to set up: unlike OAuth, it doesn't require you to be a Jira administrator to configure it on the Jira end. It might, however, be considered less secure. In either case we strongly advise creating a special user in your Jira with limited scope and access.
Setting up OAuth
Navigate to Administration area of your JIRA dashboard and open "Application links" in "Application" section:
Administration area
Put Bugsee URL into "Application" field and click "Create new link". Use the following URL:
https://app.bugsee.com
After clicking "Create new link" button JIRA will bring you the following dialog:
Configure application URL
Ignore the warning and click "Continue".
JIRA will bring you next dialog:
Link applications
Fill application name field with the value that uniquely identifies Bugsee integration. Also don't forget to check "Create incoming link". When done click "Continue".
Finally, you will see the following dialog:
Link application keys
Supply Bugsee for both Consumer Name and Consumer Key and paste the following into public key:
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCm9MCEVvMRwTGC+9lNrmWWhD03
UnSIPsxrLVVak700hZaV+hUgM8d7vNxx+C5640eFqHNOaba5FuveXr7JTXM9lCDf
te4pqldj59/lELQjjS9Y6uRk33TlJfJhDrUncNjMmYU0oviciLf29bobRDVa9aYb
VLy6zImCFbJfPz1qbwIDAQAB
-----END PUBLIC KEY-----
Once Jira end is ready to accept OAuth calls, start Bugsee integration wizard, chose OAuth and enter your Jira URL. You will be redirected to Jira and asked to grant Bugsee permissions:
Grant permissions
After completing this step, another screen will appear and will present you with a verification code:
Verification code
Copy the code, close the window and enter the code into Bugsee wizard to continue.
Custom recipes
Jira is very customizable: fields can be made mandatory, new fields can be added, and issue types, priorities and other default types can be changed. Bugsee can accommodate all these customizations with the help of custom recipes. This section provides a few examples of using custom recipes specifically with Jira. For a basic introduction, refer to the custom recipe documentation.
Setting components field
function fields(context) {
// ....
return {
// ...
custom: {
// This example sets component by its ID
components: [{id: "12345"}]
}
};
}
Setting default priority for crashes
function fields(context) {
const issue = context.issue;
return {
// ...
custom: {
// Lets override the priority only for crashes and leave it untouched otherwise
priority: (issue.type == 'crash') ? { id: "12345" } : undefined
}
};
}
Changing issue type for crashes
function fields(context) {
const issue = context.issue;
return {
// ...
custom: {
// Lets override the issuetype only for crashes and leave it untouched otherwise
issuetype: (issue.type == 'crash') ? { id: "12345" } : undefined
}
};
}
Changing default assignee for an issue
function fields(context) {
const issue = context.issue;
return {
// ...
custom: {
// All the bugs will go to 'joe' and all the crashes to 'jane'
assignee: (issue.type === 'crash') ? { name: "jane" } : { name: "joe" }
}
};
}
Setting version for an issue
function fields(context) {
const issue = context.issue;
const app = context.app;
return {
// ...
custom: {
// We use default Jira versions field, which is an array of available version in jira.
// Important: This assumes the version with that name is already created in Jira prior to first issue
// being reported, otherwise integration will break.
versions : [
{
name : app.version
}
],
// To store the build number, we have a custom field created in Jira, so lets store it there
customfield_10308 : app.build
}
};
}
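Setting labels for an issue
The same pattern extends to other standard Jira fields. As a sketch (not taken from the Bugsee docs), the recipe below fills Jira's standard labels field, which is an array of strings; the 'from-bugsee' label name is made up, so adjust it to your project:

```javascript
function fields(context) {
    const issue = context.issue;
    return {
        // ...
        custom: {
            // Tag every created issue so it is easy to filter in Jira.
            // 'from-bugsee' is an illustrative label name.
            labels: ["from-bugsee", issue.type]
        }
    };
}
```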
Full Name Validation Laravel With Code Examples
In this lesson we will work through the Full Name Validation Laravel problem. This is demonstrated in the code that follows.
'Fullname'=> 'required|regex:/^[\pL\s]+$/u'
We have demonstrated, with a plethora of illustrative examples, how to tackle the Full Name Validation Laravel problem.
How do you validate exact words in laravel?
To validate against exact words you can make use of the Rule::in method available in Laravel. Using the Rule::in method, the value has to match one of the provided values, otherwise validation will fail.
How many types of validation are there in laravel?
Each form request generated by Laravel has two methods: authorize and rules .
How do I create a custom validation rule in laravel?
Laravel 9 - Simple Custom Validation Rules Example
How do you validate first name and last name in HTML?
Here, you learn to validate a registration form that has the First Name, Last Name, Email Address, Mobile Number, Password & Confirm Password.1. Create an HTML Form.
What does -> Validate do Laravel?
It validates the incoming data. By default, base controller class uses a ValidatesRequests trait which provides a convenient method to validate incoming HTTP requests with a variety of powerful validation rules.
How do I count words in Laravel?
The wordCount() method is one of many methods Laravel uses to manipulate strings. The wordCount() method counts the number of words in a string.
What is custom validation in laravel?
Custom Validation Rule Using Closures The function in it is getting 3 values: attribute, value, and fail. The attribute is the field for which the validation is happening. The value corresponds to the actual value of the said object and failure is the callback method that would be executed once the validation fails.01-Mar-2021
What is bail in laravel validation?
you can easily use bail validation in laravel 6, laravel 7, and laravel 8. If you added more than one validation on your field like required, integer, min and max then if the first is fail then the other should stop to display an error message. right now by default, it prints others too.
What is male validation mean?
What is male validation? In a nutshell, male validation is where women value the opinions – especially praise and sexual attention – of men more than they respect their perspectives or those of other women or non-binary people.02-Jun-2021
How do you write a validation controller?
The first step to enable validation in a controller is to add @Valid to a method parameter that is created using model binding. In lieu of explicit validation error handling (which we will cover below), Spring Boot throws an exception if validation fails for the new object.
Decimal to any other Number System converter. [Please check this code for required correction]
+1
Decimal to any other Number System converter.[Please check this code for required correction]
1)Please check this code for various set of inputs because I feel there are many loopholes in it and I am not able to figure it out. 2) Suggest any shorter and better method possible. 3) how to write code for base greater than 16 https://code.sololearn.com/cfo6jy6xnCb2/?ref=app
3/19/2022 5:58:34 PM
Md Saif ali
6 Answers
+3
I'm adding the answer for questions 1 and 2 here, as my code example above for answer 3 became too long.
2. Shorter method:
_________________________
x = input("your number")
b = input("base system")
letterList = ["A", "B", "C", "D", "E", "F"]
result = ""
while (x > 0) {
    i = Math.mod(x, b)
    x = (x - i) / b
    if (i > 9) {
        i = letterList[i - 10]
    }
    result = str(i) + result
}
________________
1. Avoid loopholes:
To avoid loopholes, make sure that: 0 < Base < 10 + letterList.length (see letterList in the code above)
+2
lior bakalo thanks
+2
Md Saif ali You asked: lior bakal 1. What if I run out of alphabets i.e after Z.. 3. I didn't get ur point 2 "for infinite base..." Answer: Infinite base is for when you run out of alphabet. You then need to present the numbers is your new (huge) base as list. For example in the base of 256 the number 300 would be 1, 44.
+1
3. Use a list containing the letters in order. Example code:
_________________________
x = input("your number")
b = input("base system")
letterList = ["A", "B", "C", "D", ..., "Z"]
result = ""
while (x > 0) {
    i = Math.mod(x, b)
    x = (x - i) / b
    if (i > 9) {
        i = letterList[i - 10]
    }
    result = str(i) + result
}
________________
Please note:
3.a) Limited base value: this code is limited to some maximum base, because at some point we run out of letters, though you can always extend letterList to make it larger. Maximum base = 10 + letterList.length.
3.b) For an infinite base: to support a base of any size, separate each value of the result by a comma. For hex base the result would look like: 43 = [2, 11] // in base 16. Then you can even make base 256 (such as used for RGB): 300 = [1, 44] // in base 256.
Hope it was helpful. Write me back if you have questions.
+1
lior bakalo 1. What if I run out of alphabets i.e after Z. 2. How to deal with negative as well as floating numbers conversion. 3. I didn't get ur point 2 "for infinite base..."
+1
Md Saif ali You also asked:
2. How to deal with negative as well as floating-point number conversion?
Answer: This is indeed not addressed in my previous code. We want to extract an integer digit from the float on each step, then deal with it as before. Here's how, assuming we have a number x, 0 < x < 1, in the base system "b". You can use this code to extend my previous code:
_________________________
x = input("your float, zero to one")
b = input("base system")
letterList = ["A", "B", "C", "D", ..., "Z"]
result = "0."
while (x > 0) {
    x = x * b
    i = x - Math.mod(x, 1)
    x = x - i
    if (i > 9) {
        i = letterList[i - 10]
    }
    result = result + str(i)
}
____________________
Notes on changes from the previous integer code:
A) x is multiplied by the base "b" each iteration to make it bigger than 1 but smaller than "b".
B) We subtract the fractional part Math.mod(x, 1) from x so that i remains an integer.
C) We append str(i) on the right of "result" (rather than the left, as done with integer input).
For negative values, add "-" to the left of "result".
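Putting the thread together, here is a cleaned-up, runnable Python version of the repeated-division approach (my own sketch, not the original poster's code; digits beyond 9 come from a letter string, so the base is capped by its length):

```python
def to_base(x, b):
    """Convert a non-negative integer x to a string in base b (2 <= b <= 36)."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    if not 2 <= b <= len(digits):
        raise ValueError("unsupported base")
    if x == 0:
        return "0"
    result = ""
    while x > 0:
        # divmod gives the quotient and the current digit in one step.
        x, r = divmod(x, b)
        result = digits[r] + result
    return result
```

For example, to_base(255, 16) gives "FF" and to_base(10, 2) gives "1010".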
Monday, March 8, 2021
DICAS 3278 - WINDEV WEBDEV MOBILE - HOW TO SPELL OUT A VALUE IN WORDS - QUESTIONS AND ANSWERS - 35
https://youtu.be/IOONyjHdEmM
Good morning, good afternoon, good evening.
Send an email asking to receive all video notifications.
Become a member of the channel.
This video premieres at 15:15 on 08/03/2021.
TOPICS
email: [email protected]
We will create a global procedure, valor_por_Extenso.
Since the routine is long, I will explain the code in parts.
We will create an extenso window.
We will create a currency field,
a multiline text field,
and a button to call the extenso routine.
Then we will run it.
// Valor Extenso code
// call the extenso routine
edt_valor_Extenso=valor_por_Extenso(edT_valor)
//
PROCEDURE valor_por_Extenso(receber_valor is currency)
unidade_array_19 is array of 19 string
unidade_array_19[1] = "UM "
unidade_array_19[2] = "DOIS "
unidade_array_19[3] = "TRÊS "
unidade_array_19[4] = "QUATRO "
unidade_array_19[5] = "CINCO "
unidade_array_19[6] = "SEIS "
unidade_array_19[7] = "SETE "
unidade_array_19[8] = "OITO "
unidade_array_19[9] = "NOVE "
unidade_array_19[10] = "DEZ "
unidade_array_19[11] = "ONZE "
unidade_array_19[12] = "DOZE "
unidade_array_19[13] = "TREZE "
unidade_array_19[14] = "QUATORZE "
unidade_array_19[15] = "QUINZE "
unidade_array_19[16] = "DEZESSEIS "
unidade_array_19[17] = "DEZESSETE "
unidade_array_19[18] = "DEZOITO "
unidade_array_19[19] = "DEZENOVE "
dezena_array_9 is array of 9 string
dezena_array_9[1] = "DEZ "
dezena_array_9[2] = "VINTE "
dezena_array_9[3] = "TRINTA "
dezena_array_9[4] = "QUARENTA "
dezena_array_9[5] = "CINQUENTA "
dezena_array_9[6] = "SESSENTA "
dezena_array_9[7] = "SETENTA "
dezena_array_9[8] = "OITENTA "
dezena_array_9[9] = "NOVENTA "
centena_array_9 is array of 9 string
centena_array_9[1] = "CENTO "
centena_array_9[2] = "DUZENTOS "
centena_array_9[3] = "TREZENTOS "
centena_array_9[4] = "QUATROCENTOS "
centena_array_9[5] = "QUINHENTOS "
centena_array_9[6] = "SEISCENTOS "
centena_array_9[7] = "SETECENTOS "
centena_array_9[8] = "OITOCENTOS "
centena_array_9[9] = "NOVECENTOS "
valor_string is string = NumToString(receber_valor, "012,2f")
//123456789012
//000000153,13
grupo_array_4 is array of 4 string
grupo_array_4[1] = Middle(valor_string, 1, 3) // 000
grupo_array_4[2] = Middle(valor_string, 4, 3) // 000
grupo_array_4[3] = Middle(valor_string, 7, 3) // 153
grupo_array_4[4] = "0" + Middle(valor_string, 11, 2) // 13
Contador_int is int
Tamanho_int is int
parte_string is string
texto_array_4 is array of 4 string
Final_string is string
FOR Contador_int = 1 TO 4
parte_string = grupo_array_4[Contador_int]
Tamanho_int = 0
IF Val(parte_string)>= 1 AND Val(parte_string)<10 THEN
Tamanho_int = 1
END
IF Val(parte_string)>=10 AND Val(parte_string)<100 THEN
Tamanho_int = 2
END
IF Val(parte_string)>=100 AND Val(parte_string)<1000 THEN
Tamanho_int = 3
END
IF Tamanho_int = 3 THEN
IF Right(parte_string, 2) <> "00" THEN
texto_array_4[Contador_int] = texto_array_4[Contador_int] + centena_array_9[Left(parte_string, 1)] + "E "
Tamanho_int = 2
ELSE
IF Left(parte_string,1)="1" THEN
texto_array_4[Contador_int] = texto_array_4[Contador_int] + "CEM "
ELSE
texto_array_4[Contador_int] = texto_array_4[Contador_int] + centena_array_9[Left(parte_string, 1)]
END
END
END
IF Tamanho_int = 2 THEN
IF Val(Right(parte_string, 2)) < 20 THEN
texto_array_4[Contador_int] = texto_array_4[Contador_int] + unidade_array_19[Right(parte_string, 2)]
ELSE
texto_array_4[Contador_int] = texto_array_4[Contador_int] + dezena_array_9[Middle(parte_string, 2, 1)]
IF Right(parte_string, 1) <> "0" THEN
texto_array_4[Contador_int] = texto_array_4[Contador_int] + "E "
Tamanho_int = 1
END
END
END
IF Tamanho_int = 1 THEN
texto_array_4[Contador_int] = texto_array_4[Contador_int] + unidade_array_19[Right(parte_string, 1)]
END
END
IF (Val(grupo_array_4[1]) + Val(grupo_array_4[2] + Val(grupo_array_4[3])) = 0 AND Val(grupo_array_4[4]) <> 0) THEN
IF Val(grupo_array_4[4]) = 1 THEN
Final_string = texto_array_4[4] + "CENTAVO"
ELSE
Final_string = texto_array_4[4] + "CENTAVOS"
END
ELSE
Final_string = ""
IF Val(grupo_array_4[1])<>0 THEN
Final_string = Final_string + texto_array_4[1]
IF Val(grupo_array_4[1])>1 THEN
Final_string = Final_string + "MILHÕES "
ELSE
Final_string = Final_string + "MILHÃO "
END
END
IF Val(grupo_array_4[2] + grupo_array_4[3]) = 0 THEN
Final_string = Final_string + "DE "
ELSE
IF Val(grupo_array_4[2])<>0 THEN
Final_string = Final_string + texto_array_4[2]+"MIL "
END
END
Final_string = Final_string + texto_array_4[3] + "REAIS"
IF Val(grupo_array_4[4]) <> 0 THEN
Final_string = Final_string + " E " + texto_array_4[4]
IF Val(grupo_array_4[4])=1 THEN
Final_string = Final_string + " CENTAVO"
ELSE
Final_string = Final_string + " CENTAVOS"
END
END
END
RESULT Final_string
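For clarity, the grouping step at the start of the routine (NumToString with the "012,2f" mask followed by the Middle slices) can be sketched in Python. This is illustrative only; the WLanguage code above is the actual implementation:

```python
def split_groups(value):
    # Mirror NumToString(value, "012,2f"): 9 integer digits, a decimal
    # separator, and 2 decimal digits, zero-padded to 12 characters.
    s = f"{value:012.2f}"          # e.g. 153.13 -> "000000153.13"
    # Three 3-digit groups (millions, thousands, units) plus the cents,
    # which get a leading "0" so they can be treated like a 3-digit group.
    return [s[0:3], s[3:6], s[6:9], "0" + s[10:12]]
```

For 153.13 this yields ["000", "000", "153", "013"], matching the four grupo_array_4 slots the routine then spells out.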
How to check if a shortened link is safe or not
How to check if a shortened link is safe or not:
URL shortening services are really popular these days. They help in making short URLs to share on Twitter or other social media websites. But they have also increased security risks. Nowadays, cyber criminals use short URLs to hide their malicious links, and you cannot identify a malicious link just by looking at it. Most of the popular URL shortening services like Goo.gl or bit.ly have the ability to check and block malicious websites, but these services also fail sometimes, and other, less popular URL shortening services cannot detect malicious websites at all. So, there is always a risk of exposing yourself to various attacks by clicking on the link.
Today I am going to show you how to check if a shortened link is safe or not with online services.
1. www.trueurl.net
With trueurl.net, you can check short URLs and spam URL links. You also have the option to use the IE, Firefox, and Chrome extensions.
2. www.urluncover.com
URL Uncover is really simple and easy to use. Just visit the website and paste the short URL in the URL field. Click on the Uncover button to check the target URL and its trust level. After clicking the button, it lists the destination URL's title, description, and trust level. It uses McAfee SiteAdvisor to report the website's trust level, which helps you check whether the website is safe to use or not. It also shows a screenshot to give you an idea of the website.
3. CheckShortURL – Your shortened URL expander
CheckShortURL is an expand link facility: it allows you to retrieve the original URL from a shortened link before clicking on it and visiting the destination.
4. LongURL | The Universal Way to Expand Shortened URLs
Avoid phishing, malware, and viruses by examining short URLs before visiting them. Find out where links really take you.
5. URL X-ray: Find out where shortened URLs lead to without clicking
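All of these services do essentially the same thing: they read the redirect that a short URL returns instead of following it. As a hedged sketch of that core step, the Python helper below only inspects a response's status and header pairs, so it never visits the destination; wiring it to a real HEAD request (for example with `http.client`) is left to the reader.

```python
def redirect_target(status, headers):
    """Given the status code and (name, value) header pairs from a HEAD
    request to a short URL, return the destination from the Location
    header without visiting it, or None if it is not a redirect."""
    if status in (301, 302, 303, 307, 308):
        for name, value in headers:
            if name.lower() == "location":   # header names are case-insensitive
                return value
    return None
```

Once you have the destination URL, you can inspect it yourself before deciding to click.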
So be safe with short URLs.
Hope you like my post on how to check if a shortened link is safe or not. Please share it with others.
This article was contributed by Christine Tomas who works as a tech expert and aspiring writer at Howly.
Is your computer slow and unresponsive? If so, use these tips and solutions to help speed up your PC performance in Windows, specifically for Asus laptops.
In this article, we will discuss four tips and tools that can help you boost the speed and efficiency of your PC. Earlier we covered some common Asus laptop problems and solutions to help you fix them. Whether you're running low on storage space or have overheating issues, we'll provide you with the knowledge and resources you need to get your computer running at its best in no time. From uninstalling unnecessary programs and apps to running Disk Cleanup and Disk Defragmenter and disabling startup programs, we'll cover it all.
So, if you’re ready to take control of your computer’s performance, keep reading to learn more!
1. Uninstalling Unnecessary Programs and Apps
To uninstall unnecessary programs and apps on a Windows computer, you can follow these steps:
1. Click on the Windows Start button, and then select “Control Panel“.
2. In the Control Panel, select “Programs and Features” or “Apps and Features” (depending on your version of Windows).
3. Find the program or app that you want to uninstall in the list and select it.
4. Click the “Uninstall” button or link, and then follow the prompts to remove the program or app from your computer.
5. Repeat steps 3-4 for any other programs or apps that you want to remove.
Alternatively, you can use the built-in Settings App on Windows 10 to uninstall apps.
1. Click on the Windows Start button and select the gear icon to open Settings.
2. Select “Apps“.
3. Find the app that you want to uninstall and select it.
4. Click the “Uninstall” button and follow the prompts to remove the app from your computer.
Note that some programs or apps may also have their own uninstaller programs that can be used to remove them. Check the program’s documentation or support website for more information. Also, be careful when uninstalling programs as you might remove something that is important for the system to run properly.
2. Clearing Out Temporary Files and Browser Cache
To clean out temporary files and browser cache on a Windows computer, you can follow these steps:
1. Open the File Explorer and navigate to the “C:” drive.
2. Click on the “View” tab and select “Options” or “Change folder and search options“.
3. Select the “View” tab and then scroll down to the “Hidden files and folders” section.
4. Select “Show hidden files, folders, and drives” and then click “OK“.
5. Open the folder named “Windows” and then open the folder named “Temp“.
6. Select all of the files and folders in the Temp folder and press the “Delete” key on your keyboard.
7. Repeat steps 1-6 for all users on the computer.
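If you want a quick estimate of how much space the Temp folder is holding before deleting anything, a small Python sketch can add it up. This only reads file sizes; nothing here deletes anything.

```python
import os
import tempfile

def temp_folder_size(path=None):
    """Total size in bytes of the files under the given folder
    (defaults to the user's temp folder), skipping locked files."""
    path = path or tempfile.gettempdir()
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or is locked; skip it
    return total
```

Running `temp_folder_size()` before and after the cleanup steps above shows how much space you actually reclaimed.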
To clear the browser cache, you can follow these steps for each browser:
1. Open the browser and click on the Menu icon (usually three horizontal lines or dots).
2. Select “Settings” or “Options“.
3. Look for a section labeled “Privacy” or “Privacy and security“.
4. Click on “Clear browsing data” or “Clear cache“.
5. Select the types of data you want to remove, such as “Cached images and files” and click on “Clear data“.
It is important to clear out your temporary files and browser cache regularly to free up disk space and improve the performance of your computer and browser. However, be aware that clearing the browser cache will also log you out of any website that you were previously logged in to.
3. Running Disk Cleanup and Disk Defragmenter
To run Disk Cleanup and Disk Defragmenter on a Windows computer, you can follow these steps:
1. Click on the Windows Start button, and then select “Computer” or “This PC“.
2. Right-click on the drive you want to clean up or defragment, and select “Properties“.
3. In the Properties window, select the “Tools” tab.
4. Under the “Optimize and Defragment Drive” section, click on “Optimize” to run Disk Defragmenter, or click on “Clean up system files” to run Disk Cleanup.
Alternatively, you can use the built-in Disk Cleanup and Disk Defragmenter tools in Windows:
1. Click on the Windows Start button and type “Disk Cleanup” or “Defragment” in the search box.
2. Select the “Disk Cleanup” or “Disk Defragmenter” option from the list of results.
3. Select the drive you want to clean up or defragment.
4. Follow the prompts to complete the process.
It is important to run Disk Cleanup and Disk Defragmenter regularly to free up disk space and improve the performance of your computer. Disk Cleanup will remove unnecessary files, such as temporary files, recycle bins, and other unnecessary files. Disk Defragmenter will reorganize the data on your hard drive to improve the performance of your computer.
Note that some newer versions of Windows, such as Windows 10, have a built-in automatic disk cleanup and defragmenting feature, so you may not need to run them manually.
4. Disabling Startup Programs
To disable startup programs on a Windows computer, you can follow these steps:
1. Click on the Windows Start button, and then type “msconfig” in the search box and press Enter.
2. In the System Configuration window, select the “Startup” tab.
3. In the Startup tab, you will see a list of all the programs that are set to start automatically when your computer boots up.
4. Uncheck the box next to the program you want to disable.
5. Click on “Apply” and then “OK” to save the changes.
Alternatively, you can use the Task Manager to disable startup programs:
1. Press “CTRL + ALT + DELETE” and select “Task Manager”.
2. In the Task Manager window, select the “Startup” tab.
3. Find the program you want to disable, select it and click on the “Disable” button.
Disabling unnecessary startup programs can help to improve the startup time of your computer and make your computer run faster. However, be aware that disabling some startup programs may affect the functionality of other programs or the operating system itself. So it’s important to be careful when disabling startup programs, especially if you are not familiar with the program. It’s always good to research the program before disabling it.
In conclusion, there are several tips and tools that you can use to improve the performance of your Windows computer. These include:
• Uninstalling unnecessary programs and apps.
• Cleaning out temporary files and browser cache.
• Running disk cleanup and disk defragmenter.
• Disabling startup programs.
By following these steps, you can free up disk space, improve the performance of your computer, and speed up the startup time. However, it’s important to be cautious when uninstalling programs or disabling startup programs, as it may affect the functionality of other programs or the operating system itself. Also, it’s important to regularly perform those steps to maintain the performance of your computer.
About the author
Christine Tomas is a tech expert, consultant, and aspiring writer at Howly. She writes for various news portals and thematic blogs for tech experts, which helps her stay at the heart of programming and technology news.
Asked by Alex, 4 months ago
PHP Question
PHP: echoing a constant with a variable name
How can I do that?
I have something like:
define($stuff.'_FOO', 'whatever');
echo $stuff.'_FOO';
and it doesn't work :(
I just want to echo the constant's value...
Answer
Check out constant().
In your case:
echo constant($stuff . '_FOO');
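For comparison, languages that expose their namespace as a dictionary can do the same dynamic-name lookup without a dedicated function. A rough Python analog of the accepted answer (the BAR/_FOO names here are purely illustrative):

```python
stuff = "BAR"
globals()[stuff + "_FOO"] = "whatever"   # plays the role of define($stuff.'_FOO', ...)
print(globals()[stuff + "_FOO"])         # plays the role of echo constant(...); prints "whatever"
```

The point in both languages is the same: the name of the constant is itself built at runtime, so you need an indirect lookup rather than a literal identifier.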
Using spectral decomposition, we can write any symmetric matrix as
$$\Sigma = Q \Lambda Q^{\top}$$
where $Q$ is orthonormal, and
$$\Lambda = \text{diag}(\lambda_1, ..., \lambda_p)$$
with $\lambda_1 \geq ... \geq \lambda_p \geq 0$.
An alternative parameterization can be made for the covariance matrix in terms of eigenvalues $\lambda_1,...,\lambda_p$ and $Q$ can be expressed using Euler angles in terms of $p(p-1)/2$ angles, $\theta_{ij}$, where $i = 1,2,...,p-1$ and $j = i, ..., p-1$.[1]
Can someone elaborate on this method such that given a function with p eigenvalues and $p(p-1)/2$ angles I can build a valid $\Sigma$
[1]: Hoffman, Raffenetti, Ruedenberg. "Generalization of Euler Angles to N‐Dimensional Orthogonal Matrices". J. Math. Phys. 13, 528 (1972)
• Yes, those are angles; I've edited the question. – Rohit Arora Jul 8 '15 at 13:59
• You have to specify a convention for what those angles mean. Even in $\mathbb{R}^3$ there are multiple conventions. If you want to use the convention in the paper you reference, then you will have to explain it here in your post. – whuber Jul 8 '15 at 17:57
Those angles are sufficient to define the eigenvectors, to within normalization. Therefore you can use them to determine the eigenvectors, which, after normalization, can be used to populate Q (there are two possibilities, differing by a factor of -1 in the signs of the elements of each column of Q, but they cancel out when multiplying Q and Q'). I presume the details are provided in the paper.
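To make the construction concrete in the simplest case: for $p = 2$ there is one angle and two eigenvalues. A pure-Python sketch (no linear-algebra library assumed) that builds a valid $\Sigma = Q \Lambda Q^{\top}$:

```python
import math

def sigma_from_angle(theta, lam1, lam2):
    """Build Sigma = Q diag(lam1, lam2) Q^T for p = 2, where Q is the
    rotation by theta, the single Euler angle needed in two dimensions."""
    c, s = math.cos(theta), math.sin(theta)
    q = [[c, -s], [s, c]]                       # orthonormal Q
    lam = [[lam1, 0.0], [0.0, lam2]]            # Lambda
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    qt = [[q[j][i] for j in range(2)] for i in range(2)]   # Q transpose
    return matmul(matmul(q, lam), qt)
```

The result is symmetric by construction, and its trace and determinant are $\lambda_1 + \lambda_2$ and $\lambda_1 \lambda_2$ for any angle; for general $p$ you would compose $p(p-1)/2$ such plane rotations to get $Q$.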
• Can you show your idea with an example? – TPArrow Jul 8 '15 at 14:39
• If p = 2, one angle is sufficient to define the orientation of the 1st eigenvector relative to the axes, and the 2nd eigenvector is orthogonal to that. Similarly, if p = 3, three angles plus orthogonality of the 3 eigenvectors are sufficient to define the orientation of all 3 eigenvectors. Etc. As I wrote, I presume the details are in the paper, which I don't have access to. – Mark L. Stone Jul 8 '15 at 14:52
How does the SIGN() function work in MariaDB?
The SIGN() function in MariaDB is a simple but useful function that returns the sign of a given numeric value. It determines whether the input value is positive, negative, or zero, and returns a corresponding integer value of 1, -1, or 0, respectively.
Syntax
The syntax for the MariaDB SIGN() function is as follows:
SIGN(X)
• X: The numeric value for which the sign is to be determined.
The function returns an integer value:
• If X is positive, it returns 1.
• If X is negative, it returns -1.
• If X is zero, it returns 0.
Example: Determining the sign of various numeric values
In this example, we’ll use the SIGN() function to determine the sign of different numeric values.
SELECT SIGN(42), SIGN(-3.14), SIGN(0);
The following is the output:
+----------+-------------+---------+
| SIGN(42) | SIGN(-3.14) | SIGN(0) |
+----------+-------------+---------+
| 1 | -1 | 0 |
+----------+-------------+---------+
The SIGN() function correctly identifies the sign of the positive value 42 as 1, the negative value -3.14 as -1, and the zero value 0 as 0.
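When post-processing query results in application code, SIGN()'s behavior is easy to mirror; a minimal Python equivalent:

```python
def sign(x):
    """Mirror MariaDB's SIGN(): 1 for positive, -1 for negative, 0 for zero."""
    return (x > 0) - (x < 0)
```

The boolean subtraction trick gives exactly the three integer outcomes the SQL function returns.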
The following are some functions related to the MariaDB SIGN() function:
• MariaDB ABS() function is used to return the absolute value of a numeric value.
• MariaDB CEIL() function is used to round a numeric value up to the nearest integer.
• MariaDB FLOOR() function is used to round a numeric value down to the nearest integer.
• MariaDB ROUND() function is used to round a numeric value to a specified number of decimal places.
• MariaDB TRUNCATE() function is used to truncate a numeric value to a specified number of decimal places.
Conclusion
The SIGN() function in MariaDB is a simple but powerful tool that can be used in various scenarios where you need to determine the sign of a numeric value. It can be particularly useful in conditional logic, data validation, and mathematical operations. While the SIGN() function itself is straightforward, it can be combined with other functions and conditional statements to perform more complex operations and calculations based on the sign of a value.
Why an animation of an object with Scale effect don't work?
• Hi C2 fellas!
An object with the Move To behavior and a scale effect (made with "Set size" using "lerp" and delta-time) has an animation that isn't working when I call it.
When I call this object's animation in an event, only the first frame is displayed, with a short and slow scale effect.
I tried to solve this in several ways, but I couldn't (perhaps because I'm already quite tired...).
Does someone have a solution for this?
Thank you in advance for your attention and collaboration.
• Hi, the Move To behavior is OK; it doesn't interfere with the object's animation. The scale effect, on the other hand, does.
I don't know how you are doing it, but if you are working with scale and animation at the same time, try to avoid changing the scale every tick. Use a "trigger once" condition every time you change the scale, and try not to change it in the middle of the animation. If the object is not being scaled, the animations should work properly.
Give it a try, and if you still have problems, post your .capx here so we can take a better look at what you're doing.
• Hi RayKi! Thanks for your collaboration! ^^
The object leaves from an one frame animation to a real "5 looped frames" animation long after the scale effect has been executed.
I put the scale effect in a group, that is activated at the start of layout. Then the effect isn't under the condition "Every tick".
Group Scale
Object: Set Size
lerp(Object.Width,194,1.5*dt)
lerp(Object.Height, 331,1.5*dt)
Addendum: Even maintaining the object's actual size with a boolean instance variable before the animation executes, the problem continues.
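As an aside on what that group computes: `lerp(current, target, rate*dt)` applied every tick is an exponential approach toward the target size. A hedged Python sketch of the same per-tick step, using the 194 and 1.5 values from the event above:

```python
def step_width(width, target=194.0, rate=1.5, dt=1 / 60):
    """One tick of lerp(width, target, rate*dt): move a fraction
    rate*dt of the remaining distance toward the target."""
    return width + (target - width) * rate * dt

w = 100.0
for _ in range(600):      # ~10 simulated seconds at 60 fps
    w = step_width(w)
```

Because the size is rewritten every tick, it can keep overriding the sizes an animation's frames try to set, which matches the behavior described in this thread.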
• I also tried to deactivate the group of the Scale effect, but not worked.
• Weird o-o
Try uploading your .capx then.
Sorry for not being able to help any further
• Thank you for your attention and help, ^^
I forgot to say that I'm using the R208 beta version of Construct 2.
Is this the cause? o.O
• I tried other things and... nothing.
• I partially solved the problem by switching SET SIZE to SET SCALE, since manipulating an object with one or more animations apparently only works well through the Scale function.
Now the problem is when I try to change the animation of the object before moving it to another position. The object behaves strangely, locking on a frame of the previous animation while trying to make the transition to the next one.
So bizarre...
Do TouchTunes Credits Expire? Unveiling the Truth Behind TouchTunes Credit Expiration
In the world of digital entertainment and jukebox services, TouchTunes has become a popular choice for music lovers. However, as users accumulate credits on their TouchTunes accounts, a common question arises – do these credits expire? This article aims to unveil the truth behind TouchTunes credit expiration, providing users with the information they need to make the most out of their digital music experience.
Understanding the TouchTunes Credit System
The TouchTunes credit system is a popular way for users to purchase and use credits on the TouchTunes digital jukebox platform. Understanding how this credit system works is essential for users to make the most of their experience.
When users purchase credits, they are essentially buying a virtual currency that can be used to play songs on any TouchTunes-compatible jukebox. Each song played deducts a specific number of credits from the user’s account. The cost per song varies based on factors such as location and popularity.
Unlike physical tokens or coins, TouchTunes credits are stored electronically in the user’s online account, making it easy to access and manage. Users can add more credits to their account at any time by purchasing them through the TouchTunes app or website. This ensures a seamless and hassle-free experience for the users.
Understanding the TouchTunes credit system is crucial for users to keep track of their credits and make informed choices while using the platform. By familiarizing themselves with the system, users can fully enjoy the TouchTunes experience without any unwanted surprises.
The Terms and Conditions of TouchTunes Credit Expiration
The Terms and Conditions of TouchTunes Credit Expiration play a crucial role in understanding how the credit system works and when the credits expire. When it comes to TouchTunes, each credit has an expiration date, which varies depending on the type of credit and the region.
Generally, promotional credits often have a shorter lifespan compared to the credits purchased by users. For instance, credits received as a part of a special offer or promotion may have a limited validity period, usually ranging from 30 to 60 days. Additionally, some credits may be tied to specific events or holidays and may expire shortly after the occasion.
To ensure transparency, TouchTunes clearly states the expiration dates of each credit in the details provided during the purchase or allocation process. Users are encouraged to review the Terms and Conditions to become familiar with the specific expiration period of their credits.
It is important for users to be aware of these terms to prevent any disappointment or frustration caused by losing unused credits. By staying informed about the expiration dates, users can plan their usage accordingly and take full advantage of their credits before they expire.
3. How Long do TouchTunes Credits Last?
TouchTunes credits have a specific validity period before they expire, and understanding this timeframe is crucial for users. The duration for which TouchTunes credits last depends on various factors, including the method of acquiring the credits and the specific terms and conditions of the establishment using the TouchTunes service.
Typically, TouchTunes credits acquired through purchases directly from the TouchTunes website or from authorized retailers have a standard expiration period of one year from the date of purchase. This means that users have a full twelve months to utilize their credits before they are forfeited.
However, it is important to note that in some cases, credits acquired through promotional offers or as part of specific campaigns may have a shorter validity period. Users should carefully read the terms and conditions provided by TouchTunes or the establishment where they obtained the credits to determine the exact expiration date.
To maximize the usage of their TouchTunes credits, users should keep track of their remaining balance and plan their usage accordingly. Being aware of the expiration date will help users make the most out of their credits and avoid losing any value due to expiration.
The Impact of TouchTunes Credit Expiration on Users
The expiration of TouchTunes credits can have a significant impact on users of the platform. Many users accumulate a large number of credits over time, whether through purchased credits or promotional offers. When these credits expire, it can be frustrating and disappointing for users who had plans to use them for future music plays or jukebox features.
One of the main impacts of credit expiration is the loss of value. Users may have spent their hard-earned money or put effort into earning credits, only to have them become worthless once they expire. This can lead to a sense of wasted resources and dissatisfaction with the TouchTunes credit system.
Additionally, the expiration of credits can disrupt the user experience. Users may have plans to enjoy a night out with friends or have a specific occasion in mind to use their credits. When these credits expire before they have a chance to use them, it can result in disappointment and a missed opportunity for enjoyment.
Overall, the impact of TouchTunes credit expiration on users can range from financial losses to frustration and missed opportunities. It is important for users to understand the expiration policies and take proactive steps to avoid losing their credits.
Tips to Avoid Losing TouchTunes Credits
Losing TouchTunes credits can be frustrating and disappointing, especially if you have spent a significant amount of money on them. However, there are several tips and tricks you can utilize to ensure you don’t lose your hard-earned credits.
1. Regularly check the expiration date: TouchTunes credits typically have an expiration date, so it is crucial to keep track of when they will expire. Take note of the date and set reminders beforehand to ensure you use or transfer the credits in time.
2. Utilize credits frequently: Instead of accumulating a large number of credits, try to use them regularly. By doing this, you decrease the chance of forgetting about them and letting them expire.
3. Share credits with friends: If you have a surplus of credits that you are unable to use before the expiration date, consider gifting them to a friend. TouchTunes allows for credit transfers, so this would ensure that the credits are put to good use.
4. Stay informed: Keep yourself updated on any changes or updates made by TouchTunes regarding credit expiration. By staying informed, you can adjust your usage accordingly and avoid any unnecessary losses.
By following these tips, you can prevent the frustration of losing TouchTunes credits and make the most of your music experience without worrying about expiration dates.
6. Reclaiming Expired TouchTunes Credits
Reclaiming expired TouchTunes credits can be a frustrating experience, but it is not entirely impossible. While TouchTunes has a policy of expiration for its credits, there are methods you can try to recover them.
Firstly, it is important to note that TouchTunes customer support should be your initial point of contact. They have a dedicated team to address customer concerns, including expired credits. Reach out to them through their website or helpline and provide all necessary details, such as the date and amount of the expired credits.
Sometimes, customer support may restore the expired credits as a one-time exception, especially if you have a long history with the platform or can provide valid reasons for the expiration. Remember to remain polite and explain your situation clearly.
Another option is to seek assistance from your local establishment where you used the TouchTunes service. Speak with the manager or owner, as they might have a better chance of liaising with TouchTunes on your behalf.
Lastly, if all else fails, consider sharing your experience on social media. While this may not guarantee the return of your credits, it can create awareness and put pressure on TouchTunes to address customer concerns regarding credit expiration.
Overall, while reclaiming expired TouchTunes credits may not always be successful, it is worth trying the available avenues and advocating for a fair resolution.
Exploring Alternatives to TouchTunes Credit Expiration
Many TouchTunes users may be frustrated with the credit expiration policy and seek alternatives that do not impose such limitations. Fortunately, there are several alternatives available for those who want to enjoy their music without the fear of losing their credits.
One popular alternative is the Jukevision app, which allows users to play music on their smartphones or tablets at participating venues. Jukevision offers a wide selection of songs and does not have a credit expiration policy, giving users the freedom to enjoy their music without any time constraints.
Another alternative is the AMI Music app, which provides a similar experience to TouchTunes. This app allows users to search and play songs at various locations without worrying about credit expiration. Furthermore, AMI Music also offers additional features such as personalized playlists and music recommendations.
In addition to these apps, some venues may offer their own music streaming platforms that do not have credit expiration policies. It’s worth checking with your local establishments to see if they have any alternative systems in place.
By exploring these alternatives, users can enjoy their music without the fear of losing credits due to expiration policies, providing a more seamless and enjoyable experience for music lovers.
FAQs
FAQ 1: Do TouchTunes credits have an expiration date?
Answer: No, TouchTunes credits do not expire. Unlike some other digital credits or gift cards, TouchTunes credits remain valid indefinitely until they are used on the TouchTunes app or jukebox.
FAQ 2: Can I transfer my TouchTunes credits to another account?
Answer: Unfortunately, it is not currently possible to transfer TouchTunes credits from one account to another. Each user’s credits are tied to their individual account and cannot be transferred or shared with other users.
FAQ 3: What happens to my TouchTunes credits if I delete the app?
Answer: Deleting the TouchTunes app does not impact your credits. Your credits are safely stored on the TouchTunes server linked to your account. When you reinstall the app or log in from another device, your credits will still be available for use.
FAQ 4: Are there any fees involved with using TouchTunes credits?
Answer: TouchTunes credits do not come with any extra fees. You can use your credits to play songs on TouchTunes jukeboxes or select music through the TouchTunes app without incurring any additional charges. The cost of song plays will be deducted from your credit balance, but there are no hidden fees or expiration-related costs.
Verdict
In conclusion, TouchTunes credits do expire after a certain period of inactivity on the user’s account. This expiration policy is set in place to encourage users to remain engaged with the TouchTunes platform and ensure that the credits are being utilized. It is important for TouchTunes users to be aware of this expiration policy and regularly check and use their credits to prevent them from expiring. By staying informed and active on the platform, users can fully maximize their TouchTunes experience and enjoy their earned credits without any unexpected expiration.
A long time ago I dreamed of a convenient (or maybe not so convenient, but at least standard) way to work with CRLs in PowerShell. The lazy .NET developers didn't bother with this and still don't, even though the task is in considerable demand. I made several attempts to tackle the problem on my own, but gaps in my knowledge kept me from getting there. Not long ago, however, as a practical exercise in ASN.1 I wrote this scary parser: Basic CRL parser for PowerShell. It isn't even a concept, just a workout of ASN.1 skills, and given how non-universal and painfully slow it is, it's embarrassing even to run it and show how it works. A week ago I remembered this topic again, decided to approach the question more thoroughly, and wrote code (with blackjack and hookers) that uses P/Invoke into unmanaged code, the same code that standard Windows tools use to work with CRL objects. I won't tell the story of how it was created or how it works inside, because that would be pointless. Instead, I offer working code that you can use for your own purposes:
#####################################################################
# Get-CRL.ps1
# Version 1.0
#
# Retrieves CRL object from a file or a DER-encoded byte array.
#
# Vadims Podans (c) 2011
# http://www.sysadmins.lv/
#####################################################################
#requires -Version 2.0
function Get-CRL {
<#
.Synopsis
Retrieves CRL object from a file or a DER-encoded byte array.
.Description
Retrieves CRL object from a file or a DER-encoded byte array.
.Parameter Path
Specifies the path to a file.
.Parameter RawCRL
Specifies a pointer to a DER-encoded CRL byte array.
.Example
Get-CRL C:\Custom.crl
Returns X509CRL2 object from a specified file
.Example
$Raw = [IO.FILE]::ReadAllBytes("C:\Custom.crl")
Get-CRL -RawCRL $Raw
Returns X509CRL2 object from a DER-encoded byte array.
.Outputs
System.Security.Cryptography.X509Certificates.X509CRL2
.NOTES
Author: Vadims Podans
Blog : http://en-us.sysadmins.lv
#>
[OutputType('System.Security.Cryptography.X509Certificates.X509CRL2')]
[CmdletBinding(DefaultParameterSetName='FileName')]
param(
[Parameter(ParameterSetName = "FileName", Mandatory = $true, Position = 0, ValueFromPipeline = $true)]
[string]$Path,
[Parameter(ParameterSetName = "RawData", Mandatory = $true, Position = 0)]
[Byte[]]$RawCRL
)
#region content parser
switch ($PsCmdlet.ParameterSetName) {
"FileName" {
if ($(Get-Item $Path -ErrorAction Stop).PSProvider.Name -ne "FileSystem") {
throw {"File either does not exist or not a file object"}
}
if ($(Get-Item $Path -ErrorAction Stop).Extension -ne ".crl") {
throw {"File is not valid CRL file"}
}
$Content = Get-Content $Path
if ($Content[0] -eq "-----BEGIN X509 CRL-----") {
[Byte[]]$cBytes = [Convert]::FromBase64String($(-join $Content[1..($Content.Count - 2)]))
} elseif ($Content[0][0] -eq "M") {
[Byte[]]$cBytes = [Convert]::FromBase64String($(-join $Content))
} else {
[Byte[]]$cBytes = [IO.File]::ReadAllBytes($Path)
}
}
"RawData" {[Byte[]]$cBytes = $RawCRL}
}
#endregion
$signature = @"
[DllImport("CRYPT32.DLL", CharSet = CharSet.Auto, SetLastError = true)]
public static extern int CertCreateCRLContext(
int dwCertEncodingType,
byte[] pbCrlEncoded,
int cbCrlEncoded
);
[DllImport("CRYPT32.DLL", SetLastError = true)]
public static extern Boolean CertFreeCRLContext(
IntPtr pCrlContext
);
[DllImport("CRYPT32.DLL", CharSet = CharSet.Auto, SetLastError = true)]
public static extern int CertNameToStr(
int dwCertEncodingType,
ref CRYPTOAPI_BLOB pName,
int dwStrType,
System.Text.StringBuilder psz,
int csz
);
[DllImport("CRYPT32.DLL", CharSet = CharSet.Auto, SetLastError = true)]
public static extern IntPtr CertFindExtension(
[MarshalAs(UnmanagedType.LPStr)]String pszObjId,
int cExtensions,
IntPtr rgExtensions
);
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CRL_CONTEXT
{
public int dwCertEncodingType;
public byte[] pbCrlEncoded;
public int cbCrlEncoded;
public IntPtr pCrlInfo;
public IntPtr hCertStore;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CRL_INFO
{
public int dwVersion;
public CRYPT_ALGORITHM_IDENTIFIER SignatureAlgorithm;
public CRYPTOAPI_BLOB Issuer;
public Int64 ThisUpdate;
public Int64 NextUpdate;
public int cCRLEntry;
public IntPtr rgCRLEntry;
public int cExtension;
public IntPtr rgExtension;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CRYPT_ALGORITHM_IDENTIFIER
{
[MarshalAs(UnmanagedType.LPStr)]public String pszObjId;
public CRYPTOAPI_BLOB Parameters;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CRYPTOAPI_BLOB
{
public int cbData;
public IntPtr pbData;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CRL_ENTRY
{
public CRYPTOAPI_BLOB SerialNumber;
public Int64 RevocationDate;
public int cExtension;
public IntPtr rgExtension;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public struct CERT_EXTENSION
{
[MarshalAs(UnmanagedType.LPStr)]public String pszObjId;
public Boolean fCritical;
public CRYPTOAPI_BLOB Value;
}
"@
Add-Type @"
using System;
using System.Security;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
namespace System
{
namespace Security
{
namespace Cryptography
{
namespace X509Certificates
{
public class X509CRL2
{
public int Version;
public string Type;
public X500DistinguishedName IssuerDN;
public string Issuer;
public DateTime ThisUpdate;
public DateTime NextUpdate;
public Oid SignatureAlgorithm;
public X509ExtensionCollection Extensions;
public X509CRLEntry[] RevokedCertificates;
public byte[] RawData;
}
public class X509CRLEntry
{
public string SerialNumber;
public DateTime RevocationDate;
public int ReasonCode;
public string ReasonMessage;
}
}
}
}
}
"@
try {Add-Type -MemberDefinition $signature -Namespace PKI -Name CRL}
catch {throw "Unable to load required types"}
#region Variables
[IntPtr]$pvContext = [IntPtr]::Zero
[IntPtr]$rgCRLEntry = [IntPtr]::Zero
[IntPtr]$pByte = [IntPtr]::Zero
[byte]$bByte = 0
[IntPtr]$rgExtension = [IntPtr]::Zero
$ptr = [IntPtr]::Zero
$Reasons = @{1="Key Compromise";2="CA Compromise";3="Change of Affiliation";4="Superseded";5="Cease Of Operation";
6="Certificate Hold";9="Privilege Withdrawn";10="AA Compromise"} # reason codes per RFC 5280 (privilegeWithdrawn is 9; 7 is unassigned)
#endregion
# retrieve CRL context and CRL_CONTEXT structure
$pvContext = [PKI.CRL]::CertCreateCRLContext(65537,$cBytes,$cBytes.Count)
if ($pvContext.Equals([IntPtr]::Zero)) {throw "Unable to retrieve context"}
$CRL = New-Object System.Security.Cryptography.X509Certificates.X509CRL2
# discard the first marshaling operation, because it throws an unexpected exception
try {$CRLContext = [Runtime.InteropServices.Marshal]::PtrToStructure([IntPtr]$pvContext,[PKI.CRL+CRL_CONTEXT])} catch {}
$CRLContext = [Runtime.InteropServices.Marshal]::PtrToStructure([IntPtr]$pvContext,[PKI.CRL+CRL_CONTEXT])
$CRLInfo = [Runtime.InteropServices.Marshal]::PtrToStructure($CRLContext.pCrlInfo,[PKI.CRL+CRL_INFO])
$CRL.Version = $CRLInfo.dwVersion + 1
$CRL.Type = "Base CRL"
$CRL.RawData = $cBytes
$CRL.SignatureAlgorithm = New-Object Security.Cryptography.Oid $CRLInfo.SignatureAlgorithm.pszObjId
$CRL.ThisUpdate = [datetime]::FromFileTime($CRLInfo.ThisUpdate)
$CRL.NextUpdate = [datetime]::FromFileTime($CRLInfo.NextUpdate)
$csz = [PKI.CRL]::CertNameToStr(65537,[ref]$CRLInfo.Issuer,3,$null,0)
$psz = New-Object text.StringBuilder $csz
$csz = [PKI.CRL]::CertNameToStr(65537,[ref]$CRLInfo.Issuer,3,$psz,$csz)
$CRL.IssuerDN = New-Object Security.Cryptography.X509Certificates.X500DistinguishedName $psz
$CRL.Issuer = $CRL.IssuerDN.Format(0)
$rgCRLEntry = $CRLInfo.rgCRLEntry
if ($CRLInfo.cCRLEntry -ge 1) {
for ($n = 0; $n -lt $CRLInfo.cCRLEntry; $n++) {
$Entry = New-Object System.Security.Cryptography.X509Certificates.X509CRLEntry
$SerialNumber = ""
$CRLEntry = [Runtime.InteropServices.Marshal]::PtrToStructure($rgCRLEntry,[PKI.CRL+CRL_ENTRY])
$pByte = $CRLEntry.SerialNumber.pbData
$SerialNumber = ""
for ($m = 0; $m -lt $CRLEntry.SerialNumber.cbData; $m++) {
$bByte = [Runtime.InteropServices.Marshal]::ReadByte($pByte)
$SerialNumber = "{0:x2}" -f $bByte + $SerialNumber
$pByte = [int]$pByte + [Runtime.InteropServices.Marshal]::SizeOf([byte])
}
$Entry.SerialNumber = $SerialNumber
$Entry.RevocationDate = [datetime]::FromFileTime($CRLEntry.RevocationDate)
$CRLReasonCode = ""
[IntPtr]$rcExtension = [PKI.CRL]::CertFindExtension("2.5.29.21",$CRLEntry.cExtension,$CRLEntry.rgExtension)
if (!$rcExtension.Equals([IntPtr]::Zero)) {
$CRLExtension = [Runtime.InteropServices.Marshal]::PtrToStructure($rcExtension,[PKI.CRL+CERT_EXTENSION])
$pByte = $CRLExtension.Value.pbData
$bBytes = $null
for ($m = 0; $m -lt $CRLExtension.Value.cbData; $m++) {
$bByte = [Runtime.InteropServices.Marshal]::ReadByte($pByte)
[Byte[]]$bBytes += $bByte
$pByte = [int]$pByte + [Runtime.InteropServices.Marshal]::SizeOf([byte])
}
$Entry.ReasonCode = $bBytes[2]
$Entry.ReasonMessage = $Reasons[$Entry.ReasonCode]
}
$CRL.RevokedCertificates += $Entry
$rgCRLEntry = [int]$rgCRLEntry + [Runtime.InteropServices.Marshal]::SizeOf([PKI.CRL+CRL_ENTRY])
}
}
$rgExtension = $CRLInfo.rgExtension
if ($CRLInfo.cExtension -ge 1) {
$Exts = New-Object Security.Cryptography.X509Certificates.X509ExtensionCollection
for ($n = 0; $n -lt $CRLInfo.cExtension; $n++) {
$ExtEntry = [Runtime.InteropServices.Marshal]::PtrToStructure($rgExtension,[PKI.CRL+CERT_EXTENSION])
[IntPtr]$rgExtension = [PKI.CRL]::CertFindExtension($ExtEntry.pszObjId,$CRLInfo.cExtension,$CRLInfo.rgExtension)
$pByte = $ExtEntry.Value.pbData
$bBytes = $null
for ($m = 0; $m -lt $ExtEntry.Value.cbData; $m++) {
[byte[]]$bBytes += [Runtime.InteropServices.Marshal]::ReadByte($pByte)
$pByte = [int]$pByte + [Runtime.InteropServices.Marshal]::SizeOf([byte])
}
$ext = New-Object Security.Cryptography.X509Certificates.X509Extension $ExtEntry.pszObjId, @([Byte[]]$bBytes), $ExtEntry.fCritical
[void]$Exts.Add($ext)
$rgExtension = [int]$rgExtension + [Runtime.InteropServices.Marshal]::SizeOf([PKI.CRL+CERT_EXTENSION])
}
if ($exts | ?{$_.Oid.Value -eq "2.5.29.27"}) {$CRL.Type = "Delta CRL"}
$CRL.Extensions = $Exts
}
$CRL
[void][PKI.CRL]::CertFreeCRLContext($pvContext)
}
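The encoding-sniffing logic at the top of the parser — PEM header, bare Base64, or raw DER — translates directly to other languages. Here is a minimal Python sketch of the same decision (the function name `read_crl_bytes` is my own, not part of the script above):

```python
import base64

def read_crl_bytes(data: bytes) -> bytes:
    """Return DER-encoded CRL bytes from PEM, bare Base64, or raw DER input."""
    text = data.decode("ascii", errors="ignore").strip()
    if text.startswith("-----BEGIN X509 CRL-----"):
        # PEM: join the Base64 body between the BEGIN/END markers
        return base64.b64decode("".join(text.splitlines()[1:-1]))
    if text[:1] == "M":
        # Bare Base64: a DER SEQUENCE starts with byte 0x30, which encodes to "M"
        return base64.b64decode("".join(text.split()))
    return data  # assume raw DER
```

The `"M"` test mirrors the `$Content[0][0] -eq "M"` check above: any DER-encoded CRL begins with a SEQUENCE tag, so its Base64 form always starts with "M".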
And here, in fact, is its output:
[↓] [vPodans] Get-CRL .\Desktop\pica-1.crl

Version             : 2
Type                : Base CRL
IssuerDN            : System.Security.Cryptography.X509Certificates.X500DistinguishedName
Issuer              : CN=Sysadmins LV Internal Class 1 SubCA-1, OU=Information Systems, O=Sysadmins LV, C=LV
ThisUpdate          : 22.02.2011 19:22:27
NextUpdate          : 26.02.2011 19:42:27
SignatureAlgorithm  : System.Security.Cryptography.Oid
Extensions          : {System.Security.Cryptography.Oid, System.Security.Cryptography.Oid, System.Security.Cryptography.Oid, System.Security.Cryptography.Oid...}
RevokedCertificates : {System.Security.Cryptography.X509Certificates.X509CRLEntry, System.Security.Cryptography.X509Certificates.X509CRLEntry, System.Security.Cryptography.X509Certificates.X509CRLEntry, System.Security.Cryptography.X509Certificates.X509CRLEntry...}
RawData             : {48, 130, 3, 39...}
[↓] [vPodans]
And some of the internals:
[↓] [vPodans] $crl = Get-CRL .\Desktop\pica-1.crl
[↓] [vPodans] $crl.Extensions[0].format(0)
KeyID=1b fa 5e 73 2d 67 13 5c ce d3 0e e6 e8 7a a9 60 8c 0b 63 fc
[↓] [vPodans] $crl.Extensions[4].format(1)
[1]Freshest CRL Distribution Point
     Name:
          Full Name:
               URL=http://www.sysadmins.lv/pki/pica-1+.crl
[↓] [vPodans] $crl.RevokedCertificates

SerialNumber         RevocationDate      ReasonCode ReasonMessage
------------         --------------      ---------- -------------
3bfe8e77000000000078 27.12.2010 19:32:00          0
163c8142000000000072 27.11.2010 23:27:00          5 Cease Of Operation
14d70748000000000071 27.11.2010 23:27:00          5 Cease Of Operation
411726e0000000000054 04.08.2010 21:26:00          0
1cee2e2000000000002b 01.05.2010 15:32:00          0
2ee0af5a000000000021 24.04.2010 22:25:00          0

[↓] [vPodans] $crl.GetType().FullName
System.Security.Cryptography.X509Certificates.X509CRL2
[↓] [vPodans]
Friday, February 25, 2011 6:44:04 PM (FLE Standard Time, UTC+02:00) Comments [0]
All content © 2008 - 2015, Vadims Podāns
Free CompTIA Security+ SY0-601 Practice Question
You are tasked with ensuring the security of a software application's source code. Which of the following methodologies would be BEST to employ for a comprehensive evaluation of the code for potential security vulnerabilities without running it?
• Carry out dynamic code analysis by executing the application in a controlled environment to find runtime vulnerabilities.
• Initiate a thorough manual code review where a team of developers inspects each line of code iteratively.
• Perform static code analysis using automated tools to review the source code for security-related issues.
• Implement fuzzing sessions to identify input that may cause crashes or unforeseen behavior.
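Static code analysis — reviewing source for issues without executing it — can be as simple as walking a syntax tree and flagging known-risky patterns. A toy Python illustration (the rule set here is invented for the example; real scanners ship far larger rule bases):

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # toy rule set for illustration only

def scan(source: str) -> list:
    """Flag calls to risky builtins without ever running the code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings
```

Because the analysis operates on the parsed tree rather than a running process, it finds issues dynamic testing and fuzzing can only reach by exercising the right inputs.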
This question's topic:
CompTIA Security+ SY0-601 /
Implementation
How to use ChatGPT for SEO?
Content creation: ChatGPT can generate high-quality, informative, and engaging articles and blog posts on a wide range of topics. This can help you create valuable content that is relevant to your target audience, thereby improving your search engine rankings.
Keyword optimization: You can use ChatGPT to generate meta descriptions, titles, and headings that are optimized for specific keywords. This can help improve your website’s visibility in search engine results pages (SERPs) for those keywords.
Chatbots: ChatGPT can be used to create chatbots for websites and other platforms. These chatbots can engage with visitors and provide them with information about your products and services, which can improve their overall experience on your site and increase the likelihood of them making a purchase.
Frequently asked questions (FAQs): ChatGPT can be used to generate a comprehensive list of FAQs for your website. This can help visitors quickly find the information they need, which can reduce bounce rates and improve user experience.
Product descriptions: ChatGPT can be used to generate detailed and informative product descriptions for your e-commerce site. This can help visitors understand your products better and make informed purchasing decisions.
It may help.
https://docs.google.com/spreadsheet/ccc?key=0Atatq_AL3AJwdElwYVhTRk9KZF9WVmtDTDVhY0xPSmc#gid=0
On Thu, Oct 3, 2013 at 1:31 PM, Robert Važan <[email protected]> wrote:
I need to store one trillion data points. The data is highly compressible down to 1 byte per data point using simple custom compression combined with standard dictionary compression. What's the most space-efficient way to store the data in Cassandra? How much per-row overhead is there if I store one data point per row?
The data is particularly hard to group. It's a large number of time series with highly variable density. That makes it hard to pack subsets of the data into meaningful column families / wide rows. Is there a table layout scheme that would allow me to approach the 1B per data point without forcing me to implement complex abstraction layer on application level?
Initial testthat used a style of testing that looked like expect_that(a, equals(b)). This allowed expectations to read like English sentences, but it was verbose and a bit too cutesy. This style will continue to work, but it has been soft-deprecated: it is no longer documented, and new expectations will only use the new style, expect_equal(a, b).
is_null()
is_a(class)
is_true()
is_false()
has_names(expected, ignore.order = FALSE, ignore.case = FALSE)
is_less_than(expected, label = NULL, ...)
is_more_than(expected, label = NULL, ...)
equals(expected, label = NULL, ...)
is_equivalent_to(expected, label = NULL)
is_identical_to(expected, label = NULL)
equals_reference(file, label = NULL, ...)
shows_message(regexp = NULL, all = FALSE, ...)
gives_warning(regexp = NULL, all = FALSE, ...)
prints_text(regexp = NULL, ...)
throws_error(regexp = NULL, ...)
matches(regexp, all = TRUE, ...)
How do I export my data?
Log in to the Kittyhawk.io web portal and navigate to Reporting > CSV Exports on the left-hand menu bar.
There you can select which data you’d like to pull out of the system as a .CSV file.
You can then slice and dice your data any way you want for any needed custom reports.
Paul Edward Pagente - 1 year ago
Vb.net Question
Repeat a Procedure using Checkbox column in Datagridview
Good Afternoon.
I have a question, and I don't know if it's already been done or if it's new. Is it possible to repeat the same procedure many times based on a checked column in a DataGridView?
Here is the scenario
Before, I have a DataGridView with 3 columns named ID, Name, and Cash. Every time I click a row, the data is transferred into 3 labels, and after that I put a number into a textbox that computes the cash before saving it to the database.
Now here is my target: I will add a checkbox column at the start of the DataGridView, next to ID. And here is the next part: let's say I check 2 rows. Is it possible that both of them will go through the same procedure (transfer to labels, then compute before saving to the database, one by one)?
This is what I tried so far
This is the code for transferring DataGridView data into the labels (under CellClick):
Dim i As Integer
i = DataGridView1.CurrentRow.Index
Label2.Text = DataGridView1.Item("ItemCode", i).Value
Label3.Text = DataGridView1.Item("Description", i).Value
Label4.Text = DataGridView1.Item("ReflectedQty", i).Value
Label5.Text = DataGridView1.Item("UOM", i).Value
Label6.Text = DataGridView1.Item("UnitPrice", i).Value
Label7.Text = DataGridView1.Item("Total", i).Value
Label8.Text = DataGridView1.Item("Remarks", i).Value
Dim cell As DataGridViewCheckBoxCell = DataGridView1.Rows(e.RowIndex).Cells(0)
DataGridViewCheckBoxColumn_Uncheck()
cell.Value = True
standard()
Here is the computation part (under Private Sub Standard):
Dim con As MySqlConnection = New MySqlConnection("server=192.168.2.87;userid=root;password=****;database=inventory")
Dim cmd As MySqlCommand = New MySqlCommand("select StandardUOM,QtyPerUoM from item_master_list where ItemCode = '" & Label2.Text & "'", con)
Dim reader As MySqlDataReader
con.Open()
reader = cmd.ExecuteReader
While reader.Read
Label9.Text = reader.GetString("StandardUOM")
Label10.Text = reader.GetString("QtyPerUoM")
End While
Here is the saving part, i.e. transferring to the database (button click):
DataGridView1.Columns.RemoveAt(0)
Dim con1 As MySqlConnection = New MySqlConnection("datasource=192.168.2.87;database=inventory;userid=root;password=*****")
Dim cmdinsert As MySqlCommand = New MySqlCommand("insert into receiving (RINo,PONo,ItemCode,Description,QtyPack,PackUoM,UnitPrice,Total,Remarks,ExpiryDate,QtyStan,StanUoM,PCS) values ('" & frm_Add_Receiving_Items.TextBox1.Text & "','" & Label1.Text & "','" & Label2.Text & "','" & Label3.Text & "','" & Label11.Text & "','" & Label5.Text & "','" & Label6.Text & "','" & Label7.Text & "','" & Label8.Text & "','" & DateTime.Now.ToString("yyyy-MM-dd") & "','" & Label12.Text & "','" & Label9.Text & "','" & Label10.Text & "')", con1)
con1.Open()
cmdinsert.ExecuteNonQuery()
con1.Close()
I hope I clear my question.
TYSM for future help
Answer Source
Read the Comments I placed in this code carefully so you'll understand what's going on.
I already included here the code for CellClick (which I replaced with CellValueChanged) and the code for saving only the checked rows.
You can call SaveCheckedRecords() in one of your button_Clicks.
I also included bonus code that computes the Unit Price total.
Sub DataGridView1_CurrentCellDirtyStateChanged( _
ByVal sender As Object, ByVal e As EventArgs) _
Handles DataGridView1.CurrentCellDirtyStateChanged
If DataGridView1.IsCurrentCellDirty Then
DataGridView1.CommitEdit(DataGridViewDataErrorContexts.Commit)
End If
End Sub
Private Sub DataGridView1_CellValueChanged(ByVal sender As Object, _
ByVal e As DataGridViewCellEventArgs) _
Handles DataGridView1.CellValueChanged
If e.ColumnIndex = 0 Then 'SO THAT CHECKBOX COLUMN WILL ONLY TRIGGER THE CHANGES
'THIS WILL HOLD THE VALUE OF THE CHECKBOX (TRUE OR FALSE)
Dim currCheckCell As DataGridViewCheckBoxCell = _
CType(DataGridView1.Rows(e.RowIndex).Cells(0), DataGridViewCheckBoxCell)
'LABEL CHANGES BASED ON THE ROW OF THE CHECKBOX
'IF-CONDITION SO THAT LABEL CHANGES WILL HAPPEN ONLY IF THE CHECKBOX IS CHECKED
If currCheckCell.Value = True Then
Dim i As Integer = e.RowIndex
Label2.Text = DataGridView1.Item("ItemCode", i).Value
Label3.Text = DataGridView1.Item("Description", i).Value
Label4.Text = DataGridView1.Item("ReflectedQty", i).Value
Label5.Text = DataGridView1.Item("UOM", i).Value
Label6.Text = DataGridView1.Item("UnitPrice", i).Value
Label7.Text = DataGridView1.Item("Total", i).Value
Label8.Text = DataGridView1.Item("Remarks", i).Value
End If
Standard()
Dim totalstr As Double = 0
For Each drow As DataGridViewRow In DataGridView1.Rows
Dim checkCell As DataGridViewCheckBoxCell = _
CType(drow.Cells(0), DataGridViewCheckBoxCell)
If checkCell.Value = True Then
totalstr += Val(drow.Cells(5).Value)
End If
Next
lblTotal.Text = FormatNumber(totalstr, 2)
End If
End Sub
Public Sub SaveCheckedRecords()
DataGridView1.Columns.RemoveAt(0)
Dim con1 As MySqlConnection = New MySqlConnection("datasource=192.168.2.87;database=inventory;userid=root;password=*****")
Dim cmdinsert As MySqlCommand = New MySqlCommand
For Each drow As DataGridViewRow In DataGridView1.Rows
Dim checkCell As DataGridViewCheckBoxCell = _
CType(drow.Cells(0), DataGridViewCheckBoxCell)
If checkCell.Value = True Then 'AGAIN, TO CHECK IF THE COLUMN IS CHECKED
'CELL INDEXES ARE ASSUMED SINCE YOU DIDN'T SPECIFY THEM
'YOU ARE THE ONE WHO KNOWS THE RIGHT INDEXES, SO CHANGE THEM IF THEY ARE WRONG
Dim ItemCode As String = drow.Cells(1).Value
Dim Desc As String = drow.Cells(2).Value
Dim ReflectedQty As String = drow.Cells(3).Value
Dim UOM As String = drow.Cells(4).Value
Dim UnitPrice As Double = Val(drow.Cells(5).Value)
Dim Total As Double = Val(drow.Cells(6).Value)
Dim Remarks As String = drow.Cells(7).Value
'NOW USE ALL OF THE VARIABLES ABOVE IN YOUR INSERT QUERY
'CMDINSERT PART HERE SINCE I DON'T KNOW SOME OF YOUR LABELS
'LIKE LABEL11 SO IT'S UP TO YOU TO CREATE THE COMMAND WHICH IS EASY
con1.Open()
cmdinsert.ExecuteNonQuery()
con1.Close()
End If
Next
End Sub
If there are errors (I haven't tested the code) or things that you still don't understand, just comment here.
In computer networking, “localhost” refers to the computer that a certain program is running on. For example, if you’re running a program on your own computer (like a web browser or local web development environment), then your computer is the “localhost”. On the other hand, if you’re talking about a MySQL database that sits on your host’s web server, then your host’s web server is the “localhost” in that scenario.
In the most simple terms, you can essentially think of localhost as meaning “this computer”. Just remember that “this computer” applies to the program, not necessarily to the computer that you’re physically using.
On a more technical level, localhost typically resolves to the IP address 127.0.0.1, which is known as the loopback address. Because of its importance, the term “localhost” is a reserved domain name. That means that, in order to avoid confusion, it’s impossible to register a domain name that contains “localhost” in the top-level or second-level domain name.
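The loopback mapping can be observed directly. A minimal Python check (gethostbyname returns an IPv4 address, so localhost typically resolves to 127.0.0.1 or another address in the 127.0.0.0/8 loopback range):

```python
import socket

addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```

No network traffic is involved: the name is answered from the local hosts file or resolver, which is exactly why "localhost" always means "this machine".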
2 Common Situations Where You’ll Encounter Localhost With WordPress
There are two areas where you’re likely to encounter the term localhost in connection with WordPress:
• Developing/testing WordPress websites in a local environment
• Setting up/configuring access to your WordPress site’s database
First, many WordPress developers create something called a “local testing environment” to build WordPress websites. Essentially, this environment mimics a live website but is entirely self-contained on the user’s own computer.
Because the site only exists on the localhost, it’s easy for developers to test things without fear of breaking something on a live website that might be receiving external traffic. Once the site is complete, developers can then move the website from their localhost environment to a live server environment.
Beyond WordPress development and testing, you also might run into the term localhost if you’re manually entering or editing your WordPress site’s database credentials. This could happen both when you’re editing your wp-config.php file or working with a plugin.
Below, you’ll learn a bit more about each situation…
How To Install WordPress Locally (Your “Localhost”)
There are a number of both general and WordPress-specific tools that you can use to create a local WordPress development environment. With any of these tools, you essentially create a self-contained web server on your own computer. Here are some of the most common local development tools for WordPress:
With all of these tools, you’ll first need to install and configure the tool to create your local environment. Essentially, this is a localhost web server (that is – it’s a web server running on “your computer”). Once your local server environment is running, you can install WordPress on it like you would at your host. Some tools, like DesktopServer, also include built-in mechanisms to quickly install WordPress.
You’ll then be able to access your local WordPress site by typing “localhost” into your browser address bar:
An example of a local development environment
Why Your WordPress Database Is Located At Localhost In Most Situations
As you learned at the beginning of this article, “localhost” simply means the computer that a program is running on.
Here’s why that’s important to your WordPress site’s database:
In 99% of situations, your WordPress site’s database will be located on the same server as the rest of your WordPress site’s files. As a result, whenever you’re editing your wp-config.php file or a plugin asks you for your “MySQL hostname” or “MySQL host”, you can just enter “localhost”.
For example, here’s what the DB_HOST entry in the wp-config.php file looks like for a WordPress installation at Kinsta:
Localhost in your wp-config.php file
Similarly, if you’re using a plugin like Duplicator to migrate your WordPress website, then you can simply enter “localhost” when the plugin asks you to enter the Host value for your database:
Localhost use in the Duplicator plugin
In both these situations, you’re basically telling WordPress (or a WordPress plugin) to look for the database on the same server where WordPress is located. While you’re unlikely to encounter the term localhost on a daily basis unless you’re a developer, it does come in handy to know how localhost relates to WordPress.
Remember – for an easy way to remember what localhost is, just think of phrases like “this computer” or “the computer this program is running on.” It’s that simple!
I am having trouble evaluating $$\int \dfrac{1}{x\sqrt{x^4-4}} dx$$
I tried making $a = 2$, $u = x^{2}$, $du = 2x dx$ and rewriting the integral as: $$\dfrac{1}{2} \int \dfrac{du}{\sqrt{u^2-a^2}} $$ But I believe something is not right at this step (perhaps when changing from $dx$ to $du$)?
I end up with: $${1\over 4} \operatorname{arcsec} \dfrac{1}{2}x^{2} + C$$
Any help would be appreciated, I feel I am only making a simple mistake. Also, for some reason, on WA, it is showing an answer involving $\tan^{-1}$ but I do not see an $a^{2} + u^{2}$ possibility. Note that I do know how sometimes (different) inverse trig functions when integrated are equal.
Ex: $$\int \dfrac{1}{\sqrt{e^{2x}-1}} dx = \arctan{\sqrt{e^{2x}-1}} + C = \operatorname{arcsec}(e^{x}) + C $$
wolframalpha.com/input/…;. Press: Show Steps. – Isaac Solomon Apr 6 '12 at 3:31
As noted in my original post, I already checked WA. I was looking for an easier way since there should be a direct substitution with an $a$ and a $u$, rather than jumping through hoops with 3-4 substitutions. Correct me if I'm wrong though. – Joe Apr 6 '12 at 3:34
Your first substitution is not correct. You have $du=2x\,dx$, but you want to replace $dx\over x$. Use ${dx\over x} = {du\over 2 u}$. This gives $\int {du\over u\sqrt {u^2-4}}$; which is an $\rm arcsec$ form. – David Mitra Apr 6 '12 at 3:36
@DavidMitra I understand to use $dx\over {x}$ but where do you get $dx\over {x}$ = $du\over {2u}$ rather than equals $du\over {2x}$? – Joe Apr 6 '12 at 3:42
1 Answer (accepted, 6 upvotes)
You did not make the substitution correctly (your substitution would work as you wrote it if $x$ were originally upstairs).
But the choice you made for $u$ will work:
You have $u=x^2$ and $du=2x\,dx$.
From $du=2x\,dx$, you have, dividing both sides by $2x^2$ $$\tag{1}{du\over 2x^2}={x\,dx\over x^2}.$$ Substituting $u=x^2$ on the left hand side of $(1)$ and simplifying the right hand side, we have $$ \color{maroon}{{du\over 2 u}}=\color{maroon}{{dx\over x}}.$$ Substituting into the integral gives $$\int {\color{maroon}{dx}\over\color{maroon} x \sqrt{ x^4-4}}= \int {\color{maroon}{du}\over\color{maroon}{ 2u}\sqrt {u^2-4}} $$ which is an $\rm arcsec$ form.
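As a sanity check (not part of the original answer), one can verify numerically that $\frac14 \operatorname{arcsec}\frac{x^2}{2}$ differentiates back to the integrand, writing $\operatorname{arcsec}(y) = \arccos(1/y)$ since Python's math module has no arcsec:

```python
import math

def F(x):
    # (1/4) * arcsec(x^2 / 2), using arcsec(y) = arccos(1/y)
    return 0.25 * math.acos(2.0 / (x * x))

def integrand(x):
    return 1.0 / (x * math.sqrt(x**4 - 4))

# central-difference derivative of F at a point inside the domain (x > sqrt(2))
x, h = 3.0, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
print(abs(numeric_derivative - integrand(x)))  # should be tiny
```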
What would be your final answer then? It seems it would be the same as mine. I am still not entirely sure how you jumped from $du = 2x dx$ to the line below it with $du \over{2x^2}.$ Mind elaborating? I see the x cancel out on the RHS and the substitution of u in the line below it - it's the line above "Substituting into the integral" that is still puzzling me. – Joe Apr 6 '12 at 3:54
What jumped out at you to divide both sides by $2x^{2}?$ To try and get a $dx\over {x}$ term on the RHS? – Joe Apr 6 '12 at 3:59
@jay Yes, we needed to write $dx\over x$ in terms of $u$. – David Mitra Apr 6 '12 at 4:01
When I differentiate $${1\over 4} \operatorname{arcsec} \dfrac{1}{2}x^{2} + C$$ I get the original, I must have just been careless in the beginning by forgetting the $u$ term on the bottom of the inside. – Joe Apr 6 '12 at 4:08
+1. Sorry for being a bit stubborn, thanks for the help David. – Joe Apr 6 '12 at 4:15
10.6.8 cursor.MySQLCursorPrepared Class
The MySQLCursorPrepared class inherits from MySQLCursor. This class is available as of Connector/Python 1.1.0.
In MySQL, there are two ways to execute a prepared statement:
• Use the PREPARE and EXECUTE statements.
• Use the binary client/server protocol to send and receive data. To repeatedly execute the same statement with different data for different executions, this is more efficient than using PREPARE and EXECUTE. For information about the binary protocol, see C API Prepared Statements.
In Connector/Python, there are two ways to create a cursor that enables execution of prepared statements using the binary protocol. In both cases, the cursor() method of the connection object returns a MySQLCursorPrepared object:
• The simpler syntax uses a prepared=True argument to the cursor() method. This syntax is available as of Connector/Python 1.1.2.
import mysql.connector
cnx = mysql.connector.connect(database='employees')
cursor = cnx.cursor(prepared=True)
• Alternatively, create an instance of the MySQLCursorPrepared class using the cursor_class argument to the cursor() method. This syntax is available as of Connector/Python 1.1.0.
import mysql.connector
from mysql.connector.cursor import MySQLCursorPrepared
cnx = mysql.connector.connect(database='employees')
cursor = cnx.cursor(cursor_class=MySQLCursorPrepared)
A cursor instantiated from the MySQLCursorPrepared class works like this:
• The first time you pass a statement to the cursor's execute() method, it prepares the statement. For subsequent invocations of execute(), the preparation phase is skipped if the statement is the same.
• The execute() method takes an optional second argument containing a list of data values to associate with parameter markers in the statement. If the list argument is present, there must be one value per parameter marker.
Example:
cursor = cnx.cursor(prepared=True)
stmt = "SELECT fullname FROM employees WHERE id = %s" # (1)
cursor.execute(stmt, (5,)) # (2)
# ... fetch data ...
cursor.execute(stmt, (10,)) # (3)
# ... fetch data ...
1. The %s within the statement is a parameter marker. Do not put quote marks around parameter markers.
2. For the first call to the execute() method, the cursor prepares the statement. If data is given in the same call, it also executes the statement and you should fetch the data.
3. For subsequent execute() calls that pass the same SQL statement, the cursor skips the preparation phase.
Prepared statements executed with MySQLCursorPrepared can use the format (%s) or qmark (?) parameterization style. This differs from nonprepared statements executed with MySQLCursor, which can use the format or pyformat parameterization style.
To use multiple prepared statements simultaneously, instantiate multiple cursors from the MySQLCursorPrepared class.
The MySQL client/server protocol has an option to send prepared statement parameters via the COM_STMT_SEND_LONG_DATA command. To use this from Connector/Python scripts, send the parameter in question using the IOBase interface. Example:
import io
...
cur = cnx.cursor(prepared=True)
cur.execute("SELECT (%s)", (io.BytesIO(bytes("A", "latin1")), ))
Shielding Secrets: The Essence of Secure Key Exchange
In the realm of cybersecurity, secure key exchange is a cornerstone for safeguarding communications and protecting sensitive data. This article delves into the significance of secure key exchange, exploring its fundamental role in encryption, data integrity, and maintaining secure communication channels.
Understanding Secure Key Exchange
Secure key exchange is the process of securely transmitting cryptographic keys between parties to establish a shared secret key for encrypted communication. This exchange is crucial in ensuring that only authorized entities can decrypt and access sensitive information. Secure key exchange mechanisms form the backbone of secure communication protocols, adding a layer of protection against eavesdropping and unauthorized access.
The Role of Encryption in Secure Key Exchange
Encryption is the bedrock of secure key exchange. As cryptographic keys are transmitted between parties, they are vulnerable to interception by malicious actors. Utilizing encryption during key exchange ensures that even if intercepted, the transmitted keys remain unintelligible to unauthorized entities. This cryptographic protection forms the basis for establishing secure communication channels.
Key Exchange Protocols: An Overview
Various key exchange protocols exist to facilitate secure communication. Protocols such as Diffie-Hellman, RSA, and Elliptic Curve Cryptography (ECC) employ sophisticated mathematical algorithms to ensure the secure exchange of cryptographic keys. Each protocol has its strengths and is selected based on the specific security requirements of the communication system.
Diffie-Hellman Key Exchange: A Pioneer in Security
The Diffie-Hellman key exchange protocol revolutionized secure communication by allowing two parties to generate a shared secret key over an untrusted network. This mathematical breakthrough laid the foundation for modern secure key exchange methods and remains a vital component in many cryptographic applications.
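As a minimal illustration of the idea (not part of the original article), here is a toy Diffie-Hellman exchange in Python. The prime and generator below are far too small for real security; production systems use vetted libraries with standardized parameters (2048-bit or larger safe primes, or elliptic curves).

```python
import random

# Toy Diffie-Hellman parameters -- illustrative only, never use in practice.
p = 4294967291  # a small prime modulus (2**32 - 5)
g = 5           # generator

# Each party picks a private exponent and publishes g^x mod p.
a_private = random.randrange(2, p - 1)
b_private = random.randrange(2, p - 1)
a_public = pow(g, a_private, p)
b_public = pow(g, b_private, p)

# Each side combines its own private key with the other's public value.
shared_a = pow(b_public, a_private, p)
shared_b = pow(a_public, b_private, p)

# Both sides arrive at the same shared secret without ever sending it.
assert shared_a == shared_b
```

An eavesdropper who sees only `p`, `g`, and the two public values would have to solve the discrete logarithm problem to recover the shared secret, which is what makes the exchange secure over an untrusted network.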
Ensuring Data Integrity through Key Exchange
Beyond confidentiality, secure key exchange plays a pivotal role in maintaining data integrity. When cryptographic keys are exchanged securely, it ensures that the data transmitted between parties remains unaltered during transit. This integrity check is essential for detecting any tampering attempts or unauthorized modifications to the transmitted information.
Public Key Infrastructure (PKI): Orchestrating Trust
Public Key Infrastructure is a vital component in secure key exchange, providing a framework for managing digital keys and certificates. Through PKI, entities can verify the authenticity of each other’s public keys, establishing a trust framework that is essential for secure communication. PKI adds an additional layer of assurance in key exchange processes.
Challenges and Mitigations in Key Exchange Security
While key exchange protocols are robust, challenges exist, particularly in the face of evolving cyber threats. Quantum computing poses a potential threat to current key exchange algorithms. To mitigate this, researchers are exploring post-quantum cryptography to develop key exchange methods resistant to quantum attacks and ensure long-term security.
Implementing Forward Secrecy for Ongoing Protection
Forward secrecy is a concept that enhances key exchange security by ensuring that even if a long-term secret key is compromised, past communications remain confidential. Implementing forward secrecy in key exchange protocols adds an extra layer of protection, making it challenging for adversaries to decrypt past communications even if they gain access to current keys.
The Evolving Landscape of Key Exchange Security
As technology advances, the landscape of key exchange security continues to evolve. New protocols and methodologies are developed to address emerging threats and vulnerabilities. Staying informed about the latest advancements in key exchange security is crucial for organizations aiming to maintain robust cybersecurity postures.
Embracing a Secure Digital Future with Key Exchange
In conclusion, secure key exchange is fundamental to the fabric of secure communication and data protection. As cyber threats become more sophisticated, the importance of implementing and continually refining key exchange protocols cannot be overstated. Organizations and individuals alike must embrace the evolving landscape of key exchange security to safeguard their digital interactions.
For more information on secure key exchange and its pivotal role in cybersecurity, visit ITCertsWin. Explore the world of secure communication and encryption to fortify your digital presence and ensure the confidentiality and integrity of your sensitive information.
How to reliably split a string in Python?
In Perl I can do:
my ($x, $y) = split /:/, $str;
and it will work whether or not the string contains the pattern.
In Python, however, this won't work:
a, b = "foo".split(":") # ValueError: not enough values to unpack
What's the canonical way to prevent errors in such cases?
Answer
Use str.partition():
>>> a, sep, b = "foo".partition(":")
>>> a, sep, b
('foo', '', '')
It will always return a 3-tuple, whether the separator is found or not.
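A quick sketch contrasting the two cases, plus a `split`-based fallback that pads the result so unpacking never fails (variable names are illustrative):

```python
s1, s2 = "host:8080", "foo"

# partition() always yields exactly three parts.
a, sep, b = s1.partition(":")
assert (a, sep, b) == ("host", ":", "8080")

a, sep, b = s2.partition(":")
assert (a, sep, b) == ("foo", "", "")

# Alternative: pad the result of split() so two-way unpacking never fails.
x, y = (s2.split(":", 1) + [""])[:2]
assert (x, y) == ("foo", "")
```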
Assign Y
July 3, 2015
I’m sorry I missed Tuesday’s exercise; I’ve been very busy at work. Today’s exercise is an interview question of the kind I don’t like: it’s tricky, you either know the answer or you don’t, and it’s unlikely to be useful in any real programming situation:
You are given four integers x (which must be either 0 or 1), y, a and b. Your first task is to assign a to y if x = 0, or assign b to y if x = 1, without using any conditional operator, including the ternary operator. Your second task is to perform the same assignment without using any arithmetic operators.
Your task is to complete the two-part puzzle given above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
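One possible solution for each part, sketched in Python (the linear combination avoids conditionals; tuple indexing avoids both conditionals and arithmetic):

```python
def assign_y_arithmetic(x, a, b):
    # No conditionals: when x is 0 the first term survives, when x is 1 the second.
    return (1 - x) * a + x * b

def assign_y_no_arithmetic(x, a, b):
    # No conditionals and no arithmetic: index into a pair.
    return (a, b)[x]

assert assign_y_arithmetic(0, 3, 2) == 3
assert assign_y_arithmetic(1, 3, 2) == 2
assert assign_y_no_arithmetic(0, 3, 2) == 3
assert assign_y_no_arithmetic(1, 3, 2) == 2
```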
6 Responses to “Assign Y”
1. Lucas A. Brown said
In Python:
y = (1-x) * a + x * b
y = [a,b][x]
In C:
uint64_t a=/* blah */, b=/* blah */, x=/* 0 or 1 */, y;
x = (x << 1) | x; x = (x << 2) | x; x = (x << 4) | x;
x = (x << 8) | x; x = (x << 16) | x; x = (x << 32) | x;
y = (b & x) | (a & (~x));
This is basically a translation of the first Python line for 64-bit data. If you’re feeling pedantic, you might declare the bit-twiddling to be both arithmetic and conditional, but it’s branch-free and technically avoids the arithmetic operators.
2. Gil Martinez said
In Racket:
; arithmetic solution
(let ((a 2) (b 3) (x 1))
(let ((y (+ (* a (modulo x 2))
(* b (modulo (add1 x) 2)))))
y))
; non-arithmetic solution
(let ((a 2) (b 3) (x 1))
(let ((y (or (and (= 0 x)
a)
b)))
y))
The arithmetic solution uses modular addition on x to determine the coefficients of a and b. The resulting terms are added to produce y.
The non-arithmetic solution uses logical operators.
3. captain falcom said
I can’t decide if this is cheating or not. Both a and b get assigned to something, which might not be the intended outcome.
vars = [nil, nil]
vars[x] = y
a, b = vars
4. Mike said
Is it okay to answer that the problem is underspecified? What if x isn't 0 or 1? ;-)
# equation of a line with slope (b-a)/(1-0) and y-intercept of a
# this works if x isn't an integer, and assigns some value even if x isn't 0 or 1.
y = (b - a)*x + a
# standard short circuit logical expression - no arithmetic operators
y = x and b or a
# Don't think I've seen this answer before
try:
y = b // x
except ZeroDivisionError:
y = a
5. programmingpraxis said
@MIke: x must be either 0 or 1. I’ve changed the problem statement to make that explicit.
6. Brett Warrem said
I derived a formula ((a**(1-x)/b**x)**(1-2*x)) that assigns a or b to y depending on x:
def assign_y(x, a, b):
y = int((a**(1-x)/b**x)**(1-2*x))
return y
if __name__ == "__main__":
print(assign_y(0, 3, 2))
print(assign_y(1, 3, 2))
Results:
3
2
As for the second part of the question; not a franking clue.
How do I make it so a player can't move?
Hello! I want to make my own pre-banned players system, but I don't know how to make the player not move. Here is my script (basically, I don't want them to be able to move).
local Lighting = game:GetService("Lighting")
local BannedGui = script.BannedGui
local BannedText = "You have been banned from this game."
game.Players.PlayerAdded:Connect(function(player)
if player.Name == "Censored for privacy reasons." then
Lighting.Blur.Size = 50
BannedGui.Parent = game.StarterGui
script.BannedGui.Frame.Reason.Text = BannedText
wait(5)
player:Kick("You have been banned from this server.")
end
end)
Just change their WalkSpeed
Otherwise remove their character
Otherwise BindAction with context action service the WASD keys and they shouldn’t move
How do I local BindAction the player’s W, A, S, D keys?
player:BindAction(????)
is it something like that?
Just check context action service https://developer.roblox.com/en-us/api-reference/function/ContextActionService/BindAction
This should work; it's a way to completely disable the player's controls:
local controls = require(game.Players.LocalPlayer.PlayerScripts:WaitForChild("PlayerModule")):GetControls()
controls:Disable()
Something like this?
local BannedGui = script.BannedGui
local BannedText = "You have been banned from this game."
local ContextActionService = game:GetService("ContextActionService")
game.Players.PlayerAdded:Connect(function(player)
if player.Name == "###########" then
Lighting.Blur.Size = 50
BannedGui.Parent = game.StarterGui
script.BannedGui.Frame.Reason.Text = BannedText
wait(5)
player:Kick("You have been banned from this server.")
ContextActionService:BindAction(Enum.KeyCode.W, Enum.KeyCode.A, Enum.KeyCode.S, Enum.KeyCode.D)
end
end)
Noonon, use what @Chaasey stated; this wouldn't work. Check out @Chaasey's method :slight_smile:
Alright. {{{((([[[[[30 CHAR]]]]])))}}}
I changed the selected name to TrackoTheTaco so I could test it, and it did not work.
local controls = require(game.Players.LocalPlayer.PlayerScripts:WaitForChild("PlayerModule")):GetControls()
local BannedGui = script.BannedGui
local BannedText = "You have been banned from this game."
game.Players.PlayerAdded:Connect(function(player)
if player.Name == "##############" then
Lighting.Blur.Size = 50
BannedGui.Parent = game.StarterGui
script.BannedGui.Frame.Reason.Text = BannedText
controls:Disable()
wait(5)
player:Kick("You have been banned from this server.")
end
end)
Strange. Maybe try setting the player's WalkSpeed to 0, making them unable to move.
game.Players.LocalPlayer.Character.Humanoid.WalkSpeed = 0
Try anchoring the humanoidrootpart
game.Players.PlayerAdded:Connect(function(player)
if player.Name == "Censored for privacy reasons." then
player.CharacterAdded:Connect(function(char)
char:WaitForChild("HumanoidRootPart").Anchored = true
end)
end
end)
Note that the character might stay in the air, which you can fix by adding a wait before anchoring the HumanoidRootPart.
Alternatives:
• WalkSpeed & JumpPower set to 0
• Anchor character (stops animations)
Thank you! It worked. [[[{{{30 Characters}}}]]]
The Importance of Flow in Software Development
(based on [GR17c])
From social and psychological theories and studies [Csi97], we know that there exists a mental state called “flow” that allows individuals to concentrate deeply on a specific task without noticing the surrounding environment or the time, while remaining fully aware of the current work that they are doing.
In a recent TV broadcast about cognitive brain function, several illustrative examples were given, such as a free climber who says that he is at peak performance when he completely forgets about the world and the danger associated with the climb, but fully concentrates on the rocks and all the next moves he is planning to make. It was also shown that the world record holders in speed tasks, such as stacking cubes or solving Rubik's Cube, do not use their cerebrum very intensively when executing the speed task. Only a small, but obviously efficient, part of the brain is concentrating on the task to be executed. Similar examples can be found among athletes and musicians who may occasionally get into a "groove" where performance and concentration reach a peak level.
Software developers that fully concentrate on their work also report this kind of flow, where only the relevant parts of the brain are focused on the core task. We can argue that software development is more complex and probably involves more parts of the brain than speed stacking, but it also seems that software development becomes more productive when the developer has the ability to reach flow for a large part of his or her working time.
This ability to reach and sustain flow depends partially on the developer’s own circumstances; for example, whether they get enough sleep, have little to no stress at home, and lead a safe and enjoyable life. To a large extent, this ability also depends on the concrete working circumstances. Is the room quiet? Can the developers work on their own for a long time without disturbances by phones, emails, or background noise in the room? It is also important that the developer is in a constant state of focus while thinking of the important issues that enable him or her to execute their task. In the software development context, it is helpful if the tooling aids productivity and does not slow down the focus time (e.g., compilation time should not take long). Agile development methods have the potential to capitalize on the opportunity of software developers to get into the flow and provide continuous improvement to the system they are developing. Mihaly Csikszentmihalyi also argues that the flow is very helpful to engage in creative discovery of new ideas and inventions [1]. Software development can benefit from flow because the need to identify an optimal architecture and the best structure for object interactions can be a creative activity.
Agile development methods also thrive from early and immediate feedback: The software should always be able to compile and to run the associated tests. To remain in the flow, it is helpful that compilation and test executions are quick, because otherwise developers may become distracted by the temptation to read emails, go for the next coffee, take an early lunch, or engage another co-worker in a conversation. Software development tools are vitally important for productive development and keeping developers in the flow zone.
When considering the current state of tooling for model-based software development (compared to just coding), an opportunity exists for new capabilities that help developers achieve flow. Currently, many tools are able to help with creating and editing large models, perform partial consistency checks, and generate code for multiple platforms. But in comparison with the available tooling for traditional general purpose programming languages, there is still a large gap in tool capabilities. Models are often interacted with in a monolithic form, i.e., all models are processed in batch each time a code generation request is started. The time it takes to perform code generation and model checking may cause a disruption in the flow. If a code generation process (for a large industrial model, or a set of models within the project) takes longer than drinking a cup of coffee, software developers that use model-based techniques may lose their flow of concentration. They will not get the same feeling of satisfaction that would result from a better transition across the tool usage, which may hamper productivity when delays emerge. We hope that modeling tools will improve the opportunity for developers to achieve flow through improved tool implementation, but also by better modeling languages that enhance modularity and incremental compilation.
Summary
1. Software developers greatly benefit from automation, which keeps them in flow.
2. Development tools must assist developers with comfort and efficiency and completely automate tedious jobs.
This essay is essentially taken from a SoSyM editorial, which is published under the Creative Commons licence and contains some updates:
1. [GR17c]
J. Gray, B. Rumpe:
In: Journal Software and Systems Modeling (SoSyM), Volume 16(4), pp. 927-928, Springer Berlin / Heidelberg, 2017.
2. [Csi97]
M. Csikszentmihalyi:
Basic Books, New York, 1997.
|
__label__pos
| 0.710173 |
How come the fonts FixedSys and Terminal ONLY work in Firefox and not in any other major browser?
What's the deal? Is it because they are system fonts or am I missing something?
Thanks
They don't work on any of the browsers on my Mac. – Seth Sep 11 '10 at 1:55
I'll take a stab and say they are System fonts so they will only work on Window computers. Although Macs have Lucida Console, right? – naknode Sep 11 '10 at 2:02
Are you positive it doesn't work on at least IE? – BoltClock Sep 11 '10 at 6:41
I'd like to be able to use it too. I wonder if there's a Web font available for FixedSys. – Bent Rasmussen May 8 '12 at 10:43
The fonts do not depend on the browser; they should be applied if they are present on the client machine. If you can view them in FF, they should be viewable in other browsers as well. The FixedSys font is shipped with Windows and is available there (not sure about other OSes). Make sure that you can differentiate it, or there could be some other problem. Using a reset CSS may also help.
I know this is old, but I converted http://www.fixedsysexcelsior.com/ to a Webfont a while back.. the files are here:
http://doir.ir/fixedsys
css usage is:
@font-face {
font-family: 'FixedsysExcelsior301Regular';
src: url('fsex300-webfont.eot');
src: url('fsex300-webfont.eot?#iefix') format('embedded-opentype'),
url('fsex300-webfont.woff') format('woff'),
url('fsex300-webfont.ttf') format('truetype'),
url('fsex300-webfont.svg#FixedsysExcelsior301Regular') format('svg');
font-weight: normal;
font-style: normal;
-webkit-font-smoothing:aliased;
font-smooth:never;
}
Firefox does have a little bit of trouble rendering it in some instances, but this appears to be a bug with many webfonts (fonts appear blurry when the browser is a certain size, normal at another size).
Those fonts aren't available everywhere (for example Linux). If you want to display monospaced text, use monospace as fallback. It defaults to the standard monospace font of the system.
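To make that fallback concrete, a minimal sketch of the suggestion (the specific font names are illustrative; availability varies by platform):

```css
/* Prefer Fixedsys where available, fall back to common monospace fonts,
   and finally to the system's generic monospace face. */
pre, code {
  font-family: Fixedsys, Consolas, "Courier New", monospace;
}
```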
Are there any known statements that are provably independent of $ZF + V=L$? A similar question was asked here but focusing on "interesting" statements, and all examples of statements given in that thread are not provably independent of $ZF + V=L$; they all raise the consistency strength bar. For example, the claim that "there exists an inaccessible" is independent of $ZF + V=L$ is really just an assumption. Because of Gödel's second incompleteness theorem, we cannot prove this. It is quite possible that $ZFC$ proves "there is no inaccessible". The same holds for $Con(ZF)$ or "there is a set model of ZF". Those are assumed to be independent of $ZF + V=L$, but this cannot be proved without large cardinal assumptions.
So my question is: Is there any known (not necessarily "interesting" or "natural") statement $\phi$ and an elementary proof of $Con(ZF) \Rightarrow Con(ZF + V=L + \phi) \wedge Con(ZF + V=L + \neg\phi)$? Or is there at least a metamathematical argument that such statements should exist? (Contrast this with the situation of $ZFC$ and $CH$!)
And if not: Might $ZF + V=L$ be complete in a weak sense: There is no statement provably independent of it?
What is known about this?
Does the answer to your question not depend on the exact system in which you allow the proof of the independence to happen? – Mariano Suárez-Alvarez Mar 13 '10 at 14:50
If ZF turns out to be inconsistent then no statement is provably independent of ZF+V=L because every statement is provable assuming it :P . Hence no known statements are provably independent of ZF+V=L ;-) – Kevin Buzzard Mar 13 '10 at 15:46
But Kevin, the OP asked for a proof of the implication Con(ZF) implies Con(ZF+ psi) + Con(ZF + not-psi). If Con(ZF) fails, then this implication is vacuously true. – Joel David Hamkins Mar 13 '10 at 17:24
@Joel: Serves me right for not reading the question. I feel I should delete my comment but perhaps I'll leave it and just say here that of course you're right. Let me run another one past you though while you're here though: how about letting psi be "Con(ZF+V=L)"? – Kevin Buzzard Mar 13 '10 at 17:52
Yes, that statement is what Goedel uses in the 2nd Incompleteness theorem, and it is provably equivalent to "I am not provable". The situation in general is complicated by the need for omega-consistency in Goedel's version of the Incompletness theorem, to get both sides of the indpendence. Thus, one should use the Rosser sentence instead. For example, some consistent theories T do prove their own inconsistency. e.g. ZFC+not-Con(ZFC), and so one cannot always say that Con(T) is independent of T, even when T is consistent, although Con(T) is not probable. – Joel David Hamkins Mar 13 '10 at 18:01
The Incompleteness theorem provides exactly the requested independence. (But did I sense in your question that perhaps you thought otherwise?)
The Goedel Incompleteness theorem says that if T is any consistent theory interpreting elementary arithmetic with a computable list of axioms, then T is incomplete. Goedel provided a statement σ, such as the "I am not provable" statement, which is provably equivalent to Con(T), or if you prefer, the Rosser sentence, "there is no proof of me without a shorter proof of my negation", such that T does not prove σ and T does not prove ¬σ.
This establishes Con(T) implies Con(T + σ) and Con(T + ¬σ), as you requested. [Edit: one really needs to use the Rosser sentence to make the full conclusion here.]
In particular, this applies to the theory T = ZFC + V=L, since this theory interprets arithmetic and has a computable list of axioms. Thus this theory, if consistent, is incomplete, using the corresponding sentences σ above. It is also known (by another theorem of Goedel) that Con(ZF) is equivalent to Con(ZFC + V=L). This establishes the requested implication:
• Con(ZF) implies Con(ZFC + V=L + σ) and Con(ZFC + V=L + ¬σ)
The Incompleteness theorem can be proved in a very weak theory, much weaker than ZFC or even PA, and this implication is similarly provable in a very weak theory (PA suffices).
One cannot provably omit the assumption Con(ZF) of the implication, since the conclusion itself implies that assumption. That is, the existence of an independent statement over a theory directly implies the consistency of the theory. So since we cannot prove the consistency of the theory outright, we cannot prove the existence of any independent statements. But in your question, you only asked for relative consistency (as you should to avoid this triviality), and this is precisely the question that the Incompleteness theorem answers.
Thank you! But I'm still somewhat confused: I don't think that Con(T) implies Con(T + Con(T)); otherwise the theory T + Con(T) could prove its own consistency. So one cannot take Con(ZF + V=L) as an example of a statement provably independent of ZF + V=L. Or am I missing something? – user4607 Mar 13 '10 at 18:20
In this general case, one must use the Rosser sentence. The issue you mention is exactly the difference between the Goedel Incompleteness theorem and the Goedel-Rosser Incompleteness theorem. The statement sigma you want is: "For any proof of me, there is a shorter proof of my negation". – Joel David Hamkins Mar 13 '10 at 18:41
A mouse sensor is a component that helps the computer know how far you have moved the pointer. Mouse sensors are becoming a popular upgrade choice for many gamers. The sensor collects movement information and sends it to the processor through an interface such as the I2C bus. If your mouse sensor has stopped working, this article will help you learn how to fix it or understand why your mouse no longer works.
How To Fix a Mouse Sensor Not Working?
If your mouse is not working right, it can be difficult to move the pointer accurately on your screen. To fix a mouse sensor, you will need a spare mouse and a small screwdriver. Since there are so many different kinds of mice, it can be hard to find information for your exact model, so here is a quick guide to help you get started fixing your computer mouse.
Reasons
Bad Surface or Dirt on the Mouse
The most common reason for a cursor being stuck on a screen is an unclean mouse. If you have a new mouse, make sure to clean it after each use and before storing it away. You can do this by wiping down the surface with a cloth or paper towel, or just using water and mild soap. Your computer should be able to detect any foreign objects on the surface of your mouse (tape, dirt, etc.) and warn you if there is something wrong with it. If you don’t see this warning message, try cleaning your mouse again until it does.
Bad Wireless Connection
There are many reasons why your mouse may not be working properly, and a poor wireless connection is among the most common. You can often resolve the issue by turning the mouse off and on again and re-pairing it with its receiver. If that doesn't work, try plugging the receiver directly into a USB port on your computer, or unplug the mouse and reconnect it.
Failing Batteries
If everything else fails, try replacing the batteries in your mouse with fresh ones, even if the old ones still seem good.
Steps to Solve Problems
A faulty mouse sensor is often one of the easier computer problems to fix, and the sensor on a mouse can be fixed in several ways.
Check the Mouse Pad
First, check the mouse pad. The mouse on your computer has a sensor on the bottom. It ought to be dust- and debris-free. It can be causing issues if your mouse pad has a hard surface. To check whether it fixes the problem, try shifting it about or even taking it off completely. After examining these possibilities, if your mouse’s functionality is still off, consider unplugging it for a while and then plugging it back in. This may reset its settings to default, which will fix any issues with tracking or sensitivity settings.
Check the Mouse Skates
The mouse skates are the part of the mouse that glides when you move it. If they're dirty or damaged, the mouse may stop working properly and will be more prone to catching on things and causing unintentional clicks. To check for a dirty skate, rub a little bit of soap onto the edge of one skate and then rub it into the center of its corresponding roller. If they are dirty on both sides, clean them with a toothbrush or any comparable item. If there's only one dirty skate, use an old toothbrush or similar tool to scrub out the dirt around that roller. Again, if you find any dirt anywhere else in your mouse's body, clean it thoroughly with an appropriate cleaning solution.
Update Mouse Drivers
If there is an issue with your mouse, the first thing to do is update the drivers. If you’re on Windows, go to Device Manager, click Scan for hardware changes, then locate and update the driver for your mouse. If you’re running Mac OS X, simply restart your computer and launch System Preferences. Click Mouse in the sidebar and then click Sleep on the left-hand side of the window. If you have a wireless mouse and there are no updates available for it, try using another USB cable instead of one that connects directly to your computer.
Check USB Port
If your USB port isn’t working properly, it could be due to dust or debris clogging up your ports. To remove any junk that may have been collected inside the ports, use pressurized gas. Try disconnecting your mouse for a while, then plugging it back in again if it doesn’t work.
Check if the mouse is overheated or too cold
If you feel like your mouse is getting warm when it’s plugged in but not when you unplug it, then there might be a problem with either the keyboard or the mouse itself—or both! If this seems like a problem happening now and then but isn’t something that happens all the time, try plugging in another USB cable that’s known to work with your computer (and make sure it’s not being used by any other device).
Check for Software Conflicts
If you have multiple programs running at once, there may be conflicts between them that prevent one program from working properly with another. For example, if you have Microsoft Word open and try to use Internet Explorer, Internet Explorer may not respond correctly because it’s trying to use the same settings as Microsoft Word. That’s why it’s important to close other programs before you start troubleshooting something like this.
Defective Mouse
If you have a defective mouse, the best way to fix it is to replace the entire unit. If this is not possible due to cost or other reasons, try replacing the individual components. A defective sensor can be fixed by replacing the cable or switch that connects the mouse to your computer. If you have a virus on your computer, try running a full scan of your system with an anti-virus program.
Defective Motherboard
The motherboard is the heart of your computer, and it’s responsible for keeping everything else in check. If there’s a problem with the motherboard, it can prevent your mouse from working properly. You may have a defective motherboard if you’re experiencing issues with your mouse not working at all, or if it only works sometimes or intermittently. If you have checked all of the other troubleshooting steps and still have issues with your mouse, you should replace the motherboard.
Disconnect External Devices
If you’re not getting any response when you click on the mouse, or if you’re having trouble scrolling through pages and menus, it might be time to disconnect external devices from your computer. This is significant since there’s a risk that they might be interfering with your mouse. If your mouse isn’t working properly, this may be the culprit. Disconnecting devices can usually be done by simply unplugging them from the back of your computer. If you need to disconnect something that isn’t powered by USB, such as a printer or scanner, then make sure that the device has its power source and cable. Try restarting your machine and looking to see if anything else is linked if this doesn’t resolve the issue.
How to Fix Optical Mouse Sensor?
Optical mouse sensors are usually found in most mice. The optical mouse sensor is a light-sensitive device that uses an LED to determine the position of the mouse cursor. Because it relies on light, the optical sensor can be affected by dust or even a dark room. If the sensor stops working properly or if your mouse feels “jittery” or “uneven” when moving around, then you may have a problem with your optical sensor. You can fix your optical mouse sensor by cleaning it thoroughly and then testing it again. You can clean your optical mouse sensor with compressed air or by using a cotton swab dampened with alcohol to remove any debris from within the housing of your mouse. Make sure that all sides of your optical sensor are clean before proceeding with testing to ensure that there aren’t any dirt particles inside of it preventing proper function. Once you have cleaned out all of the internal components of your mouse, test them out one last time before replacing them in their original location. You should be able to see if there is still any interference from other parts of your computer system or if there are any issues with movement as well as speed accuracy when using this particular piece.
Conclusion
And that’s it! With a quality replacement and some patience, you’ll be in business again. The steps above should help get you there, but keep one thing in mind: if your mouse still isn’t working right after you try these solutions, it’s likely dead and beyond repair. At that point, you’re probably best off buying a new mouse altogether. Hopefully, this information is enough to help you solve your mouse sensor problems.
FAQs
What is a mouse sensor?
A mouse sensor is the part of a mouse that detects movement. It sits on the underside of the mouse and tracks how the mouse moves across a surface.
Which mouse sensor is the best?
Sensors come in two varieties: optical and laser. Optical sensors use an LED as their light source, while laser sensors use an infrared laser. You can choose whichever you prefer; some mice even offer both.
Is it possible to fix a mouse sensor?
Yes, it is often possible to fix a mouse sensor. Common fixes include replacing the sensor, cleaning or lubricating it, and replacing components such as batteries or cables.
Can I extend the life of my mouse by cleaning it?
Yes, by giving your mouse regular cleanings, you can increase its lifespan. Your mouse needs a good cleaning at least once every two weeks. You can also use a toothbrush or other small tool to remove debris from between the scroll wheel and the center button. If you use your mouse with more than one computer, test it on each of them after cleaning so that you can be sure everything is working properly when you’re done.
How do I lubricate my mouse?
You can lubricate your mouse at any time that you’re not using it. The best way is with an oil-based lubricant such as WD-40, which is safe for use around electronics like mice and keyboards. Apply just enough pressure to get some oil out of its container, but not so much that it spills over.
Mathematical Programming Test
Here are some of the most common formula-based mathematical programs. This programming test is basically for checking mathematical knowledge and Java operator precedence when implementing these formulae.
1. Program to calculate the area of rectangle
Input: Width = 10; Height = 5
Output:
Area of Rectangle = Width * Height
= 10 * 5
= 50
2. Program to calculate the volume of sphere
Input: Radius = 48; Pie = 22.0/7
Output:
Volume = (4.0/3.0) * pie * (radius * radius * radius);
= (4.0/3.0) * (22.0/7) * 48 * 48 * 48
= 463433.142857
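As a sketch (not code from the original post), item 2 translates to Java as follows. The key operator-precedence point is that (4.0/3.0) forces floating-point division, while the integer expression 4 / 3 truncates to 1. The printed result corresponds to pie = 22/7, which is what the post's expected output matches.

```java
// Illustrative sketch: volume of a sphere, showing why the formula
// writes (4.0/3.0) rather than (4/3).
public class SphereVolume {
    public static double volume(double radius, double pie) {
        return (4.0 / 3.0) * pie * radius * radius * radius;
    }

    public static void main(String[] args) {
        double pie = 22.0 / 7; // the printed output corresponds to pi ~ 22/7
        double radius = 48;
        System.out.printf("Volume = %f%n", volume(radius, pie));
        // Pitfall: with integer operands, 4 / 3 truncates to 1, so the
        // (4/3) factor silently disappears from the result.
        double wrong = (4 / 3) * pie * radius * radius * radius;
        System.out.printf("With integer 4/3 = %f%n", wrong);
    }
}
```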
3. Program find the area of the pentagon
Input: s = 13; a = 5; (s = side length, a = apothem)
Output:
Area of Pentagon = (5.0/2.0) * s * a
= (5.0/2.0) * 13 * 5
= 162.5
4. Program to find the area of parallelogram
Input: base = 4; height = 18;
Output:
Area of Parallelogram = base * height;
= 4 * 18
= 72
5. Program to find the area of square
Input: a = 13
Output:
Area of Square = a^2
= 13^2
= 169
6. Program to find the surface area of sphere
Input: Radius = 37, Pie = 22.0/7
Output:
Surface Area = 4 * pie * (radius * radius);
= 4 * (22.0/7) * 37 * 37
= 17210.285714
7. Program to find the volume of cone
Input: Radius = 38, Height = 35, Pie = 3.14
Output:
Volume = pie * radius * radius * height / 3;
= 3.14 * 38 * 38 * 35 / 3
= 52898.533333
8. Program to find the volume of the cube
Input: side = 4
Output:
Volume of cube = side^3
= 4^3
= 64
9. Program to find the volume of cylinder
Input: radius (r) = 38, height (h) = 35, pie = 3.14
Output:
Volume of the cylinder = pie * radius^2 * height
= 3.14 * 38 * 38 * 35
= 158695.600000
10. Program to calculate the CGPA percentage
CGPA percentage is = (float)(9.5 * (CGPA));
Input:
CGPA = (Grades in all Subjects) / (Total Number of Subjects).
English = 9.1;
Hindi = 8.5;
Maths = 9.5;
Science =9.6;
SocialStudy = 8.6;
CGPA = (9.1+8.5+9.5+9.6+8.6)/(5.0);
Output: CGPA percentage is = 86.070000
11. Program to convert Celsius into Fahrenheit
Temperature in Fahrenheit = ((celsius * 9) / 5) + 32
Input: celsius = 12
Output: Temperature in Fahrenheit = 53.6
12. Program to convert days into years
Input: days = 799;
Output:
Number of years = days / 365;
= 799 / 365
= 2
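A minimal Java sketch of item 12 (class name is illustrative) shows why the answer is 2: dividing one int by another truncates toward zero, and the % operator recovers the leftover days.

```java
public class DaysToYears {
    public static void main(String[] args) {
        int days = 799;
        int years = days / 365;      // integer division truncates: 2
        int remaining = days % 365;  // leftover days: 69
        System.out.println("Number of years = " + years);
        System.out.println("Remaining days = " + remaining);
    }
}
```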
13. Program to convert Fahrenheit into Celsius
Temperature in Celsius = ((Fahrenheit-32)*5)/9
Input: Fahrenheit = 54
Output: Temperature in Celsius = ((54-32)*5)/9 = 12.22222
14. Program to find the area of an equilateral triangle
Input: side (a) = 5
Output:
Area of Equilateral Triangle = ( 1.73 * a * a) / 4
= ( 1.73 * 5 * 5) / 4
= 10.81250
15. Program to find the area of a triangle
Input: b = 5; h = 12
Output: Area of Triangle = (b * h) / 2
= (5 * 12) / 2
= 30.0
16. Program to find the area of the right angle triangle
Input: b = 5; h = 8
Output: Area of Triangle = (b * h) / 2
= (5 * 8) / 2
= 20.0
17. Program to find the perimeter of the rectangle
Input: a = c = 5
b = d = 4
Output: Perimeter of Rectangle = 2 * (a + b);
= 2 * (5 + 4)
= 18.00000
18. Program to find the simple interest
Simple Interest = (P × R × T) / 100
Input: P = 34000, R = 30, T = 5
where P = Principal Amount, R = Rate per Annum, T = Time (years)
Output: Simple Interest = 51000.000
19. Program to find the surface area of a cube
Surface Area Of Cube = 6 ( a * a )
Input: a = 5
where a (side) = length = breadth = height
Output: Surface Area Of Cube = 6 * 5 * 5 = 150.00000
20. Program to find the surface area of a cuboid
Input: l = 2, w = 3, h = 5;
where l = length, w = width and h = height.
Output:
Surface Area of Cuboid = 2 * (l * w + w * h + h * l)
= 2 * (2 * 3 + 3 * 5 + 5 * 2)
= 62.00000
21. Program to find the surface area of the cylinder
Surface Area of Cylinder = 2 Π r (r + h)
Input: r = 2.0, h = 5.0
Output:
Surface Area of Cylinder = 2 Π r (r + h)
Here, r = radius, h = height, and Π ( pie ) = 22/7
= 2 * (22.0/7) * 2.0 * (2.0 + 5.0)
= 88.00000
22. Write a Java program to compute the area of a hexagon.
Area of a hexagon = (6 * s^2)/(4*tan(π/6))
where s is the length of a side
Input: Input the length of a side of the hexagon: 6
Output: The area of the hexagon is: 93.53074360871938
23. Write a Java program to compute the area of a polygon.
Area of a polygon = (n*s^2)/(4*tan(π/n))
where n is n-sided polygon and s is the length of a side
Input:
Input the number of sides on the polygon: 7
Input the length of one of the sides: 6
Output: The area is: 130.82084798405722
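Items 22 and 23 use the same regular-polygon formula, so one sketch covers both (a hexagon is just n = 6). Java's Math.tan expects radians, which is why Math.PI / n is passed directly. Class and method names are illustrative.

```java
public class PolygonArea {
    // Area of a regular n-gon with side length s: (n * s^2) / (4 * tan(pi/n))
    public static double area(int n, double side) {
        return (n * side * side) / (4 * Math.tan(Math.PI / n));
    }

    public static void main(String[] args) {
        System.out.println("The area of the hexagon is: " + area(6, 6));
        System.out.println("The area is: " + area(7, 6));
    }
}
```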
24. Write a Java program to compute the distance between two points on the surface of earth.
Distance between the two points [ (x1,y1) & (x2,y2)]
d = radius * arccos(sin(x1) * sin(x2) + cos(x1) * cos(x2) * cos(y1 - y2))
Radius of the earth r = 6371.01 Kilometers
Input:
Input the latitude of coordinate 1: 25
Input the longitude of coordinate 1: 35
Input the latitude of coordinate 2: 35.5
Input the longitude of coordinate 2: 25.5
Output: The distance between those points is: 1480.0848451069087 km
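A Java sketch of item 24 (class and method names are illustrative): the quoted formula takes degrees as input, but Java's trig functions expect radians, so Math.toRadians is applied first.

```java
public class GreatCircle {
    static final double EARTH_RADIUS_KM = 6371.01;

    // Great-circle distance between (lat1, lon1) and (lat2, lon2), in degrees.
    public static double distance(double lat1, double lon1,
                                  double lat2, double lon2) {
        double x1 = Math.toRadians(lat1), y1 = Math.toRadians(lon1);
        double x2 = Math.toRadians(lat2), y2 = Math.toRadians(lon2);
        return EARTH_RADIUS_KM * Math.acos(
                Math.sin(x1) * Math.sin(x2)
              + Math.cos(x1) * Math.cos(x2) * Math.cos(y1 - y2));
    }

    public static void main(String[] args) {
        System.out.println("The distance between those points is: "
                + distance(25, 35, 35.5, 25.5) + " km");
    }
}
```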
Is there anything like an Internal class in Java?
In C# you can mark a class as
internal
so that it is only accessible from within the same package. Is there anything similar in Java?
Answer
You can create package-private classes by omitting the access modifier (public, private) from the class's declaration.
package com.sample;
class MyPackagePrivateClass
{
...
}
Category: Aspect Oriented Programming
AOP
Executing on the Background Thread with PostSharp
In my last two PostSharp blog posts, I looked at creating some custom attributes that you could use to get custom behavior out of PostSharp. Of course, I was implementing simple functionality, and in fact PostSharp had most if not all of those features already built in.
Today, I want to dig in a little more to some of the built in power of PostSharp. To do this, I had to sign up for the 45 day evaluation license for PostSharp Ultimate. I had been putting that off to do as much “free stuff” as I could, but I really wanted to see what PostSharp could do, so I fired up the evaluation.
I created a very simple UI that looks like this:
Simple "Long Running Process" UI
When you click the buttons, I call a class that just sleeps for three seconds and returns to simulate some slow process.
public void LongRunningOperationBlocking()
{
Thread.Sleep(3000);
return;
}
I created code that will run this blocking process and then when the UI thread is free again, it displays a message box with whatever was in the textbox. Here is the UI in action. Note that I cannot select the text after clicking the button because the UI is locked up for 3 seconds.
Blocking Fail
However, with PostSharp, we can easily execute these long running methods on a background thread. As long as the method returns void, the context menu will reveal the “Execute method in the background” option like this:
The PostSharp Context Menu Showing Background Thread Option
After selecting that option, you get this wizard:
The PostSharp Add a Feature Wizard
The sum total of the “visible” changes is that our other method got a new attribute, “[Background]”
[Background]
public void LongRunningOperationBackground()
{
Thread.Sleep(3000);
return;
}
But, the win is that now when I click the button, the UI returns immediately and we see the message box right away, even while the long running process is going.
Blocking Fail
That is a gigantic amount of threading code that we don’t have to write. It is only probably 4-6 lines per method, but you have to write the same thing over and over again for every method you want to have do work on a background thread. Of course, any time you can just cut out “boilerplate code”, you make your code not only easier to write, but easier to read and discern intent. And that is “A Good Thing™”!
As always, you can download the code for this series from my Github repository for it. The path to this post’s code is here.
AOP
Parameter Checking with PostSharp
Last time, I started off my PostSharp series by adding some logging coming in and out of a method. It was very easy to do and it was nice to have a single point of code that wasn’t copy-paste repeated all over the codebase. This time, I want to look at what it might take to check the parameters that come into the method.
As Matt Groves pointed out in the comments of my last post, Phil Haack created something called NullGuard to do the kind of thing that I’m endeavoring to do here. In addition, PostSharp themselves created the ability to do just this very thing with the release of PostSharp 3, but it is only available in the paid editions. You can see an explanation in detail here.
Smart people don’t roll their own code for things that are already “solved problems” (I ranted about this in a recent podcast). However, I’m going to tackle this simple problem a few different ways in order to demonstrate how you can create your own more complex rules with PostSharp.
I created a class called NoNullParamsAttribute.cs and its code is as follows
using PostSharp.Aspects;
using System;
using System.Text;
namespace IntroToAop2
{
[Serializable]
public class NoNullParamsAttribute : OnMethodBoundaryAspect
{
public override void OnEntry(MethodExecutionArgs args)
{
var messages = new StringBuilder();
for (int i = 0; i < args.Arguments.Count; i++)
{
if (args.Arguments.GetArgument(i) == null)
{
messages.AppendFormat("Parameter \"{0}\" cannot be null. ", args.Method.GetParameters()[i].Name);
}
}
if (messages.Length > 0)
{
throw new ArgumentException(messages.ToString());
}
}
}
}
Now, we can call that above one of our methods from last week like this
[NoNullParams]
public string ReverseString(string input)
{
var inputArray = input.ToCharArray();
Array.Reverse(inputArray);
return new string(inputArray);
}
Now when I call it passing in null, I get an argument exception with the message ‘Parameter “input” cannot be null.’. I could obviously do the same check to make sure that it was not empty, or that all integers were greater than zero, and so on. Of course, the examples I mentioned earlier are much more sophisticated, however today we’ve seen how to iterate over method parameters, get their name, and examine them. Not too bad for a day’s work!
All of the code can be found on my GitHub repo for this series under IntroToAop2.
AOP
My Intro to AOP with PostSharp
On the last episode of my podcast, I interviewed Matt Groves about Aspect Oriented Programming (AOP). Talking with him inspired me enough to really make a go of it and spend some time with it to see if I could get comfortable with it. I’m going to try to use PostSharp because I like how it works and because (to be honest) they helped promote that last podcast, and that makes me like their style 😉
As Matt talked about in the podcast, AOP is basically a kind of design pattern. Essentially, you find stuff that is in your code and in a lot of methods, but doesn’t ACTUALLY pertain to the method. Common examples are logging, error handling, cache checks, and parameter checking, to name a few. Once you isolate this code that is basically boilerplate code that is used over and over again throughout your methods, you pull that code out and allow it to exist in only one place.
Frameworks like PostSharp act as post-compilers, so you write your code, add some attributes, and PostSharp will take an extra compiler pass through your code and write the code in for you, as if you had done it yourself. This allows for no additional runtime overhead, as the code works out to just basically be your standard code you would have written anyway.
To get started, head over to PostSharp’s site and download their product. It gets added as a Visual Studio extension and – as of today – you get a 45 day license to their full product when you download it. After that, it will revert to the free version. The free version still has a ton of great features and everything I’m doing today will work on only the free version.
To start with, I’m only going to make a console application that just does some simple string manipulation. You can see the entire project in its completed state on GitHub here.
After I created the project, I had to right click the project and select Add PostSharp to Project. This added a reference to PostSharp’s .dll and set PostSharp up in my packages config. Now, we can start making our first aspect. I’m going to tackle the “low hanging fruit” of some very simple logging first.
I’m going to keep this very simple, so I’m just going to do Console.WriteLines instead of complicating this project by integrating a complicated logging framework just to show an example. I have a Utility class with a ReverseString method. In addition to the “business logic” of actually reversing the string, I am also writing out every entry and exit to and from the method.
public class Utility
{
public string ReverseString (string input)
{
Console.WriteLine("Entering ReverseString at {0}", DateTime.Now.Ticks.ToString());
var inputArray = input.ToCharArray();
Array.Reverse(inputArray);
Console.WriteLine("Leaving ReverseString at {0}", DateTime.Now.Ticks.ToString());
return new string(inputArray);
}
}
Calling that method with some test data gives me this output:
Original Results
Now, every method I’d write, I’d have to dirty up with those WriteLines. It clutters up the code and makes it that much harder to understand. Let’s pull that out into an Aspect.
I add a plain C# class to my project called WriteLineAttribute.cs. The code for it is here:
using PostSharp.Aspects;
using System;
namespace IntroToAop1
{
[Serializable]
public class WriteLineAttribute : OnMethodBoundaryAspect
{
public override void OnEntry(MethodExecutionArgs args)
{
Console.WriteLine("Entering {0} at {1}", args.Method.Name, DateTime.Now.Ticks.ToString());
}
public override void OnExit(MethodExecutionArgs args)
{
Console.WriteLine("Leaving {0} at {1}", args.Method.Name, DateTime.Now.Ticks.ToString());
}
}
}
Some things to point out. You have to mark the class as Serializable. This is just due to how PostSharp manages these things. Secondly, you just have to inherit from OnMethodBoundaryAspect. That is one of the built in Aspects that PostSharp offers for exactly what we want to do, operating coming in and out of methods. Now, to get my behavior, I just override the OnEntry and OnExit methods and put the Console.WriteLines in that I had before (making the method name dynamically generated from the arguments passed in to me).
Now, my ReverseString method looks like this:
[WriteLine]
public string ReverseString (string input)
{
var inputArray = input.ToCharArray();
Array.Reverse(inputArray);
return new string(inputArray);
}
The WriteLines are gone and I just added the [WriteLine] attribute at the top of my method. Running that gives me this output:
After Results
Okay, maybe this intro isn’t “blow your mind impressive”, but it does show you some of the power to make clean code that PostSharp offers you. If I wanted to log something new, I only have to change it in one place. If I wanted to make that logging come on and off based on a compile flag or a configuration value, again it is a single point of change. And the fact that my results are exactly the same (except for the timestamps) means that I had a successful refactor while cleaning up my code.
Again, if you want to play around with the code, you can find it on GitHub. I’ve added some more string manipulation methods just to further show how much nicer the code will look “PostSharp’ed”.
Next time, I’m going to use PostSharp to validate the parameters on my methods so that they don’t blow up when you pass in null values!
The Semantic Web continues to break new ground, and Web 3.0 seems to be a term that people associate with it. In the backwaters of semantics, microformats aims to develop standards to embed semantic information into XHTML. I can’t help thinking that’s strange.
One of the principles of microformats is to “design for humans first, machines second”. Still, almost all formats are about adding span tags and class or rel attributes to existing XHTML. Humans will never see those, or benefit from them, unless there’s some kind of machine parsing done on top. Microformats were first built by people working with the blog search engine technorati, one of the reasons being to make it easier for technorati to aggregate data from those blogs. So machines it is.
Thing is, if you’re going to give information to machines, why not use vCard instead of the equivalent microformat hCard? hCard is just a translation of vCard into XHTML. vCards open in your e-mail program, allowing you to save the contact information there, hCards don’t open anywhere. vCards are also just as easy (probably easier) to crawl and parse as microformats.
So what I’m saying is, could we please use real formats instead of microformats?
Update: This article was too fuzzy, so let me clarify: This discussion is about embedded formats vs. formats. The “vs.” come from the fact that lots of sites that implement microformats choose not to implement the corresponding format, which in some cases lead to people not being able to use the extra information.
Publication number: US7958152 B2
Publication type: Grant
Application number: US 12/140,931
Publication date: Jun 7, 2011
Priority date: Aug 17, 2005
Fee status: Lapsed
Also published as: CN1917522A, CN100592748C, US7403949, US8554744, US20070043734, US20080243942, US20110196845
Inventors: David Maxwell Cannon, Colin Scott Dawson, Robert Clair Edwards, Jr.
Original Assignee: International Business Machines Corporation
US 7958152 B2
Abstract
Provided are a method, system, and article of manufacture, wherein a data structure corresponding to a set of client nodes selected from a plurality of client nodes is generated. Objects from the selected set of client nodes are stored in the data structure. A determination is made that an object corresponding to a client node of the selected set of client nodes has to be stored. An additional determination is made as to whether the object has already been stored in the data structure by any client node of the selected set of client nodes. The object is stored in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes.
Images(11)
Previous page
Next page
Claims(12)
1. A system, wherein a server node is coupled to a plurality of client nodes, the system comprising:
memory; and
processor coupled to the memory, wherein the processor is operable to:
(i) generate a data structure corresponding to a set of client nodes selected from the plurality of client nodes;
(ii) store, in the data structure, objects from the selected set of client nodes;
(iii) determine that an object corresponding to a client node of the selected set of client nodes has to be stored;
(iv) determine whether the object has already been stored in the data structure by any client node of the selected set of client nodes; and
(v) store the object in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes, wherein the selected set of client nodes share common objects that are stored in the data structure on the server node, and wherein redundant files are eliminated by storing the common objects of the selected set of client nodes in the data structure generated on the server node.
2. The system of claim 1, wherein the processor is further operable to:
store database entries for the objects stored in the data structure corresponding to the selected set of client nodes, wherein the database entries include metadata corresponding to the objects; and
include in the metadata, insertion time for the objects and the number of client nodes that reference the objects, wherein the number of objects stored in the data structure is limited based on insertion times for the objects and the number of client nodes that reference the objects.
3. The system of claim 2, wherein the processor is further operable to:
receive, from one client, a request to delete one object;
dereference a pointer to the one object stored in the data structure, wherein the pointer is associated with inventory entries specific to the one client;
decrement the number of client nodes that references the one object; and
delete the one object from the data structure, in response to determining that no client nodes reference the one object.
4. The system of claim 1, wherein the server node and the set of client nodes can communicate in a client server environment and in a peer to peer environment.
5. An article of manufacture for controlling a server node that is coupled to a plurality of client nodes, wherein the article of manufacture is capable of causing operations, the operations comprising:
generating a data structure corresponding to a set of client nodes selected from the plurality of client nodes;
storing, in the data structure, objects from the selected set of client nodes;
determining that an object corresponding to a client node of the selected set of client nodes has to be stored;
determining whether the object has already been stored in the data structure by any client node of the selected set of client nodes; and
storing the object in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes, wherein the selected set of client nodes share common objects that are stored in the data structure on the server node, and wherein redundant files are eliminated by storing the common objects of the selected set of client nodes in the data structure generated on the server node.
6. The article of manufacture of claim 5, wherein the article of manufacture is a computer readable medium.
7. The article of manufacture of claim 5, the operations further comprising:
storing database entries for the objects stored in the data structure corresponding to the selected set of client nodes, wherein the database entries include metadata corresponding to the objects; and
including in the metadata, insertion time for the objects and the number of client nodes that reference the objects, wherein the number of objects stored in the data structure is limited based on insertion times for the objects and the number of client nodes that reference the objects.
8. The article of manufacture of claim 7, the operations further comprising:
receiving, from one client, a request to delete one object;
dereferencing a pointer to the one object stored in the data structure, wherein the pointer is associated with inventory entries specific to the one client;
decrementing the number of client nodes that references the one object; and
deleting the one object from the data structure, in response to determining that no client nodes reference the one object.
9. The article of manufacture of claim 5, wherein the server node and the set of client nodes can communicate in a client server environment and in a peer to peer environment.
10. A system, comprising:
a server node;
a plurality of client nodes coupled to the server node;
means for generating a data structure corresponding to a set of client nodes selected from the plurality of client nodes;
means for storing, in the data structure, objects from the selected set of client nodes;
means for determining that an object corresponding to a client node of the selected set of client nodes has to be stored;
means for determining whether the object has already been stored in the data structure by any client node of the selected set of client nodes; and
means for storing the object in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes, wherein the selected set of client nodes share common objects that are stored in the data structure on the server node, and wherein redundant files are eliminated by storing the common objects of the selected set of client nodes in the data structure generated on the server node.
11. The system of claim 10, further comprising:
means for storing database entries for the objects stored in the data structure corresponding to the selected set of client nodes, wherein the database entries include metadata corresponding to the objects; and
means for including in the metadata, insertion time for the objects and the number of client nodes that reference the objects, wherein the number of objects stored in the data structure is limited based on insertion times for the objects and the number of client nodes that reference the objects.
12. The system of claim 11, further comprising:
means for receiving, from one client, a request to delete one object;
means for dereferencing a pointer to the one object stored in the data structure, wherein the pointer is associated with inventory entries specific to the one client;
means for decrementing the number of client nodes that references the one object; and
means for deleting the one object from the data structure, in response to determining that no client nodes reference the one object.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of application Ser. No. 11/206,467 filed on Aug. 17, 2005, now U.S. Pat. No. 7,403,949, which is incorporated herein by reference in its entirety.
BACKGROUND
1. Field
The disclosure relates to a method, system, and article of manufacture for the elimination of redundant objects in storage systems.
2. Background
A storage management application, such as IBM* Tivoli* Storage Manager* (TSM), may be implemented on a storage management server. The storage management application may manage storage requirements for a plurality of client nodes that are coupled to the storage management server via a network.
* IBM, Tivoli, and Tivoli Storage Manager are trademarks or registered trademarks of IBM Corporation.
The storage management application may create and manage a repository for data and programs that are backed up, archived, migrated, or otherwise copied from the client nodes to the storage management server. The storage management server may store data objects, such as files, in one or more storage pools and may use a database stored in the storage management server for tracking information about the stored data objects.
The storage management application may perform incremental backup, incremental archiving, migration, or incremental copying of data from the client nodes to the storage management server. For example, if the storage management application comprises a backup application then the backup application may perform incremental backup operations in which files are backed up only if the files have changed since a previous, periodic full backup, where the periodic full backups may be made on a weekly, monthly or some other periodic basis. TSM extends incremental backup by using a “progressive incremental backup” in which objects are backed up once and then never backed up again unless the objects undergo modifications on a client node. The progressive incremental approach for backups, archiving, or copying of data, etc., may reduce the amount of data that has to be copied or moved to the storage management server from the client nodes, and can reduce network traffic and storage space requirements over the incremental approach for backups, archiving, or copying of data. The progressive incremental backup approach may use a database that tracks information about every stored object and the location at which each object is stored.
In certain computing environments, different client nodes may store the same files in the storage management server. For example, client nodes may have the same operating system files or different people working on the same project may store the same document locally on different client nodes. The storage of the same data object in different client nodes may introduce redundancy for backups, archiving, migration, copying, etc., by backing up and storing the same files from different client nodes on the storage management server, and may lead to inefficiencies even in systems using the progressive incremental approach or certain other approaches.
SUMMARY OF THE DESCRIBED EMBODIMENTS
Provided are a method, system, and article of manufacture, wherein a data structure corresponding to a set of client nodes selected from a plurality of client nodes is generated. Objects from the selected set of client nodes are stored in the data structure. A determination is made that an object corresponding to a client node of the selected set of client nodes has to be stored. An additional determination is made as to whether the object has already been stored in the data structure by any client node of the selected set of client nodes. The object is stored in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes.
In certain embodiments, the plurality of client nodes and the server node are computational devices, wherein the selected set of client nodes share common objects that are stored in the data structure.
In additional embodiments, database entries are stored for the objects stored in the data structure corresponding to the selected set of client nodes, wherein the database entries include metadata corresponding to the objects. Insertion time for the objects and the number of client nodes that reference the objects are included in the metadata.
In yet additional embodiments, a request to delete one object is received from one client. A pointer is dereferenced to the one object stored in the data structure, wherein the pointer is associated with inventory entries specific to the one client. The number of client nodes that references the one object is decremented. The one object is deleted from the data structure, in response to determining that no client nodes reference the one object.
In further embodiments, the number of objects stored in the data structure is limited based on insertion times for the objects and the number of client nodes that reference the objects.
In still further embodiments, the server node and the set of client nodes can communicate in a client server environment and in a peer to peer environment, wherein redundant files are eliminated by storing common objects of the set of client nodes in the data structure generated on the server node.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
FIG. 1 illustrates a block diagram of a computing environment in accordance with certain embodiments;
FIG. 2 illustrates a block diagram of common objects of a selected set of client nodes, in accordance with certain embodiments;
FIG. 3 illustrates block diagrams of exemplary common objects corresponding to exemplary client nodes, in accordance with certain embodiments;
FIG. 4 illustrates an exemplary database for maintaining inventory entries for objects, in accordance with certain embodiments;
FIG. 5 illustrates a flowchart that shows the processing of backup, deletion, and information requests, in accordance with certain embodiments;
FIG. 6 illustrates a flowchart that shows the processing of backup requests, in accordance with certain embodiments;
FIG. 7 illustrates a flowchart that shows the processing of deletion requests, in accordance with certain embodiments;
FIG. 8 illustrates a flowchart that shows how to maintain limits on the number of common objects, in accordance with certain embodiments;
FIG. 9 illustrates a flowchart that shows the elimination of redundant objects for a selected set of client nodes, in accordance with certain embodiments; and
FIG. 10 illustrates the architecture of a computer system, wherein in certain embodiments the server and the clients of the computing environment of FIG. 1 may be implemented in accordance with the architecture of the computer system.
DETAILED DESCRIPTION
In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments. It is understood that other embodiments may be utilized and structural and operational changes may be made. For example, while the following description describes embodiments with reference to a backup of data, it is understood that alternative embodiments may be utilized for archiving of data, migration of data, etc.
Redundant Objects in Storage Management Systems
Certain storage management applications reduce redundancy by recognizing objects that have previously been backed up by other client nodes and by not backing up multiple instances of the same object from a plurality of client nodes. While such an approach may reduce redundancy, extensive processing may be required to determine whether a particular object has already been backed up by any other client node.
Certain storage management applications select a model node from a plurality of client nodes, where the model node manages which objects are common or candidates for being redundant for a plurality of other client nodes. Before a client node backs up an object to a server, the client node checks to determine whether the model node has already backed up the object to the server. As a result, redundant objects may be reduced on the server.
If a model node is maintained, then an object may not be considered redundant if the object is not found in the model node. For example, consider an enterprise with 10,000 employees with a first node labeled FRED that corresponds to data handled by an employee named Fred and a second node labeled JOE that corresponds to data handled by an employee named Joe, where the enterprise also has a model node named MODEL_A. In such a case, objects backed up by node MODEL_A may become the candidate objects that can be eliminated from backups by nodes FRED and JOE. However, nodes FRED and JOE may be used on the same project and may have a common set of objects between them. If there is a single model node MODEL_A, then, while backing up data, nodes FRED and JOE would not have the common set of objects eliminated from the backup if the common set of objects were not backed up by the node MODEL_A. For an enterprise with 10,000 employees it may not always be desirable to have MODEL_A back up the objects that would be common to FRED and JOE, because the objects that are common to nodes FRED and JOE may not be considered relevant across the 10,000 employees of the enterprise.
Certain embodiments do not use a dedicated model node to declare the objects that are considered candidates for redundant object elimination, but instead use a selected set of client nodes that may be referred to as an “adaptive model node”. The adaptive model node looks at relationships between end-user nodes and the objects being populated to a backup server by these end-user nodes, and dynamically determines redundancy between the objects.
Elimination of Redundant Objects
FIG. 1 illustrates a block diagram of a computing environment 100 in accordance with certain embodiments. At least one computational platform 102, where in certain embodiments the computational platform 102 may comprise a server, is coupled via a network 104 to a plurality of computational platforms 106 a, 106 b, . . . , 106 i, . . . , 106 n, where in certain embodiments the plurality of computational platforms 106 a . . . 106 n may comprise client nodes. While FIG. 1 indicates the computational platform 102 to be a server, and the computational platforms 106 a . . . 106 n to be client nodes, in alternative embodiments the computational platforms 102, 106 a . . . 106 n may comprise any suitable computational platform, including those presently known in the art, such as personal computers, workstations, mainframes, midrange computers, network appliances, laptop computers, telephony devices, blade computers, hand held computers, etc. The server 102 may also be referred to as a server node and the client nodes 106 a . . . 106 n may also be referred to as clients. Alternative embodiments may be implemented in a computing environment that is not based on a client-server paradigm. For example alternative embodiments may be implemented in a peer-to-peer networked environment.
In certain embodiments, a group of selected client nodes, such as client nodes 106 a . . . 106 i, is designated as a selected set of client nodes 108. While FIG. 1 shows client nodes 106 a . . . 106 i as being designated as the selected set of client nodes 108, in alternative embodiments the selected set of client nodes 108 may be comprised of different client nodes than those indicated in FIG. 1. In certain embodiments, there may be a plurality of different selected sets of client nodes 108.
The selected set of client nodes 108 may be selected via a computer program or may be designated by a user. The selected set of client nodes 108 may be selected according to a criterion such that objects are more likely to be shared among the selected set of client nodes 108, than between the selected set of client nodes 108 and other client nodes, such as client node 106 n. The selected set of client nodes 108 is a derived representation of the client nodes 106 a . . . 106 i and may be a namespace representing the unique objects, such as file names, stored by all members of the selected set of client nodes 108.
The coupling of the server 102 to the client nodes 106 a . . . 106 n may be direct or may be via any network 104 known in the art, such as a Storage Area Network (SAN), Local Area Network (LAN), Wide Area Network (WAN), the Internet, an Intranet, etc.
A storage management application 110, a database 112, and a storage 114 may be coupled to the server 102. While FIG. 1 shows the storage 114 to be inside the server, in certain embodiments, the storage 114 may be external to the server 102 and may comprise storage units that are coupled to the server via a network. For the purposes of the application, the server 102, also referred to as a server node 102, may be considered to encompass the storage 114 whether or not the storage 114 is internal or external to the server 102. In certain embodiments, the server 102 may be used for backup processing. In other embodiments, the server 102 may be used for archiving, migration or copying of data. The database 112 may be external or internal to the server 102. The database 112 may include one or more containers 116 or any other data structure that represents the one or more selected sets of client nodes 108. For example, there may be different containers of client nodes for different projects. The containers 116 may be implemented as a list or any other suitable data structure.
The storage 114 includes a data structure of named common objects 118 that represents common objects of the selected set of client nodes 108. In addition, the storage 114 also includes data structures representing specific objects of client nodes 120 a, 120 b, . . . 120 i corresponding to the client nodes 106 a, 106 b, . . . 106 i respectively.
The storage management application 110 may allow some or all objects from the selected set of client nodes 108 to be backed up to the storage 114 and stored as the common objects 118 of the selected set of client nodes 108. Similarly, the storage management application 110 may also allow certain objects from the client nodes 106 a . . . 106 i to be backed up to the storage 114 and stored as the specific objects of client nodes 120 a . . . 120 i. For example, the actual data of the objects from client node 106 b that are not in the common objects 118 of the selected set of client nodes 108 may be backed up in the storage 114 as the specific objects of client node 120 b. The common objects 118 may comprise objects that are shared among the selected set of client nodes 108. Various mechanisms may be used to determine whether an object that is to be backed up by a client node has already been stored as a common object. For example, in certain embodiments a checksum may be computed for an object to be backed up and compared with a checksum for the common object to determine whether an object to be backed up has possibly already been stored on the server 102. In addition to comparing the checksum, additional or alternative checks may be performed to determine whether an object to be backed up by a client has already been stored as a common object on the server 102.
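The checksum-based check described above may be sketched as follows. The use of SHA-256 and a dictionary keyed by digest are illustrative assumptions; the embodiments only state that some checksum may be computed and compared.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a content checksum for an object (SHA-256 assumed here)."""
    return hashlib.sha256(data).hexdigest()

class CommonObjectStore:
    """Illustrative store keyed by checksum, standing in for storage 114."""

    def __init__(self):
        self._objects = {}  # checksum -> object data

    def contains(self, data: bytes) -> bool:
        """Check whether an identical object has already been stored."""
        return fingerprint(data) in self._objects

    def store(self, data: bytes) -> str:
        """Store the object once; return its checksum."""
        fp = fingerprint(data)
        self._objects.setdefault(fp, data)
        return fp

store = CommonObjectStore()
store.store(b"operating system file contents")
print(store.contains(b"operating system file contents"))  # True
print(store.contains(b"project document v2"))             # False
```

In a real deployment, a checksum match would typically be followed by the additional checks mentioned above before the object is treated as a duplicate.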
The database 112 includes an inventory with entries corresponding to common objects 118 of selected sets of client nodes and the specific objects 120 a . . . 120 i corresponding to the selected set of client nodes 108. The storage management application 110 may query the database 112 for inventory information corresponding to one or more objects of any client node 106 a . . . 106 i.
Therefore, FIG. 1 illustrates certain embodiments in which the storage management application 110 allows the selected set of client nodes 108 to be represented by a container 116 in the server 102. In response to a request from a client node, such as client node 106 b that belongs to the set of client nodes 108, to back up an object to the storage 114, the storage management application 110 determines whether the object has already been stored in the common objects 118 of the selected set of client nodes 108. If the object has not already been stored in the common objects 118 of the selected set of client nodes 108, the client node sends the object to the storage 114. If the object has already been stored in the common objects 118 of the selected set of client nodes 108, then a backup is performed for the client node, such as client node 106 b, but the client node does not have to resend the previously stored object.
FIG. 2 illustrates a block diagram of common objects 118 of a selected set of client nodes implemented in the server 102, in accordance with certain embodiments.
The common objects of a selected set of client nodes may include a plurality of objects 200 a, 200 b, . . . , 200 p, wherein in certain embodiments the objects 200 a . . . 200 p are files. The objects 200 a . . . 200 p may be the union of all objects found in the selected set of client nodes 108.
Associated with the objects 200 a, 200 b, . . . 200 p, are indicators corresponding to reference counts 202 a, 202 b, . . . 202 p, insertion times 204 a, 204 b, . . . 204 p, and other metadata 206 a, 206 b, . . . 206 p. For example, reference count 202 a, insertion time 204 a, and other metadata 206 a are associated with object 200 a.
A reference count indicates the number of client nodes of the selected set of client nodes 108 that have the corresponding object in common. For example, if client node 106 a and client node 106 b both share the object 200 a stored in the common objects 118 of selected set of client nodes, then the reference count 202 a for the object 200 a may be two. The insertion time for an object is the time at which the object was inserted into the common objects 118 of the selected set of client nodes 108. For example, the insertion time 204 a for object 200 a may be “Mar. 11, 2005”.
The other metadata 206 a . . . 206 p may include other descriptive information of the objects. Such information may indicate the name and size of the file, etc.
Certain embodiments may allow pointers that reference the objects 200 a . . . 200 p stored in the common objects 118 of the selected set of client nodes 108. By using pointers, the actual data of the objects 200 a . . . 200 p needs to be stored only once in the common objects 118 of the selected set of client nodes 108.
The metadata that has been shown in association with the common objects 118 of the selected set of client nodes 108 may be stored in the database 112 along with the objects 200 a . . . 200 p or representations of the objects 200 a . . . 200 p. FIG. 2 provides a schematic illustration of the association of objects with corresponding metadata. The actual association of objects with corresponding metadata may be implemented differently by using other data structures.
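The per-object bookkeeping of FIG. 2 may be sketched as follows. The field names are illustrative assumptions; only the reference count, insertion time, and other metadata (such as name and size) are taken from the description above.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CommonObject:
    """Illustrative metadata kept with each common object 200 a . . . 200 p."""
    name: str
    size: int
    reference_count: int = 1                         # client nodes sharing the object
    insertion_time: float = field(default_factory=time.time)

obj = CommonObject(name="report.doc", size=4096)
obj.reference_count += 1  # a second client node backs up the same object
print(obj.reference_count)  # 2
```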
FIG. 3 illustrates block diagrams of exemplary common objects corresponding to exemplary client nodes implemented in the computing environment 100, in accordance with certain embodiments.
In FIG. 3, an exemplary client node X 300 includes object A 300 a, object B 300 b and object C 300 c, an exemplary client node Y 302 includes object A 302 a, object B 302 b, and object D 302 c, and an exemplary client node Z 304 includes object A 304 a, object E 304 b, and object F 304 c.
Based on client node X 300, client node Y 302, and client node Z 304 that form a selected set of client nodes, the exemplary common objects of the selected set of client nodes 306 may be generated by the storage management application 110.
The exemplary common objects of the selected set of client nodes 306 include objects A, B, C, D, E, F 306 a, 306 b, 306 c, 306 d, 306 e, 306 f and corresponding metadata 308 a, 308 b, 308 c, 308 d, 308 e, 308 f. Therefore, the exemplary common objects of the selected set of client nodes 306 may potentially be the union of all objects present in client node X 300, client node Y 302, and client node Z 304.
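The union of objects shown in FIG. 3 may be sketched with Python sets; the single-letter object names follow the exemplary nodes above.

```python
# Objects held by the exemplary client nodes X, Y, and Z of FIG. 3.
node_x = {"A", "B", "C"}
node_y = {"A", "B", "D"}
node_z = {"A", "E", "F"}

# The common objects of the selected set are the union of all objects.
common_objects = node_x | node_y | node_z
print(sorted(common_objects))  # ['A', 'B', 'C', 'D', 'E', 'F']
```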
FIG. 4 illustrates an exemplary database 400 for maintaining inventory entries for objects, in accordance with certain embodiments. The exemplary database 400 may correspond to the database 112 implemented in the computing environment 100 of FIG. 1.
The exemplary database 400 may include inventory entries 402 a . . . 402 i for objects of the client nodes 106 a . . . 106 i. In FIG. 4, data structures of the inventory entries 402 a for objects of the client node 106 a are shown for illustration. In addition the exemplary database 400 may include data structures for common object inventory entries 404, where the common object inventory entries 404 are for the common objects 118 of the selected set of client nodes 108. The common object inventory entries 404 may include object inventory entries 406 a . . . 406 q corresponding to the common objects 118 of the selected set of client nodes 108, where the object inventory entries may include corresponding metadata, such as metadata 408 a . . . 408 q.
The inventory entries 402 a for client node 106 a may include a plurality of object inventory entries 410 a . . . 410 m corresponding to objects of the client node 106 a. An exemplary object inventory entry 410 a may include metadata 412 for the corresponding object, where the metadata 412 may include an active/inactive indicator that indicates whether the object is active or inactive for the client node 106 a. The exemplary object inventory entry 410 a may also include a time of object backup that indicates when the object was backed up for the client node 106 a, where the time of object backup may also be referred to as the insertion time 204 a . . . 204 p. The time of object backup for a specific node may be different than that for other client nodes and may be different than the time at which the object was stored as a common object. Additionally, the exemplary object inventory entry 410 a may also include an object management policy that indicates a policy by which the object is managed for the client node 106 a, even though a different policy might be used for the same object on other client nodes. The exemplary object inventory entry 410 a may also include security attributes for the object of the client node 106 a, where the security attributes could be different for the same object on different client nodes. Certain other metadata discussed earlier, such as reference counts 202 a . . . 202 p and insertion times 204 a . . . 204 p, may also be included.
In addition to the metadata 412, the object inventory entry 410 a may also include a reference 414 to a corresponding object inventory entry in the common object inventory entries 404, if the object was backed up in the common objects 118 of the selected set of client nodes 108. The exemplary database 400 reduces redundancy across nodes by storing references 414 to corresponding common object inventory entries 404, while at the same time allowing different attributes for the same object on different nodes by storing the attributes in the metadata 412.
The data structures shown in the exemplary database 400 are shown for illustration. In alternative embodiments, the exemplary database 400 may be structured differently. The metadata 412, 408 a . . . 408 q, i.e., the metadata for both specific objects and common objects, may also include other information for each object, such as the name and size of the object in addition to the time the object was stored. In addition, in certain embodiments a table may be used to keep track of the actual client nodes that reference a common object.
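The inventory layout of FIG. 4 may be sketched as follows: a node-specific entry keeps per-node attributes (active indicator, backup time, management policy) and, when the object data lives in the common pool, a reference to the common entry. All field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommonEntry:
    """Illustrative common object inventory entry (404)."""
    object_id: str
    reference_count: int

@dataclass
class NodeEntry:
    """Illustrative node-specific object inventory entry (410 a)."""
    object_name: str
    active: bool
    backup_time: str
    policy: str
    common_ref: Optional[CommonEntry] = None  # None if node-specific only

shared = CommonEntry(object_id="obj-1", reference_count=2)
fred = NodeEntry("report.doc", True, "2005-03-11", "30-day-retention", shared)
joe = NodeEntry("report.doc", True, "2005-03-12", "90-day-retention", shared)

# Different per-node attributes, but one shared reference to the data:
print(fred.common_ref is joe.common_ref)  # True
```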
FIG. 5 illustrates a flowchart that shows the processing of backup, deletion, and information requests, in accordance with certain embodiments. The operations described in the flowchart for FIG. 5 may be implemented in the server 102 of the computing environment 100.
Control starts at block 500, where one or more selected sets of client nodes 108 are designated to the one or more containers 116 that represent the selected sets of client nodes in the server 102. In certain embodiments, the definition of the selected set of client nodes 108 could be performed by a command interface to the server 102. In other embodiments, the selected set of client nodes 108 represented by the one or more containers 116 may be dynamically created using business policies or other attributes defined for an enterprise. For example, the server 102 may derive the one or more containers 116 for machines of the same type, such as those having the same architecture and operating system, and for those end user client owners who belong in the same department or those end user client owners that are assigned to the same project. The server 102 may allow characteristics specific to the environment of the users to be considered while defining the client nodes to be designated to the one or more containers 116.
After the execution of block 500 control may proceed to block 502, where the server 102 receives a request from a client node, wherein the request is related to an object, such as a file. Block 502 need not be executed immediately after the execution of block 500. The lapse of time between the execution of block 500 and block 502 is indicated by a dashed line in FIG. 5.
The storage management application 110 on the server 102 may determine (at block 504) that the request is a backup request and then process (at block 506) the backup request. The storage management application 110 on the server 102 may also determine (at block 508) that the request is a deletion request for an object and process (at block 510) the deletion request. Furthermore, the storage management application 110 may determine (at block 512) that the request is for seeking information on objects stored in the server 102, and process (at block 514) the request for information on objects. Further details of blocks 506 and 510 are described in FIGS. 6 and 7 respectively.
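The dispatch of FIG. 5 may be sketched as a table of handlers; the request shape and handler strings here are illustrative assumptions.

```python
# Sketch of the dispatch in FIG. 5: the server inspects the request type
# and routes it to the matching handler (blocks 506, 510, 514).

def handle_request(request: dict) -> str:
    handlers = {
        "backup": lambda r: f"backing up {r['object']}",
        "delete": lambda r: f"deleting {r['object']}",
        "info": lambda r: f"querying {r['object']}",
    }
    handler = handlers.get(request["type"])
    if handler is None:
        raise ValueError(f"unknown request type: {request['type']}")
    return handler(request)

print(handle_request({"type": "backup", "object": "report.doc"}))
```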
FIG. 6 illustrates a flowchart that shows the processing of backup requests, in accordance with certain embodiments. The operations described in the flowchart for FIG. 6 may be implemented in the server 102 of the computing environment 100. The operations shown in FIG. 6 may in certain embodiments implement the operations described in block 506 of FIG. 5.
Control starts at block 600, where the storage management application 110 initiates the processing of the backup request 504 from a client node of the selected set of client nodes 108. At the time block 600 is performed a decision has already been made in the system as to which objects need to be backed up. This may be performed by comparing objects on the client node with metadata on the server to determine which objects needed to be backed up or via some other mechanism.
The storage management application 110 determines (at block 604) from the database 112 whether the object has been backed up in the common objects 118 of the selected set of client nodes 108. If not, then the object is a new object to be backed up and the storage management application 110 backs up (at block 606) the object to add to the common objects 118 of the selected set of client nodes 108.
The storage management application 110 initializes (at block 608) the reference count, such as one of the reference counts 202 a . . . 202 p, of the object in the common objects 118 of the selected set of client nodes 108 as one, and stores the time the object was backed up in one of the entries labeled insertion time 204 a . . . 204 p. The metadata in the common object inventory entries of the database 112 are created (at block 610). The storage management application 110 creates (at block 612) an inventory entry for the object in the specific inventory entries 402 a . . . 402 i for the object and creates a reference to the inventory entry of the object as backed up in the common objects 118 of the selected set of client nodes 108. The inventory entries may be maintained in the database 112. Control proceeds to block 614 for processing the next object for backup and control returns to block 604. If there is no next object for backup in block 614, then the process exits.
If at block 604 the storage management application 110 determines from the database 112 that the object has been backed up in the common objects 118 of the selected set of client nodes 108, then the storage management application 110 increments (at block 616) the reference count, such as one of the reference counts 202 a . . . 202 p, of the object by one, where the object is in the common objects 118 of the selected set of client nodes 108. The storage management application 110 creates (at block 618) an inventory entry for the object in the specific inventory entries 402 a . . . 402 i of the object and creates a reference 414 to the inventory entry of the object 406 a . . . 406 q as backed up in the common objects 118 of the selected set of client nodes 108. Control proceeds (at block 614) to process the next object, if any such object has to be backed up.
Therefore, FIG. 6 illustrates certain embodiments to backup objects in the server 102. If the object is a new object then the object may be backed up in the common objects 118 of the selected set of client nodes 108. If the object has already been stored in the common objects 118 of the selected set of client nodes 108, the object is not sent to the server 102 once again but inventory entries, reference counts, and other metadata are updated to reference the already stored object on the server. A reference count 202 a . . . 202 p that indicates the number of client nodes using a backed up common object is maintained.
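The backup flow of FIG. 6 may be condensed into the following sketch. New objects enter the common pool with a reference count of one; objects already in the pool only have their reference count incremented and a per-node inventory reference recorded. The structure and names are illustrative assumptions.

```python
import time

common_pool = {}  # object name -> {"refcount": int, "inserted": float}
inventory = {}    # (node, object name) -> reference into the common pool

def backup(node: str, obj: str) -> bool:
    """Back up obj for node; return True if the data had to be sent."""
    sent = obj not in common_pool
    if sent:
        # New common object: store once, reference count starts at one.
        common_pool[obj] = {"refcount": 1, "inserted": time.time()}
    else:
        # Already stored: only the reference count is incremented.
        common_pool[obj]["refcount"] += 1
    inventory[(node, obj)] = obj  # per-node entry references the common object
    return sent

print(backup("FRED", "report.doc"))  # True  -> data sent to the server
print(backup("JOE", "report.doc"))   # False -> only metadata updated
print(common_pool["report.doc"]["refcount"])  # 2
```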
FIG. 7 illustrates a flowchart that shows the processing of deletion requests, in accordance with certain embodiments. The operations described in the flowchart for FIG. 7 may be implemented in the server 102 of the computing environment 100. The operations shown in FIG. 7 may in certain embodiments implement the operations described in block 510 of FIG. 5.
Control starts at block 700, where the storage management application 110 initiates the processing of the deletion request 508 from a client node. The storage management application 110 determines (at block 702) whether the object to be deleted has a reference to the common object inventory entries 404. If not, then the object is not stored as a common object, and the storage management application 110 deletes (at block 704) the object from the specific objects of the client node 120 a . . . 120 i, and the next request is processed (at block 706).
If the storage management application 110 determines (at block 702) that the object to be deleted has a reference to the common object inventory entries 404, then the storage management application 110 dereferences (at block 708) the reference to the common object inventory entries 404, e.g., by updating the exemplary reference to corresponding common object inventory entry 414.
The storage management application 110 decrements (at block 710) the reference count 202 a . . . 202 p for the object in the common objects by one. The storage management application 110 determines (at block 712) whether the reference count for the object is zero. If so, then the object is no longer being used by any of the selected set of client nodes 108 and the storage management application 110 deletes (at block 714) the object from the common set of objects. In alternative embodiments, other policies may be used for deleting common objects. Control proceeds to block 706, where the storage management application 110 processes the next request.
If the storage management application 110 determines (at block 712) that the reference count for the object is not zero, then the object is being used by at least one of the selected set of client nodes 108 and the object is not deleted. Control proceeds to block 706, where the storage management application 110 processes the next request.
Therefore, FIG. 7 illustrates certain embodiments in which a common object stored in the common objects 118 of selected set of client nodes is deleted when the common object is no longer being used by any of the client nodes. In certain embodiments, the non-referenced file may be kept in the common objects 118 of the selected set of client nodes 108 for a period of time after the reference count goes to zero.
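The deletion flow of FIG. 7 may be sketched as follows: the node's pointer is dereferenced, the reference count is decremented, and the common object is deleted only when no client node references it any longer. The structure names are illustrative assumptions.

```python
common_pool = {"report.doc": {"refcount": 2}}
inventory = {("FRED", "report.doc"): "report.doc",
             ("JOE", "report.doc"): "report.doc"}

def delete(node: str, obj: str) -> bool:
    """Delete obj for node; return True if the common copy was removed."""
    inventory.pop((node, obj))        # dereference the node's pointer
    common_pool[obj]["refcount"] -= 1
    if common_pool[obj]["refcount"] == 0:
        del common_pool[obj]          # no client node references remain
        return True
    return False

print(delete("FRED", "report.doc"))  # False -> JOE still references it
print(delete("JOE", "report.doc"))   # True  -> common copy removed
```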
FIG. 8 illustrates a flowchart that shows how to maintain limits on the number of common objects 118, in accordance with certain embodiments. The operations described in the flowchart for FIG. 8 may be implemented in the server 102 of the computing environment 100.
Control starts at block 800, where the storage management application 110 initiates the process for maintaining limits on the number of common objects 118 by processing the common objects 118 of the selected set of nodes one by one. If the number of common objects is not limited then the space requirements for storing the common objects 118 of the selected set of nodes may keep on increasing as new objects are added.
The storage management application 110 determines (at block 802) whether an object being processed has a reference count 202 a . . . 202 p of one. If so, then only one client node is using the common object and the storage management application 110 determines (at block 804) whether the object was inserted before some specified time period. If so, the storage management application 110 eliminates (at block 806) the object from the common objects 118 of the selected set of client nodes 108. However, the object is not removed from storage because there is still one specific object that references the eliminated common object. The database entries that show the object as a common object are removed, and the specific object inventory for the object will now reference the object directly rather than showing the object as a common object. The storage management application 110 determines (at block 808) whether all objects in the common objects 118 of the selected set of client nodes 108 have been processed. If so, the process stops (block 810), and if not, the storage management application 110 processes (at block 812) the next object by returning control to block 802.
If at block 802 the storage management application 110 determines that the object being processed does not have a reference count of one, then control proceeds to block 808. Also, at block 804, if the storage management application 110 determines that the object was not inserted before some specified period, control proceeds to block 808.
Therefore, FIG. 8 illustrates certain embodiments in which a common object is deleted when the common object is being used by a single client node and the common object was inserted before some specified period. In alternative embodiments, other criteria may be used to limit the size of the common objects 118 of the selected set of client nodes 108.
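The limit-maintenance pass of FIG. 8 may be sketched as follows: an object whose reference count is one and whose insertion time is older than some cutoff is demoted from the common pool, after which the single remaining owner would reference the object directly. The cutoff and field names are illustrative assumptions.

```python
import time

def prune(common_pool: dict, max_age_seconds: float) -> list:
    """Return the names of objects demoted from the common pool."""
    now = time.time()
    demoted = [name for name, meta in common_pool.items()
               if meta["refcount"] == 1
               and now - meta["inserted"] > max_age_seconds]
    for name in demoted:
        # Demote: remove the common-object entry; the object data itself
        # survives as a specific object of its one remaining owner.
        del common_pool[name]
    return demoted

pool = {
    "old_single.doc": {"refcount": 1, "inserted": time.time() - 90 * 86400},
    "shared.doc":     {"refcount": 3, "inserted": time.time() - 90 * 86400},
    "new_single.doc": {"refcount": 1, "inserted": time.time()},
}
print(prune(pool, max_age_seconds=30 * 86400))  # ['old_single.doc']
print(sorted(pool))  # ['new_single.doc', 'shared.doc']
```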
FIG. 9 illustrates a flowchart that shows the elimination of redundant objects for a selected set of client nodes, in accordance with certain embodiments. The operations described in the flowchart for FIG. 9 may be implemented in the server 102 of the computing environment 100.
Control starts at block 900, wherein in certain embodiments a server node 102 generates a data structure corresponding to a set of client nodes 106 a . . . 106 i selected from a plurality of client nodes 106 a . . . 106 n. The data structure may include files, lists, arrays, records, trees, tables, or any other suitable data structure known in the art. The data structure is capable of storing data and may allow operations to be performed on the data. For example, in certain exemplary embodiments the generated data structure may include common objects 118 of the selected set of client nodes 108, a container 116, a database 112, etc.
The server node 102 stores (at block 902), in the data structure, objects from the selected set of client nodes 108. The server node 102 determines (at block 904) that an object corresponding to a client node of the selected set of client nodes 108 has to be stored. In alternative embodiments, the operations of block 904 may be performed by a client node.
The server node 102 determines (at block 906) whether the object has already been stored in the data structure by any client node of the selected set of client nodes 108. The server node 102 stores (at block 908) the object in the data structure, in response to determining that the object has not already been stored in the data structure by any client node of the selected set of client nodes 108. In certain embodiments, operations shown as being performed by the server node 102 may be performed by a client node.
In certain embodiments, the objects of a selected set of client nodes 108 may be stored in the server 102 as common objects 118. The selected set of client nodes 108 may send objects to the server node 102 if the objects have not already been stored as common objects 118. If the object has already been stored as common objects 118, a backup operation performed by a client node may reference the already stored common objects 118.
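The check-before-store protocol of blocks 904-908 amounts to content-addressed deduplication across the selected set of client nodes. Below is a minimal sketch in which a content digest stands in for the server's "has this already been stored?" check; in the described embodiments that bookkeeping lives in the database 112, and the class and method names here are our own.

```python
import hashlib

class CommonObjectStore:
    """Sketch of blocks 900-908 of FIG. 9: one shared store (the data
    structure of block 900) per selected set of client nodes."""

    def __init__(self):
        self._objects = {}  # digest -> object data (the common objects)
        self._refs = {}     # digest -> set of client node ids referencing it

    def backup(self, client_id, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._objects:   # block 906: not stored by any node yet
            self._objects[digest] = data  # block 908: store exactly once
        self._refs.setdefault(digest, set()).add(client_id)
        return digest                     # the client's reference to the copy
```

A backup by a second node holding the same object then stores nothing new and merely records another reference, which is the behavior the paragraph above describes.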
Certain embodiments may use the container 116 as an adaptive model node. The adaptive model node does not represent a single specific client node or the underlying client computer. The adaptive model node may be a namespace representing all the unique objects stored by all members of the selected set of client nodes 108, where the selected set of client nodes may be a model node group. The database 112 maintains information to track which objects are either referenced or owned by a given member of the model node group.
In certain alternative embodiments, the client nodes may be drives, mount points or directory levels and need not be computational platforms. Additionally, certain embodiments may implement a hierarchy of selected sets of client nodes, in which case the namespace considered for the elimination of redundant files becomes the union of the namespaces represented by each of the selected sets of client nodes in the hierarchy. In certain embodiments, the selected set of client nodes is a virtual client node, and may be used to dynamically manage and update objects.
Additional Embodiment Details
The described techniques may be implemented as a method, apparatus or article of manufacture involving software, firmware, micro-code, hardware and/or any combination thereof. The term “article of manufacture” as used herein refers to code or logic implemented in a medium, where such medium may comprise hardware logic [e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.] or a computer readable medium, such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), volatile and non-volatile memory devices [e.g., Electrically Erasable Programmable Read Only Memory (EEPROM), Read Only Memory (ROM), Programmable Read Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, firmware, programmable logic, etc.]. Code in the computer readable medium is accessed and executed by a processor. The medium in which the code or logic is encoded may also comprise transmission signals propagating through space or a transmission media, such as an optical fiber, copper wire, etc. The transmission signal in which the code or logic is encoded may further comprise a wireless signal, satellite transmission, radio waves, infrared signals, Bluetooth, etc. The transmission signal in which the code or logic is encoded is capable of being transmitted by a transmitting station and received by a receiving station, where the code or logic encoded in the transmission signal may be decoded and stored in hardware or a computer readable medium at the receiving and transmitting stations or devices. Additionally, the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. 
Of course, those skilled in the art will recognize that many modifications may be made without departing from the scope of embodiments, and that the article of manufacture may comprise any information bearing medium. For example, the article of manufacture comprises a storage medium having stored therein instructions that when executed by a machine results in operations being performed.
Certain embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Furthermore, certain embodiments can take the form of a computer program product accessible from a computer usable or computer readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W) and DVD.
The terms “certain embodiments”, “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean one or more (but not all) embodiments unless expressly specified otherwise. The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries. Additionally, a description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary a variety of optional components are described to illustrate the wide variety of possible embodiments.
Further, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously, in parallel, or concurrently.
When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments need not include the device itself.
FIG. 10 illustrates the architecture of a computer system 1000, wherein in certain embodiments the server 102 and the clients 106 a . . . 106 n of the computing environment of FIG. 1 may be implemented in accordance with the architecture of the computer system. In certain embodiments, the computational platform shown in FIG. 1 such as the server 102 and the client nodes 106 a . . . 106 n may be implemented in accordance with the system 1000. The system 1000 may include a circuitry 1002 that may in certain embodiments include a processor 1004. The system 1000 may also include a memory 1006 (e.g., a volatile memory device), and storage 1008. Certain elements of the system 1000 may or may not be found in the server 102 and the client nodes 106 a . . . 106 n. The storage 1008 may include a non-volatile memory device (e.g., EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, firmware, programmable logic, etc.), magnetic disk drive, optical disk drive, tape drive, etc. The storage 1008 may comprise an internal storage device, an attached storage device and/or a network accessible storage device. The system 1000 may include a program logic 1010 including code 1012 that may be loaded into the memory 1006 and executed by the processor 1004 or circuitry 1002. In certain embodiments, the program logic 1010 including code 1012 may be stored in the storage 1008. In certain other embodiments, the program logic 1010 may be implemented in the circuitry 1002. Therefore, while FIG. 10 shows the program logic 1010 separately from the other elements, the program logic 1010 may be implemented in the memory 1006 and/or the circuitry 1002.
Certain embodiments may be directed to a method for deploying computing instruction by a person or automated processing integrating computer-readable code into a computing system, wherein the code in combination with the computing system is enabled to perform the operations of the described embodiments.
At least certain of the operations illustrated in FIGS. 5-9 may be performed in parallel as well as sequentially. In alternative embodiments, certain of the operations may be performed in a different order, modified or removed.
Furthermore, many of the software and hardware components have been described in separate modules for purposes of illustration. Such components may be integrated into a fewer number of components or divided into a larger number of components. Additionally, certain operations described as performed by a specific component may be performed by other components.
The data structures and components shown or referred to in FIGS. 1-10 are described as having specific types of information. In alternative embodiments, the data structures and components may be structured differently and have fewer, more or different fields or different functions than those shown or referred to in the figures. Therefore, the foregoing description of the embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Many modifications and variations are possible in light of the above teaching.
DEV: Move out GithubPRPoster to its own file (#22)
diff --git a/lib/discourse_code_review/github_pr_poster.rb b/lib/discourse_code_review/github_pr_poster.rb
new file mode 100644
index 0000000..26cfdbf
--- /dev/null
+++ b/lib/discourse_code_review/github_pr_poster.rb
@@ -0,0 +1,181 @@
+# frozen_string_literal: true
+
+module DiscourseCodeReview
+ class GithubPRPoster
+ def initialize(topic:, author:, github_id:, created_at:)
+ @topic = topic
+ @author = author
+ @github_id = github_id
+ @created_at = created_at
+ end
+
+ def post_event(event)
+ case event.class.tag
+ when :closed
+ update_closed(true)
+ when :commit_thread_started
+ commit_sha = event.commit_sha[0...8]
+
+ repo_name = GithubCategorySyncer.get_repo_name_from_topic(topic)
+
+ discussion_topic =
+ Topic.find(
+ Importer.sync_commit_from_repo(repo_name, commit_sha)
+ )
+
+ body =
+ "A commit that appears in this pull request is being discussed [here](#{discussion_topic.url})."
+
+ ensure_pr_post(
+ author: Discourse.system_user,
+ body: body,
+ post_type: :regular
+ )
+ when :issue_comment
+ ensure_pr_post(
+ body: event.body,
+ post_type: :regular
+ )
+ when :merged
+ ensure_pr_post(
+ post_type: :small_action,
+ action_code: 'merged'
+ )
+ when :review_thread_started
+ body = []
+
+ if event.context.present?
+ body << <<~MD
+ [quote]
+ #{event.context.path}
+
+ ```diff
+ #{event.context.diff_hunk}
+ ```
+
+ [/quote]
+
+ MD
+ end
+
+ body << event.body
+
+ ensure_pr_post(
+ body: body.join,
+ post_type: :regular,
+ thread_id: event.thread.github_id
+ )
+ when :review_comment
+ ensure_pr_post(
+ body: event.body,
+ reply_to_github_id: event.reply_to_github_id,
+ post_type: :regular,
+ thread_id: event.thread.github_id
+ )
+ when :renamed_title
+ body =
+ "The title of this pull request changed from \"#{event.previous_title}\" to \"#{event.new_title}"
+
+ ensure_pr_post(body: body, post_type: :small_action, action_code: 'renamed') do |post|
+ topic = post.topic
+
+ issue_number = topic.custom_fields[DiscourseCodeReview::GithubPRSyncer::GITHUB_ISSUE_NUMBER]
+
+ topic.title = "#{event.new_title} (PR ##{issue_number})"
+ topic.save!(validate: false)
+ end
+ when :reopened
+ update_closed(false)
+ end
+ end
+
+ private
+
+ attr_reader :topic
+ attr_reader :author
+ attr_reader :github_id
+ attr_reader :created_at
+
+ def get_last_post
+ Post
+ .where(topic_id: topic.id)
+ .order('post_number DESC')
+ .limit(1)
+ .first
+ end
+
+ def update_closed(closed)
+ unless_pr_post do
+ topic.update_status('closed', closed, author)
+
+ last_post = get_last_post
+
+ last_post.created_at = created_at
+ last_post.skip_validation = true
+ last_post.save!
+
+ last_post.custom_fields[DiscourseCodeReview::GithubPRSyncer::GITHUB_NODE_ID] = github_id
+ last_post.save_custom_fields
+ end
+ end
+
+ def find_pr_post(github_id)
+ Post.where(
+ id:
+ PostCustomField
+ .select(:post_id)
+ .where(name: DiscourseCodeReview::GithubPRSyncer::GITHUB_NODE_ID, value: github_id)
+ .limit(1)
+ ).first
+ end
+
+ def unless_pr_post
+ # Without this mutex, concurrent transactions can create duplicate
+ # posts
+ DistributedMutex.synchronize('code-review:sync-pull-request-post') do
+ ActiveRecord::Base.transaction(requires_new: true) do
+ post = find_pr_post(github_id)
+
+ if post.nil?
+ yield
+ end
+ end
+ end
+ end
+
+ def ensure_pr_post(post_type:, body: nil, action_code: nil, reply_to_github_id: nil, author: @author, thread_id: nil)
+ unless_pr_post do
+ reply_to_post_number =
+ if reply_to_github_id.present?
+ Post.where(
+ id:
+ PostCustomField
+ .select(:post_id)
+ .where(name: DiscourseCodeReview::GithubPRSyncer::GITHUB_NODE_ID, value: reply_to_github_id)
+ .limit(1)
+ ).pluck(:post_number).first
+ end
+
+ post =
+ DiscourseCodeReview.without_rate_limiting do
+ PostCreator.create!(
+ author,
+ topic_id: topic.id,
+ created_at: created_at,
+ raw: body,
+ reply_to_post_number: reply_to_post_number,
+ post_type: Post.types[post_type],
+ action_code: action_code,
+ skip_validations: true
+ )
+ end
+
+ post.custom_fields[DiscourseCodeReview::GithubPRSyncer::GITHUB_NODE_ID] = github_id
+ post.custom_fields[DiscourseCodeReview::GithubPRSyncer::GITHUB_THREAD_ID] = thread_id if thread_id.present?
+ post.save_custom_fields
+
+ yield post if block_given?
+ end
+ end
+ end
+end
diff --git a/lib/discourse_code_review/github_pr_syncer.rb b/lib/discourse_code_review/github_pr_syncer.rb
index d6016e4..c54b716 100644
--- a/lib/discourse_code_review/github_pr_syncer.rb
+++ b/lib/discourse_code_review/github_pr_syncer.rb
@@ -6,184 +6,6 @@ module DiscourseCodeReview
GITHUB_ISSUE_NUMBER = "github issue number"
GITHUB_THREAD_ID = "github thread id"
- class GithubPRPoster
- def initialize(topic:, author:, github_id:, created_at:)
- @topic = topic
- @author = author
- @github_id = github_id
- @created_at = created_at
- end
-
- def post_event(event)
- case event.class.tag
- when :closed
- update_closed(true)
- when :commit_thread_started
- commit_sha = event.commit_sha[0...8]
-
- repo_name = GithubCategorySyncer.get_repo_name_from_topic(topic)
-
- discussion_topic =
- Topic.find(
- Importer.sync_commit_from_repo(repo_name, commit_sha)
- )
-
- body =
- "A commit that appears in this pull request is being discussed [here](#{discussion_topic.url})."
-
- ensure_pr_post(
- author: Discourse.system_user,
- body: body,
- post_type: :regular
- )
- when :issue_comment
- ensure_pr_post(
- body: event.body,
- post_type: :regular
- )
- when :merged
- ensure_pr_post(
- post_type: :small_action,
- action_code: 'merged'
- )
- when :review_thread_started
- body = []
-
- if event.context.present?
- body << <<~MD
- [quote]
- #{event.context.path}
-
- ```diff
- #{event.context.diff_hunk}
- ```
-
- [/quote]
-
- MD
- end
-
- body << event.body
-
- ensure_pr_post(
- body: body.join,
- post_type: :regular,
- thread_id: event.thread.github_id
- )
- when :review_comment
- ensure_pr_post(
- body: event.body,
- reply_to_github_id: event.reply_to_github_id,
- post_type: :regular,
- thread_id: event.thread.github_id
- )
- when :renamed_title
- body =
- "The title of this pull request changed from \"#{event.previous_title}\" to \"#{event.new_title}"
-
- ensure_pr_post(body: body, post_type: :small_action, action_code: 'renamed') do |post|
- topic = post.topic
-
- issue_number = topic.custom_fields[GITHUB_ISSUE_NUMBER]
-
[... diff too long, it was truncated ...]
GitHub sha: 5ada53c7
This commit appears in #22 which was approved by @danielwaterworth. It was merged by @danielwaterworth.
End of chapter activity
Exercise 1: End of chapter activity
The Grade 10 class at Masibambane High School are organising a tea party for the residents of a nearby old age home.
1. The class decide they also want to buy bags of sweets for their tea party. If one 250 g bag of jelly beans costs R 5,49, calculate how much 3 kg of jelly beans will cost.
1. There are four 250 g bags in 1 kg, so 3 kg = 3 \(\times\) 4 = 12 bags. 12 \(\times\) R 5,49 = R 65,88
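The unit-conversion reasoning in the answer can be verified in a couple of lines; this is only a quick sanity check, not part of the exercise:

```python
bag_grams = 250
total_grams = 3 * 1000           # 3 kg expressed in grams
bags = total_grams // bag_grams  # four 250 g bags per kg, so 12 bags in 3 kg
cost = bags * 5.49               # price per bag in rand (R 5,49)
print(bags, round(cost, 2))      # 12 bags at R 5,49 each is R 65,88
```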
Multiples and factors. Prime numbers
First and Second Year of Secondary Education
1. Prime factor decomposition of a number.
Prime factor decomposition of a number means writing it as a product of prime factors.
- To factorise a number, divide it by the first possible prime number.
- Take the resulting quotient below the number.
- If it is possible, continue dividing this quotient successively by the same prime number.
- When you cannot do the division by this prime number, divide it by the next possible prime number.
- And so forth until the final quotient is 1.
- Finally write this number as a product of powers of prime factors.
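The steps above translate directly into code. Here is a short sketch (the function name is ours); note that trying every candidate divisor in order is safe, because by the time a composite candidate is reached its prime factors have already been divided out:

```python
def prime_factors(n: int) -> dict:
    """Return {prime: exponent}, following the steps above: divide by the
    smallest possible prime repeatedly, move on to the next candidate when
    the division no longer works, and stop when the quotient is 1."""
    factors = {}
    p = 2
    while n > 1:
        while n % p == 0:  # keep dividing by the same prime while possible
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1             # the division failed, so try the next divisor
    return factors

print(prime_factors(360))  # → {2: 3, 3: 2, 5: 1}, i.e. 360 = 2^3 · 3^2 · 5
```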
This interactive unit requires the Java J2RE virtual machine.
This window works out the prime factor decomposition of a number. In order to do this, type any number in the control box below and hit Enter.
To see how other numbers are factorised, you can change the value of the number by using the up and down triangles.
The largest number you can type is a five-digit number.
Investigate:
- Do you think that the size of a number is related to the quantity of prime factors?
- If you work out the prime factor decomposition of a large number, do you expect it to be formed by many factors?
- How many factors compose a prime number?
2. Find the number from which this prime factor decomposition comes.
This interactive unit requires the Java J2RE virtual machine.
Work out the product of these powers, type the number and hit Enter.
When you click on Init, a random factor decomposition of another number will appear.
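Going the other way, recovering the number from its prime factor decomposition, is a single product of powers. A tiny sketch (the function name is ours):

```python
def number_from_factors(factors: dict) -> int:
    """Multiply out the powers of the prime factors,
    e.g. {2: 3, 3: 2, 5: 1} gives 2^3 * 3^2 * 5 = 360."""
    result = 1
    for prime, exponent in factors.items():
        result *= prime ** exponent
    return result
```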
Eduardo Barbero Corral
Spanish Ministry of Education. Year 2007
Except where otherwise noted, this work is licensed under a Creative Commons License
random memory picture
Random access memory simply refers to memory where any piece of data can be accessed almost instantly. An area of memory called the refresh display file stores the picture definition as a set of line-drawing commands; the system returns to the first command in the list after all the drawing commands have been processed. There are several different types of random access memory chips, which come several to a "stick." To create Memories, Photos uses machine learning to analyze your library on your device. Why play this memory game? It's a great game to train and boost your memory in a playful way. Add details as they come up.
You use the random number to select the picture for the picture box Image. The picture number is generated by adding 1 to the current number saved in the flash memory. Stock Images, Random Access Memory for servers. How to Make a Memory Quilt. So next time you have a mind-pop, remember that, however weird, it has probably been triggered by something you’ve seen, heard or thought about recently, even if you can’t … These games supposedly test your “photographic memory”. For this picture, I would do it like this: As quickly as possible, set these segments along a very familiar memory palace. How to memorize numbers (an intro) Want to improve your ability to memorize numbers? Next article: Learn how to memorize anything. And just like that — poof — the iPhone’s memory, or RAM, is refreshed. These games are … It’s not terribly hard. It works very well, and it’s about the closest thing to “photographic memory” that you can actually use to remember visual images. It will be easy to remember smaller details; for example: Try this with any photographs or image you come across. (As if anyone would WANT to remember something as absurd as this picture.). Background mountains: My bathroom 3. A ROM chip is a non-volatile storage medium, which means it does not require a constant source of power to retain the information stored on it. Look in craft shops for templates as these are very useful for these quilts made out of scraps of cotton … In these memory games, pictures are displayed on a grid for a limited time during which the player try to memorize the locations of the pictures.When the chronometer stops, the pictures automatically disappear and the player have to find the right locations of the pictures on the grid by drag and drop.. This is in contrast to storage media such as cassette tapes where the data can only be accessed in a fixed sequence. 
[CODE] [C] 4x4 KEYPAD
17 posts / 0 new
#1
Below is the code for a 4x4 matrix keypad connected to PORTC of an XMEGA A3BU. I want to know if there are any logical errors in the code, and also whether the data will be transferred from PORTC.IN to a variable like in the code below.
The problem is the LED is not blinking.
PC0-PC3 are inputs (rows)
PC4-PC7 are outputs (columns)
#define F_CPU 1600000UL
#include <avr/io.h>
#include <util/delay.h>

void keypadscan(void);
void blink(void);

uint8_t blinkduration = 0;

int main()
{
    PORTR_DIRSET = 0x01;    // LED
    PORTC.DIR = 0b00001111;
    PORTC.OUT = 0b11110000; // Rows are low, columns are high

    while(1)
    {
        keypadscan();
    }
}

void keypadscan()
{
    if(PORTC.IN == 0b11110000) return
    _delay_ms(50);
    uint8_t keypresscode = (PORTC.IN); // copying contents of PORTC to variable
    PORTC.DIR = 0b11110000;
    PORTC.OUT = 0b00001111;            // Rows are high, columns are low
    asm volatile("nop");
    asm volatile("nop");
    keypresscode |= (PORTC.IN);
    if(keypresscode == 0b11101110) blinkduration = 1;blink(); // binary code for keypresses
    if(keypresscode == 0b11011110) blinkduration = 2;blink();
    if(keypresscode == 0b10111110) blinkduration = 3;blink();
    if(keypresscode == 0b01111110) blinkduration = 4;blink();
    if(keypresscode == 0b11101101) blinkduration = 5;blink();
    if(keypresscode == 0b11011101) blinkduration = 6;blink();
    if(keypresscode == 0b10111101) blinkduration = 7;blink();
    if(keypresscode == 0b01110101) blinkduration = 8;blink();
    if(keypresscode == 0b11101011) blinkduration = 9;blink();
    if(keypresscode == 0b11011011) blinkduration = 10;blink();
    if(keypresscode == 0b10111011) blinkduration = 11;blink();
    if(keypresscode == 0b01111011) blinkduration = 12;blink();
    if(keypresscode == 0b11100111) blinkduration = 13;blink();
    if(keypresscode == 0b11010111) blinkduration = 14;blink();
    if(keypresscode == 0b10110111) blinkduration = 15;blink();
    if(keypresscode == 0b01110111) blinkduration = 16;blink();
}

void blink()
{
    for (int i; i < blinkduration; i++)
    {
        _delay_ms(100);
        PORTR_OUTCLR = 0x01; // blinking of led
        _delay_ms(100);
        PORTR_OUTSET = 0x01;
    }
}
This topic has a solution.
Nitin Shenoy
Last Edited: Thu. Jan 17, 2019 - 07:32 AM
If the LED is not blinking it is because your blink routine doesn't work or the blink routine is not called. I'll leave you to think how to check which.
David
I went through it several times but didn't find any solution, so I posted it here. I can't find any reason why the blink routine is not called.
Nitin Shenoy
nitinjs wrote:
I went through it several times but didn't find any solution, so I posted it here. I can't find any reason why the blink routine is not called.
Does your blink routine work?
#1 This forum helps those that help themselves
#2 All grounds are not created equal
#3 How have you proved that your chip is running at xxMHz?
#4 "If you think you need floating point to solve the problem then you don't understand the problem. If you really do need floating point then you have a problem you do not understand." - Heater's ex-boss
I cant find any reason why blink routine is not called.
Can you tell me when it is called then.
David
Every time the keypress condition holds good, the blinkduration is set to 1 and the blink routine is called. The for loop works.
Nitin Shenoy
Your code will not compile. You have not posted the code as passed to your compiler. Please cut and paste the EXACT code and use the CODE tags (<> button) so that it appears correctly.
nitinjs wrote:
Every time the keypress condition holds good, the blinkduration is set to 1 and the blink routine is called. The for loop works.
No. Not true.
Hint...read a good book on structuring single line IF statements.
You have 16 states for the keypress. What happens if the keypress is not one of those 16 states?
David
edit - typo
Last Edited: Sat. Jan 12, 2019 - 05:21 PM
Exact code, with the blink routine removed:
#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>

void keypadscan(void);
// void blink(void);

int main()
{
    PORTR_DIRSET = 0x01;
    PORTC.DIR = 0b00001111;
    PORTC.OUT = 0b11110000;

    while(1)
    {
        keypadscan();
    }
}

void keypadscan()
{
    if(PORTC.IN == 0b11110000) return
    _delay_ms(50);
    uint8_t keypresscode = (PORTC.IN);
    PORTC.DIR = 0b11110000;
    PORTC.OUT = 0b00001111;
    asm volatile("nop");
    asm volatile("nop");
    keypresscode |= (PORTC.IN);
    uint8_t blinkduration = 0;
    if(keypresscode == 0b11101110) blinkduration = 1;
    if(keypresscode == 0b11011110) blinkduration = 2;
    if(keypresscode == 0b10111110) blinkduration = 3;
    if(keypresscode == 0b01111110) blinkduration = 4;
    if(keypresscode == 0b11101101) blinkduration = 5;
    if(keypresscode == 0b11011101) blinkduration = 6;
    if(keypresscode == 0b10111101) blinkduration = 7;
    if(keypresscode == 0b01110101) blinkduration = 8;
    if(keypresscode == 0b11101011) blinkduration = 9;
    if(keypresscode == 0b11011011) blinkduration = 10;
    if(keypresscode == 0b10111011) blinkduration = 11;
    if(keypresscode == 0b01111011) blinkduration = 12;
    if(keypresscode == 0b11100111) blinkduration = 13;
    if(keypresscode == 0b11010111) blinkduration = 14;
    if(keypresscode == 0b10110111) blinkduration = 15;
    if(keypresscode == 0b01110111) blinkduration = 16;
    if (keypresscode < 0b11111111)
    {
        for (int i; i < blinkduration; i++)
        {
            _delay_ms(400);
            PORTR_OUTCLR = 0x01;
            _delay_ms(400);
            PORTR_OUTSET = 0x01;
        }
    }
}
Nitin Shenoy
nitinjs wrote:
Exact code, with the blink routine removed:
if(PORTC.IN == 0b11110000) return
It can't be the exact code because this line will throw an error.
It actually isn't... It's compiling without error.
Nitin Shenoy
nitinjs wrote:
It actually isn't... It's compiling without error.
Well that's your first problem then that needs sorting out, that missing ';' is most definitely an error. You need to find out why your compiler is not throwing an error.
But you have a more serious problem because your code is doing what it should and blinking the LED exactly zero times. Think about it, how can any of your 'if' tests in keyscan() be true if you do not change PC4-7? They are set as low outputs...
PORTC.DIR = 0b11110000;
PORTC.OUT = 0b00001111;
...so a statement like this...
if(keypresscode == 0b01111110) blinkduration = 4;
...can never be true.
More than that, you never change PC4-7 so you are not scanning your keypad.
4x4 usually involves two 4 bit groups in 8 bits so you are going to find 0xF0 and 0x0F as very useful AND masks!
I didn't understand.
Nitin Shenoy
This reply has been marked as the solution.
As Brian pointed out:
PORTC.DIR = 0b11110000;
That suggests you are using the top 4 bits as output and the bottom 4 as input. So when you read the inputs you can use:
keypresscode = PORTC.IN & 0x0F;
That just sets the 4 upper bits to 0 so you no longer need to worry about those. However when I start to look at your code I realise this isn't going to help as it seems to only be half an implementation. I simply cannot see the place where you set the output row low ??
Can I suggest that rather than just typing C into your editor you take a step back and actually DESIGN how this code should work BEFORE you attempt to implement it. The 4x4 technique is fairly well known - it involves 4 output lines potentially connected to 4 input lines. Because pull-ups (rather than pull-downs) are usually employed, it then involves taking each of the 4 output bits low in turn (one at a time). For each you then read the input bits and check for any that are now pulled low. At the time of reading the input you might mask with 0x0F to isolate just 4 of the 8 bits.
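A minimal sketch of the decode half of that scan, kept hardware-free so only the logic shows (illustrative code, not from the thread; `decode_row` is a made-up name):

```c
#include <stdint.h>

/* After the scan loop drives one of the 4 output rows low, read the port,
 * mask with 0x0F to isolate the 4 input bits, and pass the result here.
 * A 0 bit means that column was pulled low by a pressed key (pull-ups
 * keep idle lines high). Returns the key index 0-15, or -1 for none. */
int8_t decode_row(uint8_t row, uint8_t cols)
{
    cols &= 0x0F;                      /* keep only the 4 input bits */
    for (uint8_t col = 0; col < 4; col++) {
        if (!(cols & (1u << col)))     /* low = pressed */
            return (int8_t)(row * 4 + col);
    }
    return -1;                         /* no key pressed in this row */
}
```

The hardware side of the loop would, for each `row` 0-3, drive just that output pin low, wait a couple of `nop`s for the lines to settle, then call `decode_row(row, PORTC.IN)`.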
Hey, thanks a lot everyone.I solved the mistakes i made.
PS: Had to change the whole code though.
Nitin Shenoy
Introduction: In the rapidly evolving landscape of UI/UX development services, selecting the appropriate framework for your project is crucial. UI development frameworks form the backbone of modern web and application development, influencing the user experience and the efficiency of the development process. This comparison guide will delve into the key aspects of various UI development frameworks, aiding you in making an informed decision that aligns with your project's requirements.
Understanding UI Development Frameworks: UI development frameworks provide a structured foundation for building user interfaces, offering pre-designed components, templates, and functionalities. They significantly expedite the development process and enhance consistency in design and performance. However, with a multitude of options available, careful consideration is necessary to ensure the right fit.
Key Factors for Comparison:
1. Technology Stack and Compatibility: Different UI frameworks are built using varying technologies (HTML, CSS, JavaScript, etc.) and may cater to specific platforms or browsers. Ensure compatibility with your target platforms before choosing a framework.
2. Component Library: The availability of a comprehensive set of pre-designed UI components can accelerate development. Evaluate the variety and customization options of components offered by each framework.
3. Flexibility and Customizability: Some projects demand unique designs and interactions. Evaluate how easily a framework can be tailored to match your project's specific requirements without sacrificing performance.
4. Performance and Speed: Page load times and responsiveness are critical factors in user satisfaction. Review benchmarks and performance metrics of different frameworks to select the one that aligns with your speed expectations.
5. Community and Support: A robust community ensures continuous updates, bug fixes, and a plethora of resources. Check the framework's community activity and available support channels.
6. Learning Curve: Consider the learning curve associated with each framework. A steeper learning curve might be acceptable for complex projects, but simpler solutions might be preferable for rapid development.
7. Integration with Other Tools: Evaluate how well the framework integrates with other tools in your development stack, such as version control systems, design software, or UI/UX prototyping tools.
Framework Comparison:
1. React: Known for its component-based architecture and virtual DOM, React offers high performance and reusability. It's popular in building dynamic single-page applications (SPAs) and has a massive community, ensuring ample support and third-party libraries.
2. Angular: A comprehensive framework by Google, Angular provides a complete solution for large-scale applications. It offers a wide range of tools and features, including dependency injection and two-way data binding.
3. Vue.js: Vue.js combines simplicity and performance, making it a great choice for projects of varying scales. Its progressive design allows it to be adopted incrementally in existing projects, making migration easier.
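All three frameworks build on the same core idea: a component is a reusable mapping from inputs ("props") to markup, and components compose. A framework-agnostic sketch of that idea in plain JavaScript (illustrative only — real React/Vue/Angular components add state, lifecycle hooks, and a rendering layer on top):

```javascript
// A "component" as a pure function from props to markup.
function Button({ label }) {
  return `<button class="btn">${label}</button>`;
}

// Composition: a parent component reuses Button for each action.
function Toolbar({ actions }) {
  return `<div class="toolbar">${actions.map((a) => Button({ label: a })).join("")}</div>`;
}

console.log(Toolbar({ actions: ["Save", "Cancel"] }));
// → <div class="toolbar"><button class="btn">Save</button><button class="btn">Cancel</button></div>
```

Because each component is an isolated unit, it can be tested, reused, and restyled independently — the property the component-library and customizability factors above are really measuring.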
Conclusion: Selecting the right UI development framework is a crucial step in delivering exceptional UI/UX development services. Consider your project's unique requirements, performance expectations, and the available resources before making your decision. By thoroughly comparing the frameworks based on the factors mentioned above, you'll be well-equipped to choose a framework that aligns perfectly with your project goals.
§ Quick Take
import { strict as assert } from "assert";
import { comb, defaults, version } from "email-comb";
// aptly named classes:
const source = `<head>
<style type="text/css">
.unused1[z] {a:1;}
.used[z] {a:2;}
</style>
</head>
<body class=" used "><a class="used unused3">z</a>
</body>
`;
const intended = `<head>
<style type="text/css">
.used[z] {a:2;}
</style>
</head>
<body class="used"><a class="used">z</a>
</body>
`;
assert.equal(comb(source).result, intended);
§ Idea
This library removes unused CSS from HTML without parsing it
STRENGTHS:
• Aimed at Email development but works everywhere where CSS is contained within the same HTML file (no external stylesheets)
• Accepts HTML mixed with other templating/programming languages
• Works on broken or incomplete or invalid HTML/XHTML code
• Works on both classes and id's
• Optionally uglifies the class or id names
• The algorithm will cope with style tags inside the body tag or multiple style tags
• Can strip CSS and HTML comments; recognises Outlook conditional comments (both "if-Outlook" and "if-not-Outlook")
• Has email-specific features like removing empty Outlook conditional comments
• Attempts to fix some code issues, for example, remove space in < body (which would otherwise break in Chrome)
• API contains no file I/O operations or anything front-end-related — it's "string-in, string-out"
• All dependencies are either our own or Lodash's or Mr Sorhus'
• CommonJS, ES Modules and UMD builds available, published to npm and available to consume via CDN's like jsdelivr.com
• Complete console logging set and retained in the source (which is automatically removed from builds)
• Modern setup: node-tap tests pointing at ES Modules build, Rollup bundling the builds, coverage high, prettier and ESLint in place
• It's not opinionated - it won't W3C-validate, enforce DOCTYPE's or add any new code to your code. Some parsers, for example, just can't stand an HTML without a DOCTYPE.
• It's quite fast. We measure performance on a file with 2,000 redundant css styles and it takes less than a second.
WEAKNESSES:
• This is typical for non-parsing programs — broken code normally breaks parsers, and when using parser-based programs that's how you find out something's wrong with your code. EmailComb, being a non-parsing program, will never break! That means you have to find other means (like linters) to detect whether your code is broken. This might be a strength or a weakness, depending on how you look at it.
• Does not support external stylesheets or JS injecting more classes (because it's an email development-oriented tool)
COMPETITORS:
We believe that, being an email-oriented tool for email templates, EmailComb is superior to all web-development-oriented unused CSS removal tools out there. But try them yourself.
§ API
This package exports a plain object: { comb, defaults, version }:
• Key comb has a value which is the main function; you will call that function like this: comb()
• Key defaults has a value, a plain object, which holds the defaults of the main function
• Key version is a string, for example "2.0.12", and mirrors the same key in package.json
§ API - Input
The main function comb which you require/import
comb(str, [options])
takes two input arguments:
| Input argument | Type | Obligatory? | Description |
| --- | --- | --- | --- |
| str | String | yes | Your HTML file contents, as string |
| options | Plain object | no | Any options, as a plain object, see below |
§ API - Optional Options Object
Optionally, you can pass the Optional Options Object as a second argument:
| Options object's key | Type | Default | Example | Description |
| --- | --- | --- | --- | --- |
| whitelist | Array | [] | [".class-1", "#id-1", ".module-*"] | List all classes or id's you want this library to ignore. You can use all matcher patterns. |
| backend | Array | [] | [{ heads: "{{", tails: "}}" }, { heads: "{%", tails: "%}" }] | If your code has back-end code within class or id values, for example class="{{ red }} main-box", you can stop {{, red and }} from being treated as class names |
| uglify | Boolean | false | n/a | Will rename all class and id names to be a few characters long. This might reduce your file size by another kilobyte. |
| removeHTMLComments | Boolean | true | n/a | When enabled, all HTML comments (<!-- to -->) will be removed |
| removeCSSComments | Boolean | true | n/a | When enabled, all CSS comments (/* to */) will be removed |
| doNotRemoveHTMLCommentsWhoseOpeningTagContains | Array of zero or more case-insensitive strings | ["[if", "[endif"] | n/a | Email code often contains Outlook or IE conditional comments which you probably don't want to remove. If a comment's opening tag contains any of the strings you list here, that comment will not be removed. |
| reportProgressFunc | Function or something falsy | null | n/a | If supplied, the algorithm will ping this function, passing the current percentage done (a natural number) as an input argument |
| reportProgressFuncFrom | Natural number | 0 | n/a | By default, percentages are reported from 0 to 100. This value overrides the starting percentage value. |
| reportProgressFuncTo | Natural number | 100 | n/a | By default, percentages are reported from 0 to 100. This value overrides the ending percentage value. |
Here are all options in one place in case you need to copy the whole thing:
{
whitelist: [],
backend: [],
uglify: false,
removeHTMLComments: true,
removeCSSComments: true,
doNotRemoveHTMLCommentsWhoseOpeningTagContains: ["[if", "[endif"],
reportProgressFunc: null,
reportProgressFuncFrom: 0,
reportProgressFuncTo: 100,
}
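The reportProgressFuncFrom/reportProgressFuncTo pair simply remaps the reported percentage into a sub-range, which is handy when comb() is only one stage of a longer build pipeline. A minimal sketch of that remapping in Python (illustrative only, not the library's code):

```python
def remap_percentage(done, lo=0, hi=100):
    """Map a 0-100 progress value into the [lo, hi] sub-range."""
    return round(lo + (hi - lo) * done / 100)

# report comb()'s progress as the 40%-60% slice of a bigger pipeline:
print(remap_percentage(0, 40, 60))    # 40
print(remap_percentage(50, 40, 60))   # 50
print(remap_percentage(100, 40, 60))  # 60
```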
§ API - Output
For example, output could look like this:
{
log: {
timeTakenInMiliseconds: 55,
traversedTotalCharacters: 504,
traversedTimesInputLength: 4.24,
originalLength: 118,
cleanedLength: 87,
bytesSaved: 32,
percentageReducedOfOriginal: 27,
nonIndentationsWhitespaceLength: 9,
nonIndentationsTakeUpPercentageOfOriginal: 8,
commentsLength: 10,
commentsTakeUpPercentageOfOriginal: 1,
},
result: "<html>...",
countAfterCleaning: 3,
countBeforeCleaning: 15,
allInHead: allClassesAndIdsWithinHead,
allInBody: allClassesAndIdsWithinBody,
deletedFromHead: [".unused1", ".unused2"],
deletedFromBody: [".unused1", ".unused1", "#unused1"],
}
So a plain object is returned. It will have the following keys:
| Key | Its value's type | Description |
| --- | --- | --- |
| log | Plain object | Various information about performed operations |
| result | String | A string containing cleaned HTML |
| countBeforeCleaning | Number | How many unique classes and id's were in total before cleaning |
| countAfterCleaning | Number | How many unique classes and id's were in total after cleaning |
| allInHead | Array | Deduped and sorted array of all classes and id's between <head> tags |
| allInBody | Array | Deduped and sorted array of all classes and id's between <body> tags |
| deletedFromHead | Array | Array of classes/id's that were deleted inside <head> at least once^ |
| deletedFromBody | Array | Array of classes/id's that were deleted inside <body> at least once^ |
^ To be very precise, if a class or id name was deleted at least once, it gets into this list. Mind you, some used classes or id's can be sandwiched with unused ones (.used.unused) and end up removed in some instances and get reported here, but that does not mean they were removed completely.
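Conceptually, the head/body bookkeeping above boils down to set arithmetic: a selector survives only if it appears on both sides. A rough Python sketch of the idea (not the library's actual algorithm):

```python
head = {".used", ".unused1", "#unused1"}  # selectors found in <head> CSS
body = {".used", ".unused3"}              # classes/id's found in <body> markup

kept = head & body                # survives cleaning
deleted_from_head = head - body   # styles that nothing in the body uses
deleted_from_body = body - head   # attributes that no CSS rule targets

print(sorted(kept))               # ['.used']
print(sorted(deleted_from_head))  # ['#unused1', '.unused1']
print(sorted(deleted_from_body))  # ['.unused3']
```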
§ opts.whitelist
Since the main purpose of this library is to clean email HTML, it needs to cater for email code specifics. One of them is that CSS styles will contain fix or hack styles meant for email software. For example, here are a few of them:
#outlook a { padding:0; } .ReadMsgBody { width:100%; }
.ExternalClass fixes are not needed any more in email templates, see email-bugs/issues/4
You will not be using these classes within the <body> of your HTML code, so they would get removed as "unused" because they are present in <head> only. To avoid that, pass the classes and id's in the whitelist key's value, as an array. For example:
var html = "<!DOCTYPE html>...";
comb(html, {
whitelist: ["#outlook", ".ExternalClass", ".ReadMsgBody"],
});
You can also use a wildcard, for example in order to whitelist classes module-1, module-2 ... module-99, module-100, you can simply whitelist them as module-*:
var html = "<!DOCTYPE html>...";
comb(html, {
whitelist: [".module-*"],
});
// => all class names that begin with ".module-" will not be touched by this library.
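The wildcard follows glob-style semantics. Roughly the same matching behaviour, expressed with Python's fnmatch for illustration (the library's own matcher may differ in details):

```python
from fnmatch import fnmatchcase

whitelist = ["#outlook", ".ExternalClass", ".module-*"]

def is_whitelisted(name):
    # a selector is protected if any whitelist pattern matches it
    return any(fnmatchcase(name, pattern) for pattern in whitelist)

print(is_whitelisted(".module-42"))  # True
print(is_whitelisted(".unused-1"))   # False
```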
§ opts.backend
This library, differently from competition, is aiming to support code which contains back-end code: other programming languages (Java JSP's), other templating languages (like Nunjucks) and/or proprietary ESP templating languages.
All different languages can be present in the input source, and the library won't care, EXCEPT when they are in class or id names. For example, <td class="mt10 {{ module.on }} module-box blackbg">. Notice how {{ module.on }} sits in the middle: it's a variable from a different templating language. Eventually it will be rendered into the string on or off, but at this stage this is a raw, unrendered template, and we want to remove all unused CSS from it.
It's possible to clean this too.
If you let this library know how your back-end language's variables are marked (for example, that "heads" are {{ and "tails" are }}, as in Hi {{data.firstname}}), the algorithm will ignore all variables within class or id names.
If you don't put templating variables into classes or id's, don't use the feature because it still costs computing resources to perform those checks.
Here's an example:
// Require it first. You get a function which you can feed with strings.
// Notice you can name it any way you want (because in the source it's using "export default").
const { comb } = require("email-comb");
// Let's define a string equal to some processed HTML:
const res = comb(
`<!doctype html>
<html>
<head>
<style>
.aaa {
color: black;
}
</style></head>
<body class="{% var1 %}">
<div class="{{ var2 }}">
</div>
</body>
</html>
`
,
{
// <------------ Optional Options Object - second input argument of our function, remove()
backend: [
{
heads: "{{", // define heads and tails in pairs
tails: "}}",
},
{
heads: "{%", // second pair
tails: "%}",
},
],
}
).result; // <------ output of this library is a plain object. String result is in a key "result". We grab it here.
// Log the result:
console.log("res =\n" + res);
// res =
// <!doctype html>
// <html>
// <head>
// </head>
// <body class="{% var1 %}">
// <div class="{{ var2 }}">
// </div>
// </body>
// </html>
//
In templating languages, it's also possible to have IF-ELSE clauses. For example, in Nunjucks, you can have:
<td class="db{% if module_on || oodles %}on{% else %}off{% endif %} pt10"></td>
db and pt10 are normal CSS class names, but everything else between {% and %} is Nunjucks code.
Now, in those cases, notice that Nunjucks code is only wrapping the variables. Even if you set heads to {% and tails to %}, classes on and off will not get ignored and theoretically can get removed!!!
The solution is to ensure that all back-end class names are contained within back-end tags. With Nunjucks, it is easily done by performing calculations outside class= declarations, then assigning the calculation's result to a variable and using the variable instead.
For example, let's rewrite the same snippet used above:
{% set switch = 'off' %} {% if module_on || oodles %} {% set switch = 'on' %} {%
else %}
<td class="db {{ switch }} pt10"></td>
Now, set heads to {{ and tails to }} and switch will be ignored completely.
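Under stated assumptions (a simplistic non-greedy scan, nothing like the library's real tokenizer), the heads/tails idea can be sketched in Python. Note how the second call reproduces the caveat above: on and off leak through as class-name candidates when the tags merely wrap them:

```python
import re

def class_names(attr_value, pairs):
    """Blank out heads...tails spans, then split the rest into class names."""
    for heads, tails in pairs:
        attr_value = re.sub(re.escape(heads) + r".*?" + re.escape(tails), " ", attr_value)
    return attr_value.split()

pairs = [("{{", "}}"), ("{%", "%}")]
print(class_names("db {{ switch }} pt10", pairs))
# ['db', 'pt10']
print(class_names("db{% if module_on %}on{% else %}off{% endif %} pt10", pairs))
# ['db', 'on', 'off', 'pt10']  <- 'on'/'off' are not protected
```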
§ Tapping the stream in Gulp
In Gulp, everything flows as vinyl Buffer streams. You could tap the stream, convert it to a string, perform the operations (like removing unused CSS), then convert it back to a Buffer and place it back in the stream. We wanted to come up with a visual analogy example using waste pipes but thought we'd rather not.
Code-wise, here's the idea:
const tap = require("gulp-tap");
const { comb } = require("email-comb");
const util = require("gulp-util");
const whitelist = [
".External*",
".ReadMsgBody",
".yshortcuts",
".Mso*",
"#outlook",
".module*",
];
gulp.task("build", () => {
return gulp.src("emails/*.html").pipe(
tap((file) => {
const cleanedHtmlResult = comb(file.contents.toString(), {
whitelist,
});
util.log(
util.colors.green(
`\nremoved ${
cleanedHtmlResult.deletedFromHead.length
}
from head: ${cleanedHtmlResult.deletedFromHead.join(" ")}`
)
);
util.log(
util.colors.green(
`\nremoved ${
cleanedHtmlResult.deletedFromBody.length
}
from body: ${cleanedHtmlResult.deletedFromBody.join(" ")}`
)
);
file.contents = Buffer.from(cleanedHtmlResult.result);
})
);
});
§ Extreme example of unused CSS
This piece of HTML doesn't even have a <head>, and the <style> CSS is at the very bottom, within <body>. Our application still cleans it all right:
<html>
<body id="unused-1">
<table class="unused-2 unused-3">
<tr>
<td class="unused-4 unused-5">text</td>
</tr>
</table>
<style>
.unused-6 {
display: block;
}
#unused-7 {
height: auto;
}
</style>
</body>
</html>
Cleaned result:
<html>
<body>
<table>
<tr>
<td>text</td>
</tr>
</table>
</body>
</html>
§ Removing unused CSS from web pages
This library is meant to be used on any HTML where there are no external CSS stylesheets. It's quite rare to find a web page which would not have any external stylesheets.
§ Processing campaigns' HTML
Email templates, the HTML files, are coded in two stages: 1) design file to static HTML; 2) static HTML to "campaign" - HTML with all templating.
For example, Price is {% if data.purchasePrice > 100 %}...{% endif %} is HTML mixed with Nunjucks/Jinja - that greater-than bracket is not an HTML bracket.
email-comb will work fine on both static HTML and wired-up campaign HTML. As a non-parsing tool, it skips the code it "doesn't understand".
§ Changelog
See it in the monorepo, on Sourcehut.
§ Licence
MIT
Copyright © 2010–2020 Roy Revelt and other contributors
Related articles:
Related packages:
📦 html-crush 3.0.3
Minifies HTML/CSS: valid or broken, pure or mixed with other languages
📦 email-all-chars-within-ascii 2.10.1
Scans all characters within a string and checks are they within ASCII range
📦 email-homey 2.8.0
Generate homepage in the Browsersync root with links/screenshots to all your email templates
📦 email-all-chars-within-ascii-cli 1.11.1
Command line app to scan email templates, are all their characters within ASCII range
require 'nokogiri'
require 'pstore'
#
# Rack::DomainSprinkler
#
# Modifies outgoing HTML markup so that common static assets like script source
# files, images, and stylesheets will be served from one of a number of domains
# rather than just a single one in order to improve parallelization of resource
# downloads during page loads.
#
module Rack
class DomainSprinkler
#
# Options:
# A list of domains across which to spread requests
# The name of the file to be used for caching URL/path mappings
#
def initialize(app, domains=[], cache_loc="tmp/sprinklins.pstore")
@app = app
@@domains = domains
@@url_cache = PStore.new(cache_loc)
end
def call(env)
dup._call(env)
end
#
# Currently only looks for <link>, <script>, and <img> tags.
# Should be refactored to provide more generic mapping.
#
def _call(env)
status, headers, response = @app.call(env)
if headers["Content-Type"] =~ /^text\/html/
document = Nokogiri::HTML(response.body)
sprinkle(env, document, 'link', 'href')
sprinkle(env, document, 'script', 'src')
sprinkle(env, document, 'img', 'src')
response.body = document.to_html
end
[status, headers, response]
end
def sprinkle(env, document, selector, property)
document.search(selector).each do |node|
url = node[property]
if !url.empty? and url !~ /^\#/ and url !~ /^http\:\/\//
node[property] = domainify(url, env)
end
end
end
def domainify(path, env)
# make relative paths absolute before assigning a domain
if path !~ /^\//
path = absolutify(path, env)
end
full_url = nil
@@url_cache.transaction do
unless @@url_cache[path]
domain = @@domains.shift and @@domains.push(domain)
@@url_cache[path] = "http://#{ domain }#{ path }"
end
full_url = @@url_cache[path]
end
full_url
end
def absolutify(path, env)
path_prefix = env['REQUEST_PATH']
unless path_prefix =~ /\/$/
path_prefix.sub!(/\/[^\/]+$/, '')
end
"#{ tmp_path }/#{ path }"
end
end
end
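The core trick in domainify, rotating through the domain list while caching each path's assignment so a given asset always resolves to the same host, can be sketched in Python (hypothetical domain names; an in-memory dict stands in for the PStore cache):

```python
from itertools import cycle

class DomainSprinkler:
    def __init__(self, domains):
        self._domains = cycle(domains)  # round-robin, like the shift-and-push
        self._cache = {}                # path -> full URL, kept stable

    def domainify(self, path):
        if path not in self._cache:
            self._cache[path] = "http://%s%s" % (next(self._domains), path)
        return self._cache[path]

s = DomainSprinkler(["a.assets.example", "b.assets.example"])
print(s.domainify("/css/app.css"))  # http://a.assets.example/css/app.css
print(s.domainify("/js/app.js"))    # http://b.assets.example/js/app.js
print(s.domainify("/css/app.css"))  # cached: same URL as the first call
```

Keeping the mapping stable matters: if the same asset bounced between domains on every page load, the browser cache would be defeated.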
$string = ":abc and :def have apples.";
$replacements = array('Mary', 'Jane');
should become:
Mary and Jane have apples.
Right now I'm doing it like this:
preg_match_all('/:(\w+)/', $string, $matches);
foreach($matches[0] as $index => $match)
$string = str_replace($match, $replacements[$index], $string);
Can I do this in a single run, using something like preg_replace?
This is how you can do it with an associative array. – Teneff Feb 23 '12 at 15:53
You could use preg_replace_callback with a callback that consumes your replacements one after the other:
$string = ":abc and :def have apples.";
$replacements = array('Mary', 'Jane');
echo preg_replace_callback('/:\w+/', function($matches) use (&$replacements) {
return array_shift($replacements);
}, $string);
Output:
Mary and Jane have apples.
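For comparison, the same consume-one-replacement-per-match idea written in Python, purely as an illustration (re.sub accepts a callable, just like preg_replace_callback):

```python
import re

string = ":abc and :def have apples."
replacements = iter(["Mary", "Jane"])

# the callable is invoked once per match, consuming one replacement each time
result = re.sub(r":\w+", lambda m: next(replacements), string)
print(result)  # Mary and Jane have apples.
```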
the e modifier has been deprecated as of PHP v5.5 – Kareem Oct 25 '14 at 6:44
@Karim: Right, have removed it from the answer. Thanks for the pointer. – hakre Oct 25 '14 at 9:06
Try this
$to_replace = array(':abc', ':def', ':ghi');
$replace_with = array('Marry', 'Jane', 'Bob');
$string = ":abc and :def have apples, but :ghi doesn't";
$string = strtr($string, array_combine($to_replace, $replace_with));
echo $string;
here is result: http://sandbox.onlinephpfunctions.com/code/7a4c5b00f68ec40fdb35ce189d26446e3a2501c2
For a multiple and full array replacement by associative key, you can use this to match your regex pattern:
$words=array("_saudation_"=>"Hello", "_animal_"=>"cat", "_animal_sound_"=>"MEooow");
$source=" _saudation_! My Animal is a _animal_ and it says _animal_sound_ ... _animal_sound_ , _no_match_";
function translate_arrays($source, $words) {
  return preg_replace_callback("/\b_(\w*)_\b/u", function ($match) use ($words) {
    if (isset($words[$match[0]])) {
      return $words[$match[0]];
    }
    return $match[0];
  }, $source);
}
echo translate_arrays($source,$words);
//returns: Hello! My Animal is a cat and it says MEooow ... MEooow , _no_match_
*Notice that although "_no_match_" lacks a translation, it will still match during the regex pass but preserve its key. And keys can repeat many times.
I recommend to add "u" modifier to the regex for supporting UTF-8 strings: /\b_(\w*)_\b/u. BTW, code above has a syntax error, and extra parenthesis at the end. – nikoskip Jan 6 '15 at 18:08
Looks like there is no extra parenthesis issue; I verified the code and it runs OK. But I added the UTF-8 modifier. Thank you – Miguel Jan 9 '15 at 16:00
$string = ":abc and :def have apples.";
$replacements = array('Mary', 'Jane');
echo preg_replace("/:\\w+/e", 'array_shift($replacements)', $string);
Output:
Mary and Jane have apples.
This won't work on HHVM; any workarounds? – Mario May 1 '14 at 11:23
CreateTextFile Method
Creates a specified file name and returns a TextStream object that can be used to read from or write to the file.
object.CreateTextFile(filename[, overwrite[, unicode]])
object
Required. Always the name of a FileSystemObject or Folder object.
filename
Required. String expression that identifies the file to create.
overwrite
Optional. Boolean value that indicates whether you can overwrite an existing file. The value is true if the file can be overwritten, false if it can't be overwritten. If omitted, existing files are not overwritten.
unicode
Optional. Boolean value that indicates whether the file is created as a Unicode or ASCII file. The value is true if the file is created as a Unicode file, false if it's created as an ASCII file. If omitted, an ASCII file is assumed.
The following code illustrates how to use the CreateTextFile method to create and open a text file.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var a = fso.CreateTextFile("c:\\testfile.txt", true);
a.WriteLine("This is a test.");
a.Close();
Sub CreateAfile
Dim fso, MyFile
Set fso = CreateObject("Scripting.FileSystemObject")
Set MyFile = fso.CreateTextFile("c:\testfile.txt", True)
MyFile.WriteLine("This is a test.")
MyFile.Close
End Sub
If the overwrite argument is false, or is not provided, and filename refers to a file that already exists, an error occurs.
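The overwrite flag's behaviour, create-or-error versus create-or-replace, maps onto exclusive-create semantics found in other environments; here is a Python sketch of the same idea:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "testfile.txt")

with open(path, "x") as f:   # exclusive create, like overwrite omitted/False
    f.write("This is a test.\n")

try:
    open(path, "x")          # the file now exists, so this errors
except FileExistsError:
    print("already exists")

with open(path, "w") as f:   # overwrite=True equivalent: replace contents
    f.write("replaced\n")
```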
I have created an animation object that moves down the strip and can either bounce off the end or loop back to the beginning. It erases itself perfectly after every frame so it overlaps other animations without any ill effects. The ends are anti-aliased so slow movement still looks very smooth. Also I'm running 240 of them completely independently at 80 frames per second across 8 strips with 300 leds each!
Am feeling really pumped so just thought I would share! There is still a lot of work to do but it's nice to see some results.
I'm beginning to think I have the ability to do this...
I am outputting every value in the sinelon func to the serial to understand what they equate to (I call this function once and then delay it for a few minutes so I can grab the serial output and take a picture):
How I am doing this:
void sinelon_pixels() {
fadeToBlackBy( leds, NUM_LEDS, 220);
int pos1 = millis()/100;
for (uint8_t i = 0; i<156; i++) {
int thisbright = cubicwave8((i*5));
thisbright = thisbright*thisbright/255;
leds[(i+pos1)%NUM_LEDS] = ColorFromPalette( currentPalette, gHue, thisbright, LINEARBLEND);
Serial.print("i: ");
Serial.println(i);
Serial.print("pos1: ");
Serial.println(pos1);
Serial.print("(i+pos1) % 156: ");
Serial.println(((i+pos1) % 156));
Serial.print("thisbright: ");
Saturday, April 26, 2014
Run multiple AsyncTasks at the same time
AsyncTask enables proper and easy use of the UI thread. This class allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.
When first introduced, AsyncTasks were executed serially on a single background thread. Starting with DONUT, this was changed to a pool of threads allowing multiple tasks to operate in parallel. Starting with HONEYCOMB, tasks are executed on a single thread to avoid common application errors caused by parallel execution.
If you truly want parallel execution, you can invoke executeOnExecutor(java.util.concurrent.Executor, Object[]) with THREAD_POOL_EXECUTOR.
~ reference: AsyncTask | Android Developers
This example shows how to execute multiple AsyncTasks at the same time, in parallel, by calling executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR) (for Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) in our StartAsyncTaskInParallel() method. The first three ProgressBars are updated by AsyncTasks executed the normal way, by calling execute(); the last two ProgressBars are updated by AsyncTasks executed in parallel.
MainActivity.java
package com.example.androidparallelasynctask;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ProgressBar;
import android.annotation.TargetApi;
import android.app.Activity;
import android.os.AsyncTask;
import android.os.Build;
import android.os.Bundle;
import android.os.SystemClock;
public class MainActivity extends Activity {
public class MyAsyncTask extends AsyncTask<Void, Integer, Void> {
ProgressBar myProgressBar;
public MyAsyncTask(ProgressBar target) {
myProgressBar = target;
}
@Override
protected Void doInBackground(Void... params) {
for(int i=0; i<100; i++){
publishProgress(i);
SystemClock.sleep(100);
}
return null;
}
@Override
protected void onProgressUpdate(Integer... values) {
myProgressBar.setProgress(values[0]);
}
}
Button buttonStart;
ProgressBar progressBar1, progressBar2, progressBar3, progressBar4, progressBar5;
MyAsyncTask asyncTask1, asyncTask2, asyncTask3, asyncTask4, asyncTask5;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
progressBar1 = (ProgressBar)findViewById(R.id.progressbar1);
progressBar2 = (ProgressBar)findViewById(R.id.progressbar2);
progressBar3 = (ProgressBar)findViewById(R.id.progressbar3);
progressBar4 = (ProgressBar)findViewById(R.id.progressbar4);
progressBar5 = (ProgressBar)findViewById(R.id.progressbar5);
buttonStart = (Button)findViewById(R.id.start);
buttonStart.setOnClickListener(new OnClickListener(){
@Override
public void onClick(View v) {
asyncTask1 = new MyAsyncTask(progressBar1);
asyncTask1.execute();
asyncTask2 = new MyAsyncTask(progressBar2);
asyncTask2.execute();
asyncTask3 = new MyAsyncTask(progressBar3);
asyncTask3.execute();
asyncTask4 = new MyAsyncTask(progressBar4);
StartAsyncTaskInParallel(asyncTask4);
asyncTask5 = new MyAsyncTask(progressBar5);
StartAsyncTaskInParallel(asyncTask5);
}});
}
@TargetApi(Build.VERSION_CODES.HONEYCOMB)
private void StartAsyncTaskInParallel(MyAsyncTask task) {
if(Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB)
task.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
else
task.execute();
}
}
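The serial-versus-parallel distinction is not Android-specific; any thread pool shows the same behaviour. A Python sketch using concurrent.futures, purely to illustrate the difference (a Barrier is used so the parallel case provably overlaps):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def count_worker_threads(pool_size, n_tasks=3):
    """Run n_tasks trivial tasks and report how many threads served them."""
    seen = set()
    # the barrier only opens if all n_tasks are running at the same time
    barrier = threading.Barrier(n_tasks) if pool_size >= n_tasks else None

    def task(_):
        seen.add(threading.get_ident())
        if barrier:
            barrier.wait(timeout=5)

    with ThreadPoolExecutor(max_workers=pool_size) as executor:
        list(executor.map(task, range(n_tasks)))
    return len(seen)

print(count_worker_threads(1))  # 1 - tasks queue up, like plain execute() on HONEYCOMB+
print(count_worker_threads(3))  # 3 - tasks overlap, like THREAD_POOL_EXECUTOR
```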
activity_main.xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:orientation="vertical"
tools:context="com.example.androidparallelasynctask.MainActivity" >
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal"
android:autoLink="web"
android:text="http://android-er.blogspot.com/"
android:textStyle="bold" />
<Button
android:id="@+id/start"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="Start"/>
<ProgressBar
android:id="@+id/progressbar1"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:max="100"
android:progress="0" />
<ProgressBar
android:id="@+id/progressbar2"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:max="100"
android:progress="0" />
<ProgressBar
android:id="@+id/progressbar3"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:max="100"
android:progress="0" />
<ProgressBar
android:id="@+id/progressbar4"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:max="100"
android:progress="0" />
<ProgressBar
android:id="@+id/progressbar5"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:max="100"
android:progress="0" />
</LinearLayout>
This video shows how it runs on devices running various Android versions, including:
- Nexus 7 (1st generation), Android 4.4.2
- HTC One X, Android 4.2.2
- HTC Flyer, Android 3.2.1
- Nexus One, Android 2.3.6
Download the files.
# Guide to contributing

Please read this if you intend to contribute to the project.

## Legal stuff

Apologies in advance for the extra work required here - this is necessary to comply with the Eclipse Foundation's strict IP policy.

Please also read [this](http://wiki.eclipse.org/Development_Resources/Contributing_via_Git).

In order for any contributions to be accepted, you MUST do the following things:

* Sign the [Eclipse Contributor Agreement](http://www.eclipse.org/legal/ECA.php). To sign the Eclipse Contributor Agreement you need to:
  * Obtain an Eclipse Foundation account. Anyone who currently uses Eclipse Bugzilla or Gerrit already has an Eclipse account. If you don't, you need to [register](https://dev.eclipse.org/site_login/createaccount.php).
  * Login to the [projects portal](https://projects.eclipse.org/), select "My Account", and then the "Eclipse ECA" tab.
* Add your GitHub ID in your Eclipse Foundation account. Find this option in the "Personal Information" tab under "Edit Your Account".
* "Sign-off" your commits! Every commit you make in your patch or pull request MUST be "signed off". You do this by adding the `-s` flag when you make the commit(s), e.g.

      git commit -s -m "your commit message..."

## Making your changes

* Fork the repository on GitHub.
* Create a new branch for your changes.
* Make your changes.
* Make sure you include tests.
* Make sure the test suite passes after your changes.
* Commit your changes into that branch.
* Use descriptive and meaningful commit messages.
* If you have a lot of commits, squash them into a single commit.
* Make sure you use the `-s` flag when committing as explained above.
* Push your changes to your branch in your forked repository.

## Submitting the changes

Submit a pull request via the normal GitHub UI.

## After submitting

* Do not use your branch for any other development, otherwise further changes that you make will be visible in the PR.
# Credit

This document was originally written by the ICE team at https://raw.githubusercontent.com/eclipse/ice/master/CONTRIBUTING.md

We have shamelessly copied, modified and co-opted it for our own repo and we graciously acknowledge the work of the original authors.
Google could soon mark your website as not secure
Author: Dave Meier
Category: Web Design
For the last few years, Google has been slowly pushing website owners towards the use of HTTPS (also referred to as SSL). Nearly all e-commerce websites already have some form of HTTPS, but now Google has ramped things up for all website owners and is pushing website owners to switch to HTTPS, or else.
Most websites run on HTTP. What’s the difference between HTTP and HTTPS you ask?
The problem is that HTTP (note: no “s” on the end) data is not encrypted, and it can be intercepted by third parties to gather data being passed between the two systems.
I’ve seen a rise in people coming to Hidden Depth needing our help after their website got hacked. HTTPS is not going to stop your website being hacked, but it is a start and should be the least security provided to website visitors. Now Google is forcing the issue, so unless you want visitors leaving your site in their droves, then it’s time to make the switch to HTTPS.
Why do I need HTTPS?
Imagine you have to pay for something in a shop. You hand your credit card to a complete stranger, they pass it to someone else, and then someone else, and so on. Eventually, the card reaches the cashier. How do you know someone along the way did not steal your information? Scary isn’t it. Any information you exchange with a website not encrypted with HTTPS is doing something similar. Without encryption, your information can be seen by others.
HTTPS is designed to provide security for website owners and visitors. It encrypts the information you enter on a website and stops potential hackers sniffing out your information. Without HTTPS you could be revealing your email address, password, search history, and anything else you type into a web browser.
What will HTTPS do for me?
Moving your website to HTTPS will provide you with some benefits:
• SEO – Google has already stated at the start of this year that HTTPS is now a ranking factor for websites. In April Moz reported that half of all page 1 Google search results are now HTTPS.
• Security – HTTPS encrypts the connection, so you are keeping your visitor’s information safe and secure
• Trust – Users want to know that their information is secure, they want to know their search history and passwords are private. If you see a warning saying a website is Not secure would you feel comfortable filling in personal details into a form?
• Traffic / Leads – Unless you want visitors leaving your site in their droves, then it’s time to make the switch to HTTPS.
This affects everyone!
The new warning in Google Chrome is not just for websites that take payments online. Warnings will be shown for every website that has a form that someone can enter any data into. This could be a search bar, your contact form, a payment page or a newsletter sign up form. As soon as the user types anything into a form field, the warning will display. This change in the address bar is bound to catch your attention and make you think twice about completing the form.
How do I make the switch to HTTPS?
1. To serve a website over HTTPS, you need an SSL certificate. Your web designer or hosting company can help you with this.
2. You also need to make sure all links in your website are changed from HTTP to HTTPS. Again, your web designer is the person to talk to here.
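Once a certificate is installed, the web server also needs to redirect plain HTTP traffic to HTTPS. As an illustration only (this sketch assumes nginx and a certificate already issued for example.com — the domain and certificate paths are placeholders to adapt to your setup):

```nginx
server {
    listen 80;
    server_name example.com;
    # Send every plain-HTTP request to the HTTPS version of the site
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;
    # ... rest of the site configuration ...
}
```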
I would like to generate a figure that has a combination of base and ggplot graphics. The following code shows my figure using the base plotting functions of R:
t <- c(1:(24*14))
P <- 24
A <- 10
y <- A*sin(2*pi*t/P)+20
par(mfrow=c(2,2))
plot(y,type = "l",xlab = "Time (hours)",ylab = "Amplitude",main = "Time series")
acf(y,main = "Autocorrelation",xlab = "Lag (hours)", ylab = "ACF")
spectrum(y,method = "ar",main = "Spectral density function",
xlab = "Frequency (cycles per hour)",ylab = "Spectrum")
require(biwavelet)
t1 <- cbind(t, y)
wt.t1=wt(t1)
plot(wt.t1, plot.cb=FALSE, plot.phase=FALSE,main = "Continuous wavelet transform",
ylab = "Period (hours)",xlab = "Time (hours)")
Which generates the following figure (not shown): a 2×2 grid containing the time series, autocorrelation, spectral density, and wavelet transform panels.
Most of these panels look sufficient for me to include in my report. However, the plot showing the autocorrelation needs to be improved. This looks much better by using ggplot:
require(ggplot2)
acz <- acf(y, plot=F)
acd <- data.frame(lag=acz$lag, acf=acz$acf)
ggplot(acd, aes(lag, acf)) + geom_area(fill="grey") +
geom_hline(yintercept=c(0.05, -0.05), linetype="dashed") +
theme_bw()
(Figure not shown: the ggplot version of the autocorrelation plot.)
However, seeing as ggplot is not a base graphic, we cannot combine ggplot with layout or par(mfrow). How could I replace the autocorrelation plot generated from the base graphics with the one generated by ggplot? I know I can use grid.arrange if all of my figures were made with ggplot but how do I do this if only one of the plots are generated in ggplot?
it might be almost as easy, and look more consistent, to use polygon with the output of acf() to construct a base-graphics plot that resembles the ggplot one. – Ben Bolker Jan 2 '13 at 15:18
Thanks for our response. This question is really aimed at learning how to combine ggplot and base graphics in a figure window, I realize that there may be more efficient ways of generating the figure shown, but for future purposes I would like to learn the method specified. – KatyB Jan 2 '13 at 15:21
check out the gridBase package ... – Ben Bolker Jan 2 '13 at 15:24
2 Answers
Using the gridBase package, you can do it just by adding 2 lines. If you want to do fancy plots with grid, you just need to understand and master viewports, which are really the basic object of the grid package.
vps <- baseViewports()
pushViewport(vps$figure) ## I am in the space of the autocorrelation plot
The baseViewports() function returns a list of three grid viewports. Here I use the figure viewport, a viewport corresponding to the figure region of the current plot.
Here is how the final solution looks (figure not shown):
library(gridBase)
par(mfrow=c(2, 2))
plot(y,type = "l",xlab = "Time (hours)",ylab = "Amplitude",main = "Time series")
plot(wt.t1, plot.cb=FALSE, plot.phase=FALSE,main = "Continuous wavelet transform",
ylab = "Period (hours)",xlab = "Time (hours)")
spectrum(y,method = "ar",main = "Spectral density function",
xlab = "Frequency (cycles per hour)",ylab = "Spectrum")
## the last one is the current plot
plot.new() ## suggested by @Josh
vps <- baseViewports()
pushViewport(vps$figure) ## I am in the space of the autocorrelation plot
vp1 <- plotViewport(c(1.8,1,0,1)) ## create a new vp with margins; you can play with these values
require(ggplot2)
acz <- acf(y, plot=F)
acd <- data.frame(lag=acz$lag, acf=acz$acf)
p <- ggplot(acd, aes(lag, acf)) + geom_area(fill="grey") +
geom_hline(yintercept=c(0.05, -0.05), linetype="dashed") +
theme_bw()+labs(title= "Autocorrelation\n")+
## some setting in the title to get something near to the other plots
theme(plot.title = element_text(size = rel(1.4),face ='bold'))
print(p,vp = vp1) ## suggested by @bpatiste
Yes I have tried this, the problem with this is that the plot generated with ggplot is much bigger than the other panels (as you can see above). Is there a method for changing this? – KatyB Jan 2 '13 at 16:31
@Kate I updated my answer. – agstudy Jan 2 '13 at 16:40
+1 Very nice. If you replaced the call to acf(...) with a call to plot.new(), you'd avoid having to call grid.rect() to 'white-out' the acf plot. – Josh O'Brien Jan 2 '13 at 17:42
Just b/c I came back and saw that you'd added this much simpler and cleverer answer. – Josh O'Brien Jan 2 '13 at 17:51
@JoshO'Brien Thanks a lot! – agstudy Jan 2 '13 at 17:56
You can use the print command with a grob and viewport.
First plot your base graphics, then add the ggplot:
library(grid)
# Let's say that P is your plot
P <- ggplot(acd, # etc... )
# create an appropriate viewport. Modify the dimensions and coordinates as needed
vp.BottomRight <- viewport(height=unit(.5, "npc"), width=unit(0.5, "npc"),
just=c("left","top"),
y=0.5, x=0.5)
# plot your base graphics
par(mfrow=c(2,2))
plot(y,type #etc .... )
# plot the ggplot using the print command
print(P, vp=vp.BottomRight)
Hello Ricardo. Do you know how to control the widths of the plots with your method ? For instance I want a base graphic and a ggplot2 graphic side by side, but with a larger width for the base graphic. – Stéphane Laurent Jan 16 '13 at 11:27
Hi Stéphane, you can accomplish this adjust the settings in the viewport() line. Specifically, you would want to adjust width and the y value, experimenting with different values until you get your desired results. – Ricardo Saporta Jan 18 '13 at 6:01
Thank you Ricardo. Actually I have open a question here stackoverflow.com/questions/14358526/… – Stéphane Laurent Jan 18 '13 at 8:12
What Is DNS Leak And How To Prevent It?
You already know what DNS means, but let me tell you something about it. DNS is the Domain Name System, which works like an internet phonebook mapping names to IP addresses. People take every precaution they can to search safely and privately, but for various reasons still fail to get real anonymity. Here we will look at some of the causes of DNS leaks and ways to prevent them.
DNS Leak
To understand DNS leaks, you need to be familiar with a few terms: ISP, IP address, URL. Whenever you visit a website, your browser asks your ISP's DNS server to translate the domain name into an IP address so the site can send you the content you were looking for. Since all of this happens between your ISP's DNS server and the website, your ISP ends up with a record of everything you search for. This is where DNS leaks happen, because any third party with access to those records can take advantage of them.
How can you prevent DNS Leaks?
Here are two basic methods you can use to prevent a data breach. Both are easy to set up; read about them and choose the one that suits you.
Change the DNS servers manually
You can manually change your DNS servers so that lookups no longer go through your ISP's resolvers. Use a public DNS service such as Google Public DNS, Comodo Secure DNS, or OpenDNS, and replace your ISP's servers with the alternative ones. This keeps your ISP's resolvers from logging which domains you look up.
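On a Linux system, for example, the manual switch can be as simple as pointing the resolver configuration at public DNS servers. The sketch below uses Google Public DNS addresses; note that many distributions manage this file automatically, so adapt the approach to your setup:

```
# /etc/resolv.conf -- use public resolvers instead of the ISP's defaults
nameserver 8.8.8.8
nameserver 8.8.4.4
```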
Use VPN for DNS protection
Using a VPN is the safest way to prevent DNS leaks. A VPN acts as an intermediary between the site and the ISP and gives the user the best privacy: neither the ISP nor the website can see your original credentials, and the VPN hides your location and IP address from everyone. You can pick any of the free VPN service providers and use a free VPN for better protection and safety.
To check for DNS leakage, you can search for "what is my IP" online or use any of the sites that offer an IP-leak check. If it turns out that others can see your original IP, use one of the methods above for more protected browsing.
How to Configure a MikroTik Hotspot with Winbox
Basic MikroTik Hotspot Setup. MikroTik routers have many features, and one of the most popular and widely used is the Hotspot. We often come across wifi signals that are password-protected, so to access the wifi you have to know the password first. A Hotspot works differently: most wifi hotspots are not password-protected at the network level, so every user can connect and is then redirected to a login page in the web browser. Each user can log in with a different username and password. This is the method we often find on campuses, in wifi cafés, schools, offices, and other public areas.
In fact, a hotspot can be applied not only to wireless networks but also to wired ones. The advantage of a Hotspot is that we can configure a network that can only be used with specific usernames and passwords. We can also manage these users: for example, set the total hotspot usage time per user, limit the amount of data each user can download, control which content users can access, and so on.
Hotspot is a feature that combines several services available in MikroTik, including:
• A DHCP server, used to hand out IP addresses to users automatically
• Firewall NAT, to translate user IPs into addresses that are routable on the internet
• Firewall filters, to block users who have not logged in yet
• A proxy, to serve the login page
• and so on
Fortunately, we don't have to create these services manually. The steps are as follows:
Open the menu IP > Hotspot > Hotspot Setup.
Step-by-Step Hotspot Setup in Winbox
After you press the Hotspot Setup button, the Hotspot wizard guides you through the configuration by showing a dialog box at each step.
In the first step, we are asked to choose the interface on which the Hotspot will be enabled. In this case, the Hotspot is enabled on wlan1, which we have already set up as an access point (ap-bridge). Then click Next.
If wlan1 already has an IP address, this second step is automatically filled in with that address. If no IP is assigned yet, we can set one in this step. Then click Next.
In the third step, define the range of IP addresses that will be handed out to users (the DHCP server). By default, the router automatically proposes a range matching the prefix/subnet of the IP on the interface, but we can change it if needed. Then click Next.
The next step is to specify an SSL certificate if we want to use HTTPS for the login page. If we don't have an SSL certificate, choose none, then click Next.
If needed, a dedicated SMTP server for the hotspot can be specified, so that every SMTP request from clients is redirected to the server we define. Since no SMTP server is provided here, we leave the default IP 0.0.0.0. Then click Next.
In this step, we set the DNS server address. You can fill in the DNS servers provided by your ISP or an open DNS service. As an example, we use Google's DNS servers. Then click Next.
Next we are asked to enter a DNS name for the local hotspot server. If it is set, every user who has not logged in yet and tries to access the internet will have their browser redirected to this login page. The DNS name should use a proper FQDN format. If it is left empty, the login page will use the IP address of wlan1 as its URL. In this case, the DNS name is set to "hotspot.mikrotik.co.id". Then click Next.
In the last step, define a username and password to log into your hotspot network. This is the username we will use to test our hotspot.
At this point, clicking Next brings up a message stating that the Hotspot setup is complete.
Next, we try connecting a laptop to the wifi hotspot we just created. Open a browser and visit any website (make sure it is a site served over plain http, because the MikroTik hotspot does not yet support redirecting sites that use https), and you will be redirected to the hotspot login page.
To test it, log in with the username and password you created in the previous step. If the login succeeds, the requested web page will open along with a popup showing the Hotspot status page.
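For reference, roughly the same result can also be configured from the RouterOS terminal instead of the wizard. The commands below are a sketch only: the interface name, address ranges, DNS name, and user credentials are illustrative, and exact syntax can vary between RouterOS versions:

```
# Illustrative RouterOS commands -- adapt addresses, names, and interface
/ip address add address=10.5.50.1/24 interface=wlan1
/ip pool add name=hs-pool ranges=10.5.50.2-10.5.50.254
/ip dhcp-server add name=hs-dhcp interface=wlan1 address-pool=hs-pool disabled=no
/ip hotspot profile add name=hs-profile hotspot-address=10.5.50.1 dns-name=hotspot.mikrotik.co.id
/ip hotspot add name=hotspot1 interface=wlan1 address-pool=hs-pool profile=hs-profile
/ip hotspot user add name=admin password=admin123
```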
I tried to use the json lexer in a *.rst document with sphinx. This one works:
.. code-block:: console
//Comment
key1: value,
key2: value2,
But when I try to use json like this:
.. code-block:: json
//Comment
key1: value,
key2: value2,
I get an error
WARNING: Could not lex literal_block as "json". Highlighting skipped.
I checked with pygmentize -L lexers | grep json, and it confirms that the json lexer is available. What is the problem here?
I was also wondering about this warning!
I looked into it and it turned out that the JSON lexer was available, but my code wasn't actually valid JSON, and therefore Sphinx disabled the lexer for this code snippet.
If you change your code block into valid JSON, highlighting should work again!
Note that (IIRC):
• JSON doesn't allow comments
• strings have to be quoted
• dictionary values have to be enclosed in { and }
• a comma after the final key-value-pair is not allowed
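Putting those rules together, the snippet from the question becomes valid JSON that Pygments will lex without a warning — the comment has to go, keys and string values get quotes, the object is wrapped in braces, and the trailing comma is dropped:

```rst
.. code-block:: json

   {
       "key1": "value",
       "key2": "value2"
   }
```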
Note that if a lexer is not available, the warning message is different:
WARNING: Pygments lexer name 'xyz' is not known
Cisco IOS XE v0.0.1 published on Friday, Sep 22, 2023 by lbrlabs
iosxe.CryptoIkev2Policy
This resource can manage the Crypto IKEv2 Policy configuration.
Example Usage
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Iosxe = Lbrlabs.PulumiPackage.Iosxe;
return await Deployment.RunAsync(() =>
{
var example = new Iosxe.CryptoIkev2Policy("example", new()
{
MatchAddressLocalIps = new[]
{
"1.2.3.4",
},
MatchFvrfAny = true,
Proposals = new[]
{
new Iosxe.Inputs.CryptoIkev2PolicyProposalArgs
{
Proposals = "proposal123",
},
},
});
});
package main
import (
"github.com/lbrlabs/pulumi-iosxe/sdk/go/iosxe"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
_, err := iosxe.NewCryptoIkev2Policy(ctx, "example", &iosxe.CryptoIkev2PolicyArgs{
MatchAddressLocalIps: pulumi.StringArray{
pulumi.String("1.2.3.4"),
},
MatchFvrfAny: pulumi.Bool(true),
Proposals: iosxe.CryptoIkev2PolicyProposalArray{
&iosxe.CryptoIkev2PolicyProposalArgs{
Proposals: pulumi.String("proposal123"),
},
},
})
if err != nil {
return err
}
return nil
})
}
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.iosxe.CryptoIkev2Policy;
import com.pulumi.iosxe.CryptoIkev2PolicyArgs;
import com.pulumi.iosxe.inputs.CryptoIkev2PolicyProposalArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
var example = new CryptoIkev2Policy("example", CryptoIkev2PolicyArgs.builder()
.matchAddressLocalIps("1.2.3.4")
.matchFvrfAny(true)
.proposals(CryptoIkev2PolicyProposalArgs.builder()
.proposals("proposal123")
.build())
.build());
}
}
import pulumi
import lbrlabs_pulumi_iosxe as iosxe
example = iosxe.CryptoIkev2Policy("example",
match_address_local_ips=["1.2.3.4"],
match_fvrf_any=True,
proposals=[iosxe.CryptoIkev2PolicyProposalArgs(
proposals="proposal123",
)])
import * as pulumi from "@pulumi/pulumi";
import * as iosxe from "@lbrlabs/pulumi-iosxe";
const example = new iosxe.CryptoIkev2Policy("example", {
matchAddressLocalIps: ["1.2.3.4"],
matchFvrfAny: true,
proposals: [{
proposals: "proposal123",
}],
});
resources:
example:
type: iosxe:CryptoIkev2Policy
properties:
matchAddressLocalIps:
- 1.2.3.4
matchFvrfAny: true
proposals:
- proposals: proposal123
Create CryptoIkev2Policy Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new CryptoIkev2Policy(name: string, args: CryptoIkev2PolicyArgs, opts?: CustomResourceOptions);
@overload
def CryptoIkev2Policy(resource_name: str,
args: CryptoIkev2PolicyArgs,
opts: Optional[ResourceOptions] = None)
@overload
def CryptoIkev2Policy(resource_name: str,
opts: Optional[ResourceOptions] = None,
proposals: Optional[Sequence[CryptoIkev2PolicyProposalArgs]] = None,
device: Optional[str] = None,
match_address_local_ips: Optional[Sequence[str]] = None,
match_fvrf: Optional[str] = None,
match_fvrf_any: Optional[bool] = None,
match_inbound_only: Optional[bool] = None,
name: Optional[str] = None)
func NewCryptoIkev2Policy(ctx *Context, name string, args CryptoIkev2PolicyArgs, opts ...ResourceOption) (*CryptoIkev2Policy, error)
public CryptoIkev2Policy(string name, CryptoIkev2PolicyArgs args, CustomResourceOptions? opts = null)
public CryptoIkev2Policy(String name, CryptoIkev2PolicyArgs args)
public CryptoIkev2Policy(String name, CryptoIkev2PolicyArgs args, CustomResourceOptions options)
type: iosxe:CryptoIkev2Policy
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
name string
The unique name of the resource.
args CryptoIkev2PolicyArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name str
The unique name of the resource.
args CryptoIkev2PolicyArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args CryptoIkev2PolicyArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name string
The unique name of the resource.
args CryptoIkev2PolicyArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name String
The unique name of the resource.
args CryptoIkev2PolicyArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.
Example
The following reference example uses placeholder values for all input properties.
var cryptoIkev2PolicyResource = new Iosxe.CryptoIkev2Policy("cryptoIkev2PolicyResource", new()
{
Proposals = new[]
{
new Iosxe.Inputs.CryptoIkev2PolicyProposalArgs
{
Proposals = "string",
},
},
Device = "string",
MatchAddressLocalIps = new[]
{
"string",
},
MatchFvrf = "string",
MatchFvrfAny = false,
MatchInboundOnly = false,
Name = "string",
});
example, err := iosxe.NewCryptoIkev2Policy(ctx, "cryptoIkev2PolicyResource", &iosxe.CryptoIkev2PolicyArgs{
Proposals: iosxe.CryptoIkev2PolicyProposalArray{
&iosxe.CryptoIkev2PolicyProposalArgs{
Proposals: pulumi.String("string"),
},
},
Device: pulumi.String("string"),
MatchAddressLocalIps: pulumi.StringArray{
pulumi.String("string"),
},
MatchFvrf: pulumi.String("string"),
MatchFvrfAny: pulumi.Bool(false),
MatchInboundOnly: pulumi.Bool(false),
Name: pulumi.String("string"),
})
var cryptoIkev2PolicyResource = new CryptoIkev2Policy("cryptoIkev2PolicyResource", CryptoIkev2PolicyArgs.builder()
.proposals(CryptoIkev2PolicyProposalArgs.builder()
.proposals("string")
.build())
.device("string")
.matchAddressLocalIps("string")
.matchFvrf("string")
.matchFvrfAny(false)
.matchInboundOnly(false)
.name("string")
.build());
crypto_ikev2_policy_resource = iosxe.CryptoIkev2Policy("cryptoIkev2PolicyResource",
proposals=[iosxe.CryptoIkev2PolicyProposalArgs(
proposals="string",
)],
device="string",
match_address_local_ips=["string"],
match_fvrf="string",
match_fvrf_any=False,
match_inbound_only=False,
name="string")
const cryptoIkev2PolicyResource = new iosxe.CryptoIkev2Policy("cryptoIkev2PolicyResource", {
proposals: [{
proposals: "string",
}],
device: "string",
matchAddressLocalIps: ["string"],
matchFvrf: "string",
matchFvrfAny: false,
matchInboundOnly: false,
name: "string",
});
type: iosxe:CryptoIkev2Policy
properties:
device: string
matchAddressLocalIps:
- string
matchFvrf: string
matchFvrfAny: false
matchInboundOnly: false
name: string
proposals:
- proposals: string
CryptoIkev2Policy Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The CryptoIkev2Policy resource accepts the following input properties:
Proposals List<Lbrlabs.PulumiPackage.Iosxe.Inputs.CryptoIkev2PolicyProposal>
Specify Proposal
Device string
A device name from the provider configuration.
MatchAddressLocalIps List<string>
Local address
MatchFvrf string
MatchFvrfAny bool
Any fvrf
MatchInboundOnly bool
inbound only for controller
Name string
Proposals []CryptoIkev2PolicyProposalArgs
Specify Proposal
Device string
A device name from the provider configuration.
MatchAddressLocalIps []string
Local address
MatchFvrf string
MatchFvrfAny bool
Any fvrf
MatchInboundOnly bool
inbound only for controller
Name string
proposals List<CryptoIkev2PolicyProposal>
Specify Proposal
device String
A device name from the provider configuration.
matchAddressLocalIps List<String>
Local address
matchFvrf String
matchFvrfAny Boolean
Any fvrf
matchInboundOnly Boolean
inbound only for controller
name String
proposals CryptoIkev2PolicyProposal[]
Specify Proposal
device string
A device name from the provider configuration.
matchAddressLocalIps string[]
Local address
matchFvrf string
matchFvrfAny boolean
Any fvrf
matchInboundOnly boolean
inbound only for controller
name string
proposals Sequence[CryptoIkev2PolicyProposalArgs]
Specify Proposal
device str
A device name from the provider configuration.
match_address_local_ips Sequence[str]
Local address
match_fvrf str
match_fvrf_any bool
Any fvrf
match_inbound_only bool
inbound only for controller
name str
proposals List<Property Map>
Specify Proposal
device String
A device name from the provider configuration.
matchAddressLocalIps List<String>
Local address
matchFvrf String
matchFvrfAny Boolean
Any fvrf
matchInboundOnly Boolean
inbound only for controller
name String
Outputs
All input properties are implicitly available as output properties. Additionally, the CryptoIkev2Policy resource produces the following output properties:
Id string
The provider-assigned unique ID for this managed resource.
Id string
The provider-assigned unique ID for this managed resource.
id String
The provider-assigned unique ID for this managed resource.
id string
The provider-assigned unique ID for this managed resource.
id str
The provider-assigned unique ID for this managed resource.
id String
The provider-assigned unique ID for this managed resource.
Look up Existing CryptoIkev2Policy Resource
Get an existing CryptoIkev2Policy resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: CryptoIkev2PolicyState, opts?: CustomResourceOptions): CryptoIkev2Policy
@staticmethod
def get(resource_name: str,
id: str,
opts: Optional[ResourceOptions] = None,
device: Optional[str] = None,
match_address_local_ips: Optional[Sequence[str]] = None,
match_fvrf: Optional[str] = None,
match_fvrf_any: Optional[bool] = None,
match_inbound_only: Optional[bool] = None,
name: Optional[str] = None,
proposals: Optional[Sequence[CryptoIkev2PolicyProposalArgs]] = None) -> CryptoIkev2Policy
func GetCryptoIkev2Policy(ctx *Context, name string, id IDInput, state *CryptoIkev2PolicyState, opts ...ResourceOption) (*CryptoIkev2Policy, error)
public static CryptoIkev2Policy Get(string name, Input<string> id, CryptoIkev2PolicyState? state, CustomResourceOptions? opts = null)
public static CryptoIkev2Policy get(String name, Output<String> id, CryptoIkev2PolicyState state, CustomResourceOptions options)
Resource lookup is not supported in YAML
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
resource_name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
The following state arguments are supported:
Device string
A device name from the provider configuration.
MatchAddressLocalIps List<string>
Local address
MatchFvrf string
MatchFvrfAny bool
Any fvrf
MatchInboundOnly bool
inbound only for controller
Name string
Proposals List<Lbrlabs.PulumiPackage.Iosxe.Inputs.CryptoIkev2PolicyProposal>
Specify Proposal
Device string
A device name from the provider configuration.
MatchAddressLocalIps []string
Local address
MatchFvrf string
MatchFvrfAny bool
Any fvrf
MatchInboundOnly bool
inbound only for controller
Name string
Proposals []CryptoIkev2PolicyProposalArgs
Specify Proposal
device String
A device name from the provider configuration.
matchAddressLocalIps List<String>
Local address
matchFvrf String
matchFvrfAny Boolean
Any fvrf
matchInboundOnly Boolean
inbound only for controller
name String
proposals List<CryptoIkev2PolicyProposal>
Specify Proposal
device string
A device name from the provider configuration.
matchAddressLocalIps string[]
Local address
matchFvrf string
matchFvrfAny boolean
Any fvrf
matchInboundOnly boolean
inbound only for controller
name string
proposals CryptoIkev2PolicyProposal[]
Specify Proposal
device str
A device name from the provider configuration.
match_address_local_ips Sequence[str]
Local address
match_fvrf str
match_fvrf_any bool
Any fvrf
match_inbound_only bool
inbound only for controller
name str
proposals Sequence[CryptoIkev2PolicyProposalArgs]
Specify Proposal
device String
A device name from the provider configuration.
matchAddressLocalIps List<String>
Local address
matchFvrf String
matchFvrfAny Boolean
Any fvrf
matchInboundOnly Boolean
inbound only for controller
name String
proposals List<Property Map>
Specify Proposal
Supporting Types
CryptoIkev2PolicyProposal, CryptoIkev2PolicyProposalArgs
Proposals string
Specify Proposal
Proposals string
Specify Proposal
proposals String
Specify Proposal
proposals string
Specify Proposal
proposals str
Specify Proposal
proposals String
Specify Proposal
Import
$ pulumi import iosxe:index/cryptoIkev2Policy:CryptoIkev2Policy example "Cisco-IOS-XE-native:native/crypto/Cisco-IOS-XE-crypto:ikev2/policy=policy1"
To learn more about importing existing cloud resources, see Importing resources.
Package Details
Repository
iosxe lbrlabs/pulumi-iosxe
License
Apache-2.0
Notes
This Pulumi package is based on the iosxe Terraform Provider.
asked by ian, 9 months ago
R Question
ggplot2: separate color scale per facet
Intuitively I'm looking for something like:
facet_(scales="free_color")
I do something like
p <- ggplot(mpg, aes(year, displ, color=model)) + facet_wrap(~manufacturer)
p + geom_jitter()
That is: plot 2d measurements from individuals (model) belonging to different species (manufacturer), faceted by species, indicating the individual by color.
The problem is that all individuals share the same color scale - so that the points in a facet have very similar colors.
Using the group aesthetic with geom_line would solve the problem, but lines tell a different story than dots.
Another obvious solution would be to drop the faceting and draw a separate plot for each subset. (If this should be the only solution: are there any quick, smart or proven ways to do that?)
Answer
I'm not sure that this is an available option when you're colouring by a factor. However, a quick way to produce the individual plots would be something like this:
library(plyr)     # for d_ply
library(ggplot2)

d_ply(mpg, .(manufacturer), function(df) {
  jpeg(paste(df$manufacturer[[1]], ".jpeg", sep=""))
  plots <- ggplot(df, aes(year, displ, color=factor(model))) + geom_jitter()
  print(plots)
  dev.off()
})
Related Answers: Different legends and fill colours for facetted ggplot?
How to Write Modern JavaScript Code
Sébastien Castiel, a JavaScript developer from France, shared his thoughts on what your JS code should look like in 2017. The techrocks.ru editorial team publishes an adapted translation of the article.
Remember those distant days when JavaScript was a language used only to "liven up" the pages of a website? Those days are in the past: every programming language evolves, and consequently the way we use it has to change too. When I look at code I wrote 2 years ago, I find many places I want to rewrite. If you feel the same way, this article is for you.
I have put together a list of good practices that will help make your JavaScript code simple, maintainable, and modern.
Use a linter to format your code
Using a linter is considered good practice today. A linter checks whether your code follows a single, consistent style, which keeps all of your .js files uniform. Linters are especially useful in teams, where they keep the code style consistent across every developer.
Moreover, a linter can fix your code where possible. ESLint, for example, does this very well (use the --fix option), and it integrates nicely with all popular IDEs.
As an alternative you can use Prettier, which, unlike ESLint, focuses more on formatting the code. Overall, though, the difference between them is small.
The next tip will help you choose the rules your linter should use:
Используйте современные правила для вашего линтера
Если вы не знаете, какой набор правил использовать для вашего кода, берите StandardJS. Это очень строгий линтер, который не позволит вам отступить от правил, но каждое из них одобрено сообществом JS разработчиков. Вот несколько примеров:
• Используйте 2 пробела для табуляции
• Не используйте точки с запятой (сначала мне казалось это очень странным, но через несколько дней я привык и уже просто не могу писать по-старому)
• Пробелы после ключевых слов (например if) и фигурных скобок
• И многие другие
StandardJS может устанавливаться в виде отдельного Node модуля. Если вы хотите использовать его уже на существующем проекте с большой кодовой базой и отключить некоторые правила — используйте ESLint predefined config. Например, я отключил правила no-mixed-operators и import/no-webpack-loader-syntax.
Используйте новые фишки ES2015+
Если вы разрабатываете на JavaScript, то, скорее всего, слышали об изменениях в ES2015+ (или ES6, ES7…). Вот несколько, без которых я больше не могу жить 🙂
• Arrow functions: новый синтаксис для функций в JS ( x => x*2 )
• Классы: перестаньте использовать прототипы, классы гораздо круче 🙂
• Новые методы для работы с массивами и объектами
function doSomething() {
const a = doSomethingElse()
const b = doSomethingWithA(a)
const otherResults = { c: '?', d: '?' }
return { a, b, ...otherResults } // equivalent to { a: a, b: b }
}
const { a, c, ...rest } = doSomething() // Also works with arrays!
// `rest` looks like { b: ..., d: '?' }
• Cleaner syntax for promises (async/await)
// Please try to write the same code with classic promises ;)
async function doSomething() {
const a = await getValueForA()
const b = await getValueForBFromA(a)
const [c, d] = await Promise.all([
// parallel execution
getValueForC(), getValueForDFromB(b)
])
const total = await calculateTotal(a, b, c, d)
return total / 1000
}
Use functional programming

A topic that sparks plenty of debate these days. Nevertheless, functional programming is gaining popularity, and not only in JavaScript. Why? It makes code more predictable, safer, and far more maintainable. Here are a few simple tips:

First, stop using for loops. In most cases you don't need them. For example:
const arr = [{ name: 'first', value: 13 }, { name: 'second', value: 7 }]
// Instead of:
const res = {}
for (let i = 0; i < arr.length; i++) {
const calculatedValue = arr[i].value * 10
if (calculatedValue > 100) {
res[arr[i].name] = calculatedValue
}
}
// Prefer:
const res = arr
.map(elem => ({ name: elem.name, calculatedValue: elem.value * 10 }))
.filter(elem => elem.calculatedValue > 100)
.reduce((acc, elem) => ({
[elem.name]: elem.calculatedValue,
...acc
}), {})
Yes, I admit this is a contrived case, but here functional programming can simplify the code significantly:
const enrichElementWithCalculatedValue =
elem => ({ name: elem.name, calculatedValue: elem.value * 10 })
const filterElementsByValue = value =>
elem => elem.calculatedValue > value
const aggregateElementInObject = (acc, elem) => ({
[elem.name]: elem.calculatedValue,
...acc
})
const res = arr
.map(enrichElementWithCalculatedValue)
.filter(filterElementsByValue(100))
.reduce(aggregateElementInObject, {})
We defined three functions whose names say exactly what they do.

Second, create local functions (even inside existing ones): by choosing good names for them, you can document your code without comments!

Note that these three local functions do not change the context they run in: they do not modify external variables and do not call external services. In functional programming such functions are called pure functions. Their advantages:

They are easy to test, because the same input always produces the same result, no matter how many times the function is called;

They are guaranteed to return the same result regardless of the application state;

They do not change the application state.

So my third tip: use pure functions as often as possible!

A few closing tips

• Get used to working with asynchronous code; use promises more often and take a look at observables in RxJS (here is a great tutorial on functional reactive programming)
• Write tests! It sounds trivial, but I know plenty of projects whose code is not covered by tests. After all, testing JavaScript is not as hard as it seems.
• Use the latest language features: for example, stop writing arr.indexOf(elem) !== -1 and replace it with arr.includes(elem).
• Read more technical articles: the JavaScript subreddit is a great place to learn a lot about the language.

And most importantly, always refactor! Improve your old code. Find the time to replace var with const and to rewrite functions using the new syntax. This simplifies the code, and working with simple, clear code is always more pleasant 🙂
aurelius2016 - 6 months ago
HTML Question
Why does ZeroClipboard only copy first word of sentence?
In my Rails app I'm trying to create a set of buttons that can be clicked to copy a sentence (the title of a tip); the tips are stored in @all_tips. At the moment the copying works, but only for the first word (i.e. it stops as soon as there's a space). I'm not sure why and haven't been able to find any other guidance.
<% (@all_tips).each do |n| %>
<button class="btn btn-success copy_to_clipboard" data-clipboard-text= <%= n.title %> > <p> <%= n.title %></p> </button>
<% end %>
<script>
jQuery.fn.copyToClipBoard = function() {
var clip = new ZeroClipboard($(".copy_to_clipboard"));
}
$(function() {
$('.copy_to_clipboard').copyToClipBoard();
});
</script>
Any help would be greatly appreciated, thanks!
Answer
Add quotes around title like this:
data-clipboard-text="<%= n.title %>"
Without the quote, if the title is for example: My awesome title
The tag will look like this:
<button class="btn btn-success copy_to_clipboard" data-clipboard-text=My awesome title >
So the data-clipboard-text value is only "My", because the lack of quote consider the next space to be the end of the parameter value. With the quotes, the tag will look this:
<button class="btn btn-success copy_to_clipboard" data-clipboard-text="My awesome title" >
The data-clipboard-text value will then be "My awesome title".
How to connect with a MessageWebSocket (HTML)
[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]
This topic will show you how to send and receive whole messages of data using MessageWebSocket in a Windows Runtime app.
The MessageWebSocket class provides a message-based abstraction of the WebSocket protocol. When using MessageWebSocket, the entire WebSocket message is read or written in a single operation. In contrast, the StreamWebSocket allows sections of a message to be read with each read operation, rather than requiring the entire message to be read in a single operation.
A MessageWebSocket is typically used in scenarios where messages are not very large. Both UTF-8 and binary messages are supported. For UTF-8 messages, MessageWebSocket must be used, since StreamWebSocket only supports binary messages.
Prerequisites
The following examples use JavaScript and are based on the WebSocket sample. For general help creating a Windows Runtime app using JavaScript, see Create your first Windows Runtime app using JavaScript. Additionally, JavaScript promises are used in this topic to complete asynchronous operations. For more information on this programming pattern, see Asynchronous programming in JavaScript using promises.
To ensure your Windows Runtime app is network ready, you must set any network capabilities that are needed in the project Package.appxmanifest file. If your app needs to connect as a client to remote services on the Internet, then the Internet (Client) capability is needed. If the app needs to connect as a client to remote services on a home network or work network, then the Home/Work Networking capability is needed.
Note: On Windows Phone, there is only one network capability (Internet (Client & Server)), which enables all network access for the app.
For more information, see How to set network capabilities.
Use a MessageWebSocket to send data
The code in this section creates a new MessageWebSocket, connects to a WebSocket server, and sends data to the server. Once a successful connection is established, the app waits for the MessageWebSocket.MessageReceived event to be invoked, indicating that data was received.
Note You may wish to display messages to the user or log that certain events have happened (for example, when a connection is made or when an error occurs).
• Open the js folder. Open your .js file and add the following code.
function startSend() {
if (!messageWebSocket) {
var webSocket = new Windows.Networking.Sockets.MessageWebSocket();
// MessageWebSocket supports both utf8 and binary messages.
// When utf8 is specified as the messageType, then the developer
// promises to only send utf8-encoded data.
webSocket.control.messageType = Windows.Networking.Sockets.SocketMessageType.utf8;
// Set up callbacks
webSocket.onmessagereceived = onMessageReceived;
webSocket.onclosed = onClosed;
var serverAddress = new Windows.Foundation.Uri(document.getElementById("serverAddress").value);
try {
webSocket.connectAsync(serverAddress).done(function () {
messageWebSocket = webSocket;
// The default DataWriter encoding is utf8.
messageWriter = new Windows.Storage.Streams.DataWriter(webSocket.outputStream);
messageWriter.writeString(document.getElementById("inputField").value);
messageWriter.storeAsync().done("", sendError);
}, function (error) {
// The connection failed; add your own code to log or display
// the error, or take a specific action.
});
} catch (error) {
// An error occurred while trying to connect; add your own code to
// log or display the error, or take a specific action.
}
}
else {
// The connection already exists; go ahead and send the message.
messageWriter.writeString(document.getElementById("inputField").value);
messageWriter.storeAsync().done("", sendError);
}
}
Register your callback for the MessageWebSocket.MessageReceived event
When the MessageWebSocket.MessageReceived event occurs, the registered callback is called and receives data from MessageWebSocketMessageReceivedEventArgs.
• Add the following code to your .js file.
function onMessageReceived(args) {
// The incoming message is already buffered.
var dataReader = args.getDataReader();
// Use the dataReader to read data from the received message
}
Register your callback for the MessageWebSocket.Closed event
When the MessageWebSocket.Closed event occurs, the registered callback is called and receives data from WebSocketClosedEventArgs to close the connection.
• Add the following code to your .js file.
function onClosed(args) {
// You can add code to log or display the code and reason
// for the closure (stored in args.code and args.reason)
if (messageWebSocket) {
messageWebSocket.close();
}
messageWebSocket = null;
}
Summary and next steps
In this tutorial, we reviewed how to connect to a WebSocket server and how to send and receive data using a MessageWebSocket.
For a complete sample that demonstrates how to send and receive data with WebSockets, see the WebSocket sample.
Related topics
Other
Asynchronous programming in JavaScript using promises
Connecting with WebSockets
Create your first Windows Runtime app using JavaScript
How to connect with a StreamWebSocket
How to handle exceptions in network apps
How to set network capabilities
Reference
MessageWebSocket
Windows.Networking.Sockets
Samples
WebSocket sample
On Deadlocks in Java
What is a deadlock

When using multiple threads or processes, two or more units of execution (processes, threads, or coroutines) each hold some shared resources while waiting for resources held by the others before they can proceed. When two or more threads end up waiting for each other to release resources in this way, we call it a deadlock.

Let's look at a simple example:
public class DeadLockTest {
    public static void main(String[] args) {
        Object lock1 = new Object(); // lock 1
        Object lock2 = new Object(); // lock 2
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                // acquire lock 1 first
                synchronized (lock1) {
                    System.out.println("Thread 1: acquired lock 1!");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // then try to acquire lock 2
                    System.out.println("Thread 1: waiting for lock 2...");
                    synchronized (lock2) {
                        System.out.println("Thread 1: acquired lock 2!");
                    }
                }
            }
        });
        t1.start();
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                // acquire lock 2 first
                synchronized (lock2) {
                    System.out.println("Thread 2: acquired lock 2!");
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // then try to acquire lock 1
                    System.out.println("Thread 2: waiting for lock 1...");
                    synchronized (lock1) {
                        System.out.println("Thread 2: acquired lock 1!");
                    }
                }
            }
        });
        t2.start();
    }
}
Why deadlocks happen

A deadlock occurs only when all four of the following conditions hold at the same time:

Mutual exclusion: a thread holds its allocated resource exclusively; at any given moment a lock can be owned by only one thread. A resource already held by thread 1 cannot simultaneously be held by thread 2; if thread 2 requests a resource that thread 1 occupies, thread 2 can only wait until thread 1 releases it.

Hold and wait: thread 1 already holds resource 1 and then requests resource 2; resource 2 is held by thread 3, so thread 1 enters a waiting state, but while waiting for resource 2 it does not release the resource 1 it already holds.

No preemption: a resource a thread already holds cannot be taken away before the thread has finished using it. If thread 2 also wants this resource, it can only wait until thread 1 finishes with it and releases it.

Circular wait: when the deadlock occurs, the order in which the two threads acquire resources forms a cycle: thread 1 already holds resource 2 and requests resource 1, while thread 2 holds resource 1 and requests resource 2, forming a circular resource-wait graph.
How to diagnose deadlocks

1. jstack

Usage is as follows:

/opt/java8/bin/jstack
Usage:
    jstack [-l] <pid>
        (to connect to running process)
    jstack -F [-m] [-l] <pid>
        (to connect to a hung process)
    jstack [-m] [-l] <executable> <core>
        (to connect to a core file)
    jstack [-m] [-l] [server_id@]<remote server IP or hostname>
        (to connect to a remote debug server)
Options:
    -F  to force a thread dump. Use when jstack <pid> does not respond (process is hung)
    -m  to print both java and native frames (mixed mode)
    -l  long listing. Prints additional information about locks
    -h or -help to print this help message
A deadlock log looks like this:
"mythread2" #12 prio=5 os_prio=0 tid=0x0000000058ef7800 nid=0x1ab4 waiting on condition [0x0000000059f8f000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000d602d610> (a java.util.concurrent.lock
s.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInt
errupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(A
bstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Abstrac
tQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLo
ck.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at DeathLock$2.run(DeathLock.java:34)
Locked ownable synchronizers:
- <0x00000000d602d640> (a java.util.concurrent.locks.ReentrantLock$Nonfa
irSync)
"mythread1" #11 prio=5 os_prio=0 tid=0x0000000058ef7000 nid=0x3e68 waiting on condition [0x000000005947f000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000d602d640> (a java.util.concurrent.lock
s.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInt
errupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(A
bstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Abstrac
tQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLo
ck.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at DeathLock$1.run(DeathLock.java:22)
Locked ownable synchronizers:
- <0x00000000d602d610> (a java.util.concurrent.locks.ReentrantLock$Nonfa
irSync)
Found one Java-level deadlock:
=============================
"mythread2":
waiting for ownable synchronizer 0x00000000d602d610, (a java.util.concurrent.l
ocks.ReentrantLock$NonfairSync),
which is held by "mythread1"
"mythread1":
waiting for ownable synchronizer 0x00000000d602d640, (a java.util.concurrent.l
ocks.ReentrantLock$NonfairSync),
which is held by "mythread2"
Java stack information for the threads listed above:
===================================================
"mythread2":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000d602d610> (a java.util.concurrent.lock
s.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInt
errupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(A
bstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Abstrac
tQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLo
ck.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at DeathLock$2.run(DeathLock.java:34)
"mythread1":
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000d602d640> (a java.util.concurrent.lock
s.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInt
errupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(A
bstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(Abstrac
tQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLo
ck.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at DeathLock$1.run(DeathLock.java:22)
Found 1 deadlock.
From the log output you can locate the exact code that causes the deadlock.

2. jmc

jmc stands for Oracle Java Mission Control, a tool suite for managing, monitoring, profiling, and troubleshooting Java programs. It lives in the JDK's bin directory and is likewise started with a double-click.

After jmc opens, select the deadlocked process and right-click to start the JMX console; there you can see the deadlock and its details.

Other common tools include jconsole, jvisualvm, and so on; they are not covered here one by one, but feel free to explore them yourself.

How to avoid deadlocks

Deadlock analysis

Recall the four conditions for a deadlock: if any one of them is not satisfied, no deadlock can occur. Mutual exclusion and no-preemption are system properties that cannot be prevented, so the deadlock problem can only be solved by breaking the hold-and-wait condition or the circular-wait condition.

Ways to avoid deadlock

1. Acquire locks in a fixed order

That is, acquire locks in an agreed order, which prevents the circular-wait condition from arising and thereby avoids deadlock. Consider the circular-wait example:

Thread 1 first acquires lock A and then requests lock B, while thread 2, running concurrently, first acquires lock B and then requests lock A. Once both threads have taken their own resource (lock A and lock B respectively), each tries to acquire the other's lock, which creates a circular wait and finally a deadlock.

Now simply unify the order in which thread 1 and thread 2 take the locks: when the two threads run, both first acquire lock A and then lock B. Only one thread can succeed in acquiring lock A; the thread that fails waits for it. The thread holding lock A then goes on to acquire lock B; since no other thread holds or contends for lock B, it obtains it smoothly, executes its code, and releases both locks. The waiting thread can then acquire lock A successfully, and no deadlock occurs.
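The article's examples are Java, but the fixed-ordering idea is language-agnostic. Below is a minimal sketch using Python's threading module (the names lock_a, lock_b, and worker are made up for illustration): both threads take the locks in the same global order, so no circular wait can form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def worker(name):
    # Both threads take the locks in the SAME global order: A, then B.
    # Only one thread can win lock_a; the loser waits there and never
    # holds lock_b while waiting, so no circular wait can form.
    with lock_a:
        with lock_b:
            finished.append(name)

t1 = threading.Thread(target=worker, args=("thread-1",))
t2 = threading.Thread(target=worker, args=("thread-2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(finished)  # both threads complete; no deadlock
```

If each thread instead took the locks in opposite orders (as in the Java example above), the same program could hang forever.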
2. Open calls

In scenarios where cooperating objects deadlock, the main cause is that calling a method requires holding a lock while the method internally calls other methods that also take locks. If a method can be called without holding a lock, that call is known as an open call; synchronized blocks are best used only to protect operations that actually involve shared state.

3. Timed locks

Use an explicit Lock and acquire it with the tryLock() method. When the wait exceeds the time limit, tryLock() does not block forever but returns a failure, which effectively avoids deadlock.

Summary

Simply put, deadlocks arise when two or more threads executing in parallel compete for resources and end up waiting on each other.

A deadlock occurs only when mutual exclusion, hold-and-wait, no-preemption, and circular wait all hold at the same time.

So to avoid deadlocks, break one of these conditions; the most common method is ordered resource allocation, which breaks the circular-wait condition.
Reprinted from juejin.im/post/7017913954189967391
Question
4. Plot the decaying exponential function y(t) = 10 e^{-t/\tau} \sin \omega t, where the time constant \tau = 3 s and the angular frequency \omega = \pi rad/s, over the range 0 < t < 10 s. Include the plotted equation in the title of the plot, and label the x- and y-axes properly. (5 Points) (Code and plot)
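The question does not fix a language; assuming a Python toolchain is acceptable, here is a sketch with numpy and matplotlib (τ = 3 s and ω = π rad/s as read from the problem statement; the output file name is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend; remove for interactive use
import matplotlib.pyplot as plt

tau, omega = 3.0, np.pi          # time constant [s], angular frequency [rad/s]
t = np.linspace(0, 10, 1000)
y = 10 * np.exp(-t / tau) * np.sin(omega * t)

plt.plot(t, y)
plt.title(r"$y(t) = 10\,e^{-t/\tau}\sin(\omega t)$, $\tau=3$ s, $\omega=\pi$ rad/s")
plt.xlabel("t (s)")
plt.ylabel("y(t)")
plt.savefig("decaying_exponential.png")
```

The envelope 10 e^{-t/3} bounds the oscillation, so the curve decays from amplitude 10 toward zero over the 10-second window.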
Home » Questions » Computers [ Ask a new question ]
How do I define custom web.config sections with potential child elements and attributes for the properties?
"The web applications I develop often require co-dependent configuration settings and there are also settings that have to change as we move between each of our environments.
All our settings are currently simple key-value pairs but it would be useful to create custom config sections so that it is obvious when two values need to change together or when the settings need to change for an environment.
What's the best way to create custom config sections and are there any special considerations to make when retrieving the values?"
Asked by: Guest | Views: 127
Total answers/comments: 4
Guest [Entry]
Using attributes, child config sections and constraints

There is also the possibility to use attributes, which automatically takes care of the plumbing, as well as providing the ability to easily add constraints.

I here present an example from code I use myself in one of my sites. With a constraint I dictate the maximum amount of disk space any one user is allowed to use.

MailCenterConfiguration.cs:

namespace Ani {

    public sealed class MailCenterConfiguration : ConfigurationSection
    {
        [ConfigurationProperty("userDiskSpace", IsRequired = true)]
        [IntegerValidator(MinValue = 0, MaxValue = 1000000)]
        public int UserDiskSpace
        {
            get { return (int)base["userDiskSpace"]; }
            set { base["userDiskSpace"] = value; }
        }
    }
}

This is set up in web.config like so

<configSections>
    <!-- Mailcenter configuration file -->
    <section name="mailCenter" type="Ani.MailCenterConfiguration" requirePermission="false"/>
</configSections>
...
<mailCenter userDiskSpace="25000">
    <mail
        host="my.hostname.com"
        port="366" />
</mailCenter>

Child elements

The child xml element mail is created in the same .cs file as the one above. Here I've added constraints on the port. If the port is assigned a value not in this range, the runtime will complain when the config is loaded.

MailCenterConfiguration.cs:

public sealed class MailCenterConfiguration : ConfigurationSection
{
    [ConfigurationProperty("mail", IsRequired = true)]
    public MailElement Mail
    {
        get { return (MailElement)base["mail"]; }
        set { base["mail"] = value; }
    }

    public class MailElement : ConfigurationElement
    {
        [ConfigurationProperty("host", IsRequired = true)]
        public string Host
        {
            get { return (string)base["host"]; }
            set { base["host"] = value; }
        }

        [ConfigurationProperty("port", IsRequired = true)]
        [IntegerValidator(MinValue = 0, MaxValue = 65535)]
        public int Port
        {
            get { return (int)base["port"]; }
            set { base["port"] = value; }
        }
    }
}

Use

To then use it practically in code, all you have to do is instantiate the MailCenterConfiguration object; this will automatically read the relevant sections from web.config.

MailCenterConfiguration.cs

private static MailCenterConfiguration instance = null;
public static MailCenterConfiguration Instance
{
    get
    {
        if (instance == null)
        {
            instance = (MailCenterConfiguration)WebConfigurationManager.GetSection("mailCenter");
        }
        return instance;
    }
}

AnotherFile.cs

public void SendMail()
{
    MailCenterConfiguration conf = MailCenterConfiguration.Instance;
    SmtpClient smtpClient = new SmtpClient(conf.Mail.Host, conf.Mail.Port);
}

Check for validity

I previously mentioned that the runtime will complain when the configuration is loaded and some data does not comply with the rules you have set up (e.g. in MailCenterConfiguration.cs). I tend to want to know these things as soon as possible when my site fires up. One way to solve this is to load the configuration in Global.asax.cs's Application_Start; if the configuration is invalid, you will be notified by means of an exception. Your site won't start and you will instead be presented detailed exception information in the Yellow Screen of Death.

Global.asax.cs

protected void Application_Start(object sender, EventArgs e)
{
    MailCenterConfiguration conf = MailCenterConfiguration.Instance;
}
Guest [Entry]
Quick'n Dirty:

First create your ConfigurationSection and ConfigurationElement classes:

public class MyStuffSection : ConfigurationSection
{
    ConfigurationProperty _MyStuffElement;

    public MyStuffSection()
    {
        _MyStuffElement = new ConfigurationProperty("MyStuff", typeof(MyStuffElement), null);
        this.Properties.Add(_MyStuffElement);
    }

    public MyStuffElement MyStuff
    {
        get
        {
            return this[_MyStuffElement] as MyStuffElement;
        }
    }
}

public class MyStuffElement : ConfigurationElement
{
    ConfigurationProperty _SomeStuff;

    public MyStuffElement()
    {
        _SomeStuff = new ConfigurationProperty("SomeStuff", typeof(string), "<UNDEFINED>");
        this.Properties.Add(_SomeStuff);
    }

    public string SomeStuff
    {
        get
        {
            return (String)this[_SomeStuff];
        }
    }
}

Then let the framework know how to handle your configuration classes in web.config:

<configuration>
    <configSections>
        <section name="MyStuffSection" type="MyWeb.Configuration.MyStuffSection" />
    </configSections>
    ...

And actually add your own section below:

<MyStuffSection>
    <MyStuff SomeStuff="Hey There!" />
</MyStuffSection>

Then you can use it in your code thus:

MyWeb.Configuration.MyStuffSection configSection = ConfigurationManager.GetSection("MyStuffSection") as MyWeb.Configuration.MyStuffSection;
if (configSection != null && configSection.MyStuff != null)
{
    Response.Write(configSection.MyStuff.SomeStuff);
}
Guest [Entry]
There's an excellent example on MSDN using ConfigurationCollection and .NET 4.5 for custom sections in web.config that has a list of config items.
Guest [Entry]
Custom configuration sections are quite a handy thing, and applications often end up demanding an extendable solution.

For .NET 1.1, please refer to the article http://aspnet.4guysfromrolla.com/articles/020707-1.aspx

Note: The above solution works for .NET 2.0 as well.

For a .NET 2.0 specific solution, please refer to the article http://aspnet.4guysfromrolla.com/articles/032807-1.aspx
Hero’s Formula and The Area of Triangles
Let’s say you have a triangle ABC with sides of length a, b and c.
triangle
Imagine the vertical height and the angles of the triangle are unknown. How would you go about finding the area of the triangle? Well, using the simple \frac{1}{2} \times base \times height formula would obviously be a little tricky since it would require calculating the vertical height of the triangle.
It would be a bit simpler to find the area using the \frac{1}{2} \times a \times b \times \sin c method, but then you will need to use something like the cosine rule to find the angle of at least one of the vertexes. If only there was some simple, elegant method that allowed one to calculate the area of a triangle just from the length of its sides without first having to first figure out the vertical height or the angles of the vertexes of the triangle!
Hero’s formula allows one to do exactly that! Here is how you calculate the area, A, of a triangle with sides ab and c using Hero’s formula…
A = \sqrt{s(s-a)(s-b)(s-c)}
where s is the semi-perimeter of the triangle i.e
s = \frac{a + b + c}{2}
Now, say we have a triangle PQR with sides of length 5, 11 and 12 units respectively…
triangle
Let’s use Hero’s formula to find its area. First let’s calculate its semi-perimeter, s.
s = \frac{a + b + c}{2} \Rightarrow \frac{5 + 11 + 12}{2} \Rightarrow \frac{28}{2} \Rightarrow 14 \: \mathtt{units}
Next use Hero’s formula with s = 14, = 5, = 11 and = 12 …
A = \sqrt{s(s-a)(s-b)(s-c)}
\Rightarrow \sqrt{14(14-5)(14-11)(14-12)}
\Rightarrow \sqrt{14 \times 9 \times 3 \times 2}
\Rightarrow \sqrt{756} \approx 27.5 \: \mathtt{sq. units}
and that’s all there is to it!
Hero’s formula is widely believed to be invented/discovered by the Greek mathematician and engineer, Hero of Alexandria in 60 AD.
Note: For a rigorous proof of Hero’s formula, see this link.
/* xorriso - creates, loads, manipulates and burns ISO 9660 filesystem images.
Copyright 2007-2015 Thomas Schmitt, <[email protected]>
Provided under GPL version 2 or later.
This file contains the implementation of options as mentioned in man page
or info file derived from xorriso.texi.
*/
#ifdef HAVE_CONFIG_H
#include "../config.h"
#endif
#include <ctype.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <time.h>
#include <errno.h>
#include "xorriso.h"
#include "xorriso_private.h"
#include "xorrisoburn.h"
/* Option -iso_rr_pattern "on"|"ls"|"off" */
int Xorriso_option_iso_rr_pattern(struct XorrisO *xorriso, char *mode,int flag)
{
if(strcmp(mode, "off")==0)
xorriso->do_iso_rr_pattern= 0;
else if(strcmp(mode, "on")==0)
xorriso->do_iso_rr_pattern= 1;
else if(strcmp(mode, "ls")==0)
xorriso->do_iso_rr_pattern= 2;
else {
sprintf(xorriso->info_text, "-iso_rr_pattern: unknown mode '%s'", mode);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
return(1);
}
/* Option -jigdo aspect argument */
int Xorriso_option_jigdo(struct XorrisO *xorriso, char *aspect, char *arg,
int flag)
{
int ret;
ret= Xorriso_jigdo_interpreter(xorriso, aspect, arg, 0);
return(ret);
}
/* Option -joliet "on"|"off" */
int Xorriso_option_joliet(struct XorrisO *xorriso, char *mode, int flag)
{
if(strcmp(mode, "off")==0)
xorriso->do_joliet= 0;
else if(strcmp(mode, "on")==0)
xorriso->do_joliet= 1;
else {
sprintf(xorriso->info_text, "-joliet: unknown mode '%s'", mode);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
return(1);
}
/* Command -launch_frontend */
int Xorriso_option_launch_frontend(struct XorrisO *xorriso,
int argc, char **argv, int *idx, int flag)
{
int ret, end_idx;
end_idx= Xorriso_end_idx(xorriso, argc, argv, *idx, 1);
if(xorriso->launch_frontend_banned) {
sprintf(xorriso->info_text,
"-launch_frontend was already executed in this xorriso run");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 0);
ret= 0; goto ex;
}
xorriso->launch_frontend_banned= 1;
if(end_idx <= *idx)
{ret= 1; goto ex;}
if(argv[*idx][0] == 0)
{ret= 1; goto ex;}
xorriso->dialog= 2;
ret= Xorriso_launch_frontend(xorriso, end_idx - *idx, argv + *idx,
"", "", 0);
ex:;
(*idx)= end_idx;
return(ret);
}
/* Option -list_arg_sorting */
int Xorriso_option_list_arg_sorting(struct XorrisO *xorriso, int flag)
{
int ret;
ret= Xorriso_cmd_sorting_rank(xorriso, 0, NULL, 0, 1);
return(ret);
}
/* Option -list_delimiter */
int Xorriso_option_list_delimiter(struct XorrisO *xorriso, char *text,
int flag)
{
int ret, argc;
char **argv= NULL;
if(text[0] == 0) {
sprintf(xorriso->info_text,
"-list_delimiter: New delimiter text is empty");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
if(strlen(text) > 80) {
sprintf(xorriso->info_text,
"-list_delimiter: New delimiter text is too long");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
ret= Sfile_make_argv(xorriso->progname, text, &argc, &argv, 4);
if(ret > 0) {
if(argc > 2) {
sprintf(xorriso->info_text,
"-list_delimiter: New delimiter text contains more than one word");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
}
Sfile_make_argv(xorriso->progname, text, &argc, &argv, 2);
if(argc > 2)
return(0);
}
if(strchr(text, '"') != NULL || strchr(text, '\'') != NULL) {
sprintf(xorriso->info_text,
"-list_delimiter: New delimiter text contains quotation marks");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
strcpy(xorriso->list_delimiter, text);
return(1);
}
/* Option -list_extras */
int Xorriso_option_list_extras(struct XorrisO *xorriso, char *mode, int flag)
{
int ret;
ret= Xorriso_list_extras(xorriso, mode, 0);
return(ret);
}
/* Option -list_formats */
int Xorriso_option_list_formats(struct XorrisO *xorriso, int flag)
{
int ret;
ret= Xorriso_list_formats(xorriso, 0);
return(ret);
}
/* Option -list_speeds */
int Xorriso_option_list_speeds(struct XorrisO *xorriso, int flag)
{
int ret;
ret= Xorriso_list_speeds(xorriso, 0);
return(ret);
}
/* Option -list_profiles */
int Xorriso_option_list_profiles(struct XorrisO *xorriso, char *which,
int flag)
{
int ret;
int mode= 0;
if(strncmp(which,"in",2)==0)
mode|= 1;
else if(strncmp(which,"out",3)==0)
mode|= 2;
else
mode|= 3;
if(mode & 1) {
ret= Xorriso_toc(xorriso, 1 | 16 | 32);
if(ret > 0)
Xorriso_list_profiles(xorriso, 0);
}
if((mode & 2) && xorriso->in_drive_handle != xorriso->out_drive_handle) {
ret= Xorriso_toc(xorriso, 1 | 2 | 16 | 32);
if(ret > 0)
Xorriso_list_profiles(xorriso, 2);
}
return(1);
}
/* Command -lns alias -lnsi */
int Xorriso_option_lnsi(struct XorrisO *xorriso, char *target, char *path,
int flag)
{
int ret;
char *eff_path= NULL, *buffer= NULL, *namept;
Xorriso_alloc_meM(eff_path, char, SfileadrL);
Xorriso_alloc_meM(buffer, char, SfileadrL);
ret= Xorriso_normalize_img_path(xorriso, xorriso->wdi, path, eff_path, 1);
if(ret < 0)
{ret= 0; goto ex;}
if(ret > 0) {
sprintf(xorriso->info_text, "-lns: Address already existing: ");
Text_shellsafe(eff_path, xorriso->info_text, 1);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
{ret= 0; goto ex;}
}
ret= Xorriso_normalize_img_path(xorriso, xorriso->wdi, path, eff_path, 2);
if(ret < 0)
{ret= 0; goto ex;}
ret= Xorriso_truncate_path_comps(xorriso, target, buffer, &namept, 0);
if(ret < 0)
{ret= 0; goto ex;}
ret= Xorriso_graft_in(xorriso, NULL, namept, eff_path, (off_t) 0, (off_t) 0,
1024);
if(ret <= 0)
{ret= 0; goto ex;}
ret= 1;
ex:;
Xorriso_free_meM(buffer);
Xorriso_free_meM(eff_path);
return(ret);
}
/* Option -load session|track|sbsector value */
/* @param flag bit0= with adr_mode sbsector: adr_value is possibly 16 too high
@return <=0 error , 1 success, 2 revoked by -reassure
*/
int Xorriso_option_load(struct XorrisO *xorriso, char *adr_mode,
char *adr_value, int flag)
{
int ret;
if(Xorriso_change_is_pending(xorriso, 0)) {
sprintf(xorriso->info_text,
"-load: Image changes pending. -commit or -rollback first");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
ret= Xorriso_reassure(xorriso, "-load", "loads an alternative image", 0);
if(ret<=0)
return(2);
ret= Xorriso_decode_load_adr(xorriso, "-load", adr_mode, adr_value,
&(xorriso->image_start_mode),
xorriso->image_start_value, flag & 1);
if(ret <= 0)
return(ret);
xorriso->image_start_mode|= (1<<30); /* enable non-default msc1 processing */
if(strlen(xorriso->indev)>0) {
ret= Xorriso_option_rollback(xorriso, 1); /* Load image, no -reassure */
if(ret<=0)
return(ret);
}
return(1);
}
/* Option -logfile */
int Xorriso_option_logfile(struct XorrisO *xorriso, char *channel,
char *fileadr, int flag)
{
int hflag,channel_no= 0, ret;
if(channel[0]==0) {
logfile_wrong_form:;
sprintf(xorriso->info_text,"Wrong form. Correct would be: -logfile \".\"|\"R\"|\"I\"|\"M\" file_address");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 0);
return(0);
}
hflag= 2;
if(channel[0]=='R')
channel_no= 1;
else if(channel[0]=='I')
channel_no= 2;
else if(channel[0]=='M')
channel_no= 3;
else if(channel[0]=='.')
hflag= 4;
else
goto logfile_wrong_form;
if(strcmp(fileadr,"-")==0 || fileadr[0]==0)
hflag|= (1<<15);
xorriso->logfile[channel_no][0]= 0;
ret= Xorriso_write_to_channel(xorriso, fileadr, channel_no, hflag);
if(ret<=0) {
sprintf(xorriso->info_text, "Cannot open logfile: %s", fileadr);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 0);
} else if(!(hflag&(1<<15)))
if(Sfile_str(xorriso->logfile[channel_no], fileadr, 0)<=0)
return(-1);
return(ret>0);
}
/* Options -ls alias -lsi and -lsl alias -lsli
and -lsd alias -lsdi and -lsdl alias -lsdli
and -du alias -dui and -dus alias -dusi
@param flag bit0= long format (-lsl , -du, not -dus, not -ls)
bit1= do not expand patterns but use literally
bit2= -du rather than -ls
bit3= list directories as themselves (-lsd)
*/
int Xorriso_option_lsi(struct XorrisO *xorriso, int argc, char **argv,
int *idx, int flag)
{
int ret, end_idx, filec= 0, nump, i, star= 1;
char **filev= NULL, **patterns= NULL;
off_t mem= 0;
struct stat stbuf;
if(flag & 4) {
if(!(flag & 1))
star= 0;
} else {
if(flag & 8)
star= 0;
}
end_idx= Xorriso_end_idx(xorriso, argc, argv, *idx, 1);
if(xorriso->do_iso_rr_pattern==0)
flag|= 2;
nump= end_idx - *idx;
if((flag&2) && nump>0 ) {
;
} else if(nump <= 0) {
if(Xorriso_iso_lstat(xorriso, xorriso->wdi, &stbuf, 0)<0) {
sprintf(xorriso->info_text,
"Current -cd path does not yet exist in the ISO image");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 0);
{ret= 0; goto ex;}
}
if(!S_ISDIR(stbuf.st_mode)) {
sprintf(xorriso->info_text,
"Current -cd meanwhile points to a non-directory in ISO image");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 0);
{ret= 0; goto ex;}
}
patterns= calloc(1, sizeof(char *));
if(patterns == NULL) {
no_memory:;
sprintf(xorriso->info_text,
"Cannot allocate enough memory for pattern expansion");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FATAL", 0);
{ret= -1; goto ex;}
}
nump= 1;
if(star)
patterns[0]= "*";
else
patterns[0]= ".";
flag&= ~2;
} else {
patterns= calloc(nump, sizeof(char *));
if(patterns==NULL)
goto no_memory;
for(i= 0; i<nump; i++) {
if(argv[i + *idx][0]==0) {
if(star)
patterns[i]= "*";
else
patterns[i]= ".";
} else
patterns[i]= argv[i + *idx];
}
}
if((flag & 1) && !(xorriso->ino_behavior & 32)) {
ret= Xorriso_make_hln_array(xorriso, 0); /* for stbuf.st_nlink */
if(ret < 0)
return(ret);
}
if(flag&2) {
ret= Xorriso_ls_filev(xorriso, xorriso->wdi, nump, argv + (*idx), mem,
flag&(1|4|8));
} else if(nump==1 && strcmp(patterns[0],"*")==0 && !(flag&4)){
/* save temporary memory by calling simpler function */
ret= Xorriso_ls(xorriso, (flag&1)|4);
} else {
ret= Xorriso_expand_pattern(xorriso, nump, patterns, 0, &filec, &filev,
&mem, 0);
if(ret<=0)
{ret= 0; goto ex;}
ret= Xorriso_ls_filev(xorriso, xorriso->wdi, filec, filev, mem,
flag&(1|4|8));
}
if(ret<=0)
{ret= 0; goto ex;}
ret= 1;
ex:;
if(patterns!=NULL)
free((char *) patterns);
Sfile_destroy_argv(&filec, &filev, 0);
(*idx)= end_idx;
return(ret);
}
/* Options -lsx, -lslx, -lsdx , -lsdlx , -dux , -dusx
@param flag bit0= long format (-lslx , -dux)
bit1= do not expand patterns but use literally
bit2= du rather than ls
bit3= list directories as themselves (ls -d)
*/
int Xorriso_option_lsx(struct XorrisO *xorriso, int argc, char **argv,
int *idx, int flag)
{
int ret, end_idx, filec= 0, nump, i;
char **filev= NULL, **patterns= NULL;
off_t mem= 0;
end_idx= Xorriso_end_idx(xorriso, argc, argv, *idx, 1|2);
if(xorriso->do_disk_pattern==0)
flag|= 2;
nump= end_idx - *idx;
if((flag&2) && nump>0) {
;
} else if(nump <= 0) {
patterns= calloc(1, sizeof(char *));
if(patterns == NULL) {
no_memory:;
sprintf(xorriso->info_text,
"Cannot allocate enough memory for pattern expansion");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FATAL", 0);
{ret= -1; goto ex;}
}
nump= 1;
if(flag&8)
patterns[0]= ".";
else
patterns[0]= "*";
flag&= ~2;
} else {
patterns= calloc(nump, sizeof(char *));
if(patterns==NULL)
goto no_memory;
for(i= 0; i<nump; i++) {
if(argv[i + *idx][0]==0)
patterns[i]= "*";
else
patterns[i]= argv[i + *idx];
}
}
if(flag&2) {
ret= Xorriso_lsx_filev(xorriso, xorriso->wdx,
nump, argv + (*idx), mem, flag&(1|4|8));
#ifdef Not_yeT
} else if(nump==1 && strcmp(patterns[0],"*")==0 && !(flag&4)){
/* save temporary memory by calling simpler function */
ret= Xorriso_ls(xorriso, (flag&1)|4);
#endif
} else {
ret= Xorriso_expand_disk_pattern(xorriso, nump, patterns, 0, &filec, &filev,
&mem, 0);
if(ret<=0)
{ret= 0; goto ex;}
ret= Xorriso_lsx_filev(xorriso, xorriso->wdx, filec, filev, mem,
flag&(1|4|8));
}
if(ret<=0)
{ret= 0; goto ex;}
ret= 1;
ex:;
if(patterns!=NULL)
free((char *) patterns);
Sfile_destroy_argv(&filec, &filev, 0);
(*idx)= end_idx;
return(ret);
}
/* Option -map , -map_single */
/* @param flag bit0=do not report the added item
bit1=do not reset pacifier, no final pacifier message
bit5=eventually do not insert directory tree
*/
int Xorriso_option_map(struct XorrisO *xorriso, char *disk_path,
char *iso_path, int flag)
{
int ret;
char *eff_origin= NULL, *eff_dest= NULL, *ipth;
Xorriso_alloc_meM(eff_origin, char, SfileadrL);
Xorriso_alloc_meM(eff_dest, char, SfileadrL);
if(!(flag&2))
Xorriso_pacifier_reset(xorriso, 0);
ipth= iso_path;
if(ipth[0]==0)
ipth= disk_path;
if(disk_path[0]==0) {
sprintf(xorriso->info_text, "-map: Empty disk_path given");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "SORRY", 1);
{ret= 0; goto ex;}
}
ret= Xorriso_normalize_img_path(xorriso, xorriso->wdx, disk_path, eff_origin,
2|4);
if(ret<=0)
goto ex;
ret= Xorriso_normalize_img_path(xorriso, xorriso->wdi, ipth, eff_dest, 2);
if(ret<=0)
goto ex;
ret= Xorriso_graft_in(xorriso, NULL, eff_origin, eff_dest,
(off_t) 0, (off_t) 0, 2|(flag&32));
if(!(flag&2))
Xorriso_pacifier_callback(xorriso, "files added", xorriso->pacifier_count,
xorriso->pacifier_total, "", 1);
if(ret<=0)
goto ex;
if(!(flag&1)) {
sprintf(xorriso->info_text, "Added to ISO image: %s '%s'='%s'\n",
(ret>1 ? "directory" : "file"), (eff_dest[0] ? eff_dest : "/"),
eff_origin);
Xorriso_info(xorriso,0);
}
ret= 1;
ex:;
Xorriso_free_meM(eff_origin);
Xorriso_free_meM(eff_dest);
return(ret);
}
/* Options -map_l , -compare_l , -update_l , -extract_l */
/* @param flag bit4= do not establish and dispose xorriso->di_array
for update_l
bit8-11= mode 0= -map_l
1= -compare_l
2= -update_l
3= -extract_l
*/
int Xorriso_option_map_l(struct XorrisO *xorriso, int argc, char **argv,
int *idx, int flag)
{
int ret, end_idx, optc= 0, was_failure= 0, i, fret, mode, problem_count;
int ns_flag= 2|4, nt_flag= 2, opt_args_flag= 2;
char *source_prefix= NULL, *target_prefix= NULL, *cmd, **optv= NULL;
char *eff_source= NULL, *eff_target= NULL, *source_pt, *s_wd, *t_wd;
char **eff_src_array= NULL, **eff_tgt_array= NULL;
cmd= "-map_l";
s_wd= xorriso->wdx;
t_wd= xorriso->wdi;
Xorriso_pacifier_reset(xorriso, 0);
mode= (flag>>8) & 15;
if(mode==1)
cmd= "-compare_l";
else if(mode==2)
cmd= "-update_l";
else if(mode==3) {
cmd= "-extract_l";
ns_flag= 2;
s_wd= xorriso->wdi;
nt_flag= 2|4;
t_wd= xorriso->wdx;
opt_args_flag= 0;
}
end_idx= Xorriso_end_idx(xorriso, argc, argv, *idx, 1|2);
if(end_idx - (*idx) < 3) {
sprintf(xorriso->info_text, "%s: Not enough arguments given (%d < 3)", cmd,
end_idx - (*idx));
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 1);
ret= 0; goto ex;
}
Xorriso_alloc_meM(source_prefix, char, SfileadrL);
Xorriso_alloc_meM(target_prefix, char, SfileadrL);
Xorriso_alloc_meM(eff_source, char, SfileadrL);
Xorriso_alloc_meM(eff_target, char, SfileadrL);
ret= Xorriso_normalize_img_path(xorriso, s_wd, argv[*idx],
source_prefix, ns_flag | 64);
if(ret<=0)
goto ex;
ret= Xorriso_normalize_img_path(xorriso, t_wd, argv[(*idx)+1],
target_prefix, nt_flag);
if(ret<=0)
goto ex;
ret= Xorriso_opt_args(xorriso, cmd, argc, argv, (*idx)+2, &end_idx,
&optc, &optv, opt_args_flag);
if(ret<=0)
goto ex;
if(mode == 3 &&
(xorriso->do_restore_sort_lba || !(xorriso->ino_behavior & 4))) {
eff_src_array= calloc(optc, sizeof(char *));
eff_tgt_array= calloc(optc, sizeof(char *));
if(eff_src_array == NULL || eff_tgt_array == NULL) {
Xorriso_no_malloc_memory(xorriso, NULL, 0);
ret= -1; goto ex;
}
for(i= 0; i < optc; i++)
eff_src_array[i]= eff_tgt_array[i]= NULL;
}
if(mode == 2 && !((xorriso->ino_behavior & 2) || (flag & 16) ||
xorriso->di_array != NULL)) {
/* Create all-image node array sorted by isofs.di */
ret= Xorriso_make_di_array(xorriso, 0);
if(ret <= 0)
goto ex;
}
for(i= 0; i<optc; i++) {
ret= Xorriso_normalize_img_path(xorriso, s_wd, optv[i],
eff_source, ns_flag);
if(ret<=0)
goto ex;
strcpy(eff_target, target_prefix);
source_pt= eff_source;
if(source_prefix[0]) {
if(strncmp(source_prefix, eff_source, strlen(source_prefix))!=0) {
sprintf(xorriso->info_text, "%s: disk_path ", cmd);
Text_shellsafe(eff_source, xorriso->info_text, 1);
strcat(xorriso->info_text, " does not begin with disk_prefix ");
Text_shellsafe(source_prefix, xorriso->info_text, 1);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 1);
ret= 0; goto ex;
}
source_pt+= strlen(source_prefix);
}
strcat(eff_target, source_pt);
if(mode==0)
ret= Xorriso_option_map(xorriso, eff_source, eff_target, 2);
else if(mode==1)
ret= Xorriso_option_compare(xorriso, eff_source, eff_target, 2|8);
else if(mode==2)
ret= Xorriso_option_update(xorriso, eff_source, eff_target, 2 | 8 | 16);
else if(mode==3) {
if(eff_src_array != NULL) {
eff_src_array[i]= strdup(eff_source);
eff_tgt_array[i]= strdup(eff_target);
if(eff_src_array[i] == NULL || eff_tgt_array[i] == NULL) {
Xorriso_no_malloc_memory(xorriso, &(eff_src_array[i]), 0);
ret= -1; goto ex;
}
} else {
ret= Xorriso_option_extract(xorriso, eff_source, eff_target, 2 | 4);
}
}
if(ret>0 && !xorriso->request_to_abort)
continue; /* regular bottom of loop */
was_failure= 1;
fret= Xorriso_eval_problem_status(xorriso, ret, 1 | 2);
if(fret>=0)
continue;
goto ex;
}
ret= 1;
if(mode == 3 && eff_src_array != NULL) {
ret= Xorriso_lst_append_binary(&(xorriso->node_disk_prefixes),
target_prefix, strlen(target_prefix) + 1, 0);
if(ret <= 0)
goto ex;
ret= Xorriso_lst_append_binary(&(xorriso->node_img_prefixes),
source_prefix, strlen(source_prefix) + 1, 0);
if(ret <= 0)
goto ex;
ret= Xorriso_restore_sorted(xorriso, optc, eff_src_array, eff_tgt_array,
&problem_count, 0);
if(ret <= 0 || problem_count > 0)
was_failure= 1;
}
if(mode==0)
Xorriso_pacifier_callback(xorriso, "files added", xorriso->pacifier_count,
xorriso->pacifier_total, "", 1);
else if(mode==1 || mode==2)
Xorriso_pacifier_callback(xorriso, "content bytes read",
xorriso->pacifier_count, 0, "", 1 | 8 | 32);
else if(mode==3)
Xorriso_pacifier_callback(xorriso, "files restored",xorriso->pacifier_count,
xorriso->pacifier_total, "", 1|4);
ex:;
Xorriso_destroy_node_array(xorriso, 0);
i= optc;
Sfile_destroy_argv(&i, &eff_src_array, 0);
i= optc;
Sfile_destroy_argv(&i, &eff_tgt_array, 0);
Xorriso_free_meM(source_prefix);
Xorriso_free_meM(target_prefix);
Xorriso_free_meM(eff_source);
Xorriso_free_meM(eff_target);
(*idx)= end_idx;
Xorriso_opt_args(xorriso, cmd, argc, argv, *idx, &end_idx, &optc, &optv, 256);
if(ret<=0)
return(ret);
return(!was_failure);
}
/* Option -mark */
int Xorriso_option_mark(struct XorrisO *xorriso, char *mark, int flag)
{
if(mark[0]==0)
xorriso->mark_text[0]= 0;
else
strncpy(xorriso->mark_text,mark,sizeof(xorriso->mark_text)-1);
xorriso->mark_text[sizeof(xorriso->mark_text)-1]= 0;
return(1);
}
/* Option -md5 "on"|"all"|"off" */
int Xorriso_option_md5(struct XorrisO *xorriso, char *mode, int flag)
{
char *npt, *cpt;
int l;
npt= cpt= mode;
for(; npt!=NULL; cpt= npt+1) {
npt= strchr(cpt,':');
if(npt==NULL)
l= strlen(cpt);
else
l= npt-cpt;
if(l == 0)
continue;
if(l == 3 && strncmp(cpt, "off", l) == 0)
xorriso->do_md5&= ~31;
else if(l == 2 && strncmp(cpt, "on", l) == 0)
xorriso->do_md5= (xorriso->do_md5 & ~31) | 7 | 16;
else if(l == 3 && strncmp(cpt, "all", l) == 0)
xorriso->do_md5|= 31;
else if(l == 18 && strncmp(cpt, "stability_check_on", l) == 0)
xorriso->do_md5|= 8;
else if(l == 19 && strncmp(cpt, "stability_check_off", l) == 0)
xorriso->do_md5&= ~8;
else if(l == 13 && strncmp(cpt, "load_check_on", l) == 0)
xorriso->do_md5&= ~32;
else if(l == 14 && strncmp(cpt, "load_check_off", l) == 0)
xorriso->do_md5|= 32;
else {
sprintf(xorriso->info_text, "-md5: unknown mode ");
Text_shellsafe(cpt, xorriso->info_text, 1);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
}
return(1);
}
/* Option -mkdir alias -mkdiri */
int Xorriso_option_mkdiri(struct XorrisO *xorriso, int argc, char **argv,
int *idx, int flag)
{
int i, end_idx, ret, was_failure= 0, fret;
end_idx= Xorriso_end_idx(xorriso, argc, argv, *idx, 0);
for(i= *idx; i<end_idx; i++) {
ret= Xorriso_mkdir(xorriso, argv[i], 0);
if(ret>0 && !xorriso->request_to_abort)
continue; /* regular bottom of loop */
was_failure= 1;
fret= Xorriso_eval_problem_status(xorriso, ret, 1|2);
if(fret>=0)
continue;
goto ex;
}
ret= 1;
ex:;
(*idx)= end_idx;
if(ret<=0)
return(ret);
return(!was_failure);
}
int Xorriso_option_modesty_on_drive(struct XorrisO *xorriso, char *mode,
int flag)
{
char *npt, *cpt, *ppt;
int l, num, set_min;
npt= cpt= mode;
for(; npt!=NULL; cpt= npt+1) {
npt= strchr(cpt,':');
if(npt==NULL)
l= strlen(cpt);
else
l= npt-cpt;
if(l == 0)
continue;
if(l == 3 && strncmp(cpt, "off", l) == 0) {
xorriso->modesty_on_drive= 0;
} else if(l == 1 && strncmp(cpt, "0", l) == 0) {
xorriso->modesty_on_drive= 0;
} else if(l == 2 && strncmp(cpt, "on", l) == 0) {
xorriso->modesty_on_drive= 1;
} else if(l == 1 && strncmp(cpt, "1", l) == 0) {
xorriso->modesty_on_drive= 1;
} else if(l == 2 && strncmp(cpt, "-1", l) == 0) {
;
} else if(*cpt >= '1' && *cpt <= '9') {
ppt= cpt;
set_min= 2;
set_size_percent:;
sscanf(ppt, "%d", &num);
if(num == -1) {
;
} else if(num < 25) {
bad_percent:;
sprintf(xorriso->info_text, "-modesty_on_drive: percentage out of range [25 to 100]");
Text_shellsafe(cpt, xorriso->info_text, 1);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
} else if(num > 100) {
goto bad_percent;
}
if(set_min == 2) {
xorriso->modesty_on_drive= 1;
}
if(set_min)
xorriso->min_buffer_percent= num;
else
xorriso->max_buffer_percent= num;
} else if(l >= 12 && strncmp(cpt, "min_percent=", 12) == 0) {
ppt= cpt + 12;
set_min= 1;
goto set_size_percent;
} else if(l >= 12 && strncmp(cpt, "max_percent=", 12) == 0) {
ppt= cpt + 12;
set_min= 0;
goto set_size_percent;
} else if(l >= 9 && strncmp(cpt, "min_usec=", 9) == 0) {
ppt= cpt + 9;
set_min= 1;
set_sec:;
sscanf(ppt, "%d", &num);
if(num < 0)
num= 0;
if(set_min == 1)
xorriso->min_buffer_usec= num;
else if(set_min == 0)
xorriso->max_buffer_usec= num;
else
xorriso->buffer_timeout_sec= num;
} else if(l >= 9 && strncmp(cpt, "max_usec=", 9) == 0) {
ppt= cpt + 9;
set_min= 0;
goto set_sec;
} else if(l >= 12 && strncmp(cpt, "timeout_sec=", 12) == 0) {
ppt= cpt + 12;
set_min= -1;
goto set_sec;
} else {
sprintf(xorriso->info_text, "-modesty_on_drive: unknown mode ");
Text_shellsafe(cpt, xorriso->info_text, 1);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
}
return(1);
}
/* Options -mount , -mount_cmd , -session_string */
/* @param bit0= -mount_cmd: print mount command to result channel rather
than performing it
bit1= perform -session_string rather than -mount_cmd
*/
int Xorriso_option_mount(struct XorrisO *xorriso, char *dev, char *adr_mode,
char *adr, char *cmd, int flag)
{
int ret, entity_code= 0, m_flag;
char entity_id[81], *mnt;
if(flag & 1)
mnt= "-mount_cmd";
else if(flag & 2)
mnt= "-session_string";
else {
mnt= "-mount";
if(xorriso->allow_restore <= 0) {
sprintf(xorriso->info_text,
"-mount: image-to-disk features are not enabled by option -osirrox");
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
if(Xorriso_change_is_pending(xorriso, 0)) {
sprintf(xorriso->info_text,
"%s: Image changes pending. -commit or -rollback first", mnt);
Xorriso_msgs_submit(xorriso, 0, xorriso->info_text, 0, "FAILURE", 0);
return(0);
}
}
ret= Xorriso_decode_load_adr(xorriso, mnt, adr_mode, adr,
&entity_code, entity_id, 0);
if(ret <= 0)
return(ret);
if(flag & 2)
m_flag= 1 | 4;
else
m_flag= (flag & 1) | 2;
ret= Xorriso_mount(xorriso, dev, entity_code, entity_id, cmd, m_flag);
return(ret);
}
/* Option -mount_opts option[:...] */
int Xorriso_option_mount_opts(struct XorrisO *xorriso, char *mode, int flag)
{
int was, l;
char *cpt, *npt;
was= xorriso->mount_opts_flag;
npt= cpt= mode;
for(cpt= mode; npt!=NULL; cpt= npt+1) {
npt= strchr(cpt,':');
if(npt==NULL)
l= strlen(cpt);
else
l= npt-cpt;
if(l==0)
goto unknown_mode;
if(strncmp(cpt, "shared", l)==0) {
xorriso->mount_opts_flag|= 1;
} else if(strncmp(cpt, "exclusive", l)==0) {
xorriso->mount_opts_flag&= ~1;
} else {
unknown_mode:;
View Full Version : [CLOSED] set combo selected value
susanz
Nov 11, 2014, 3:50 AM
I try to install a load handler to set selected value for comboBox. Is it right to use this.up('combo') ? I got "undefined is not a function " error.
Thanks
-susan
X.Panel()
.Title("Filters")
.Collapsed(false)
.Collapsible(false)
.BodyBorder(0)
.Items(
X.ComboBox()
.MessageBusListeners(new MessageBusListener
{
Name = "Filter." + Model.GridID,
Handler = "this.getStore().load();"
}
)
.Store(X.Store()
.Model(X.Model().Fields(new ModelField("name")))
.Proxy(X.AjaxProxy()
.Url (Url.Action("GetGridDimFilters", "Combo", new { gridId = @Model.GridID }))
.Reader(X.JsonReader().Root("data"))
)
.Listeners(ls =>
ls.Load.Handler = @"var combo = this.up('combo'); combo.setValue(records[0].get(combo.valueField));" // this does not work
)
)
)
Daniil
Nov 11, 2014, 3:37 PM
Hi @susanz,
There is no .up() method in the Store API.
A store is the data layer and it is designed with the concept of loose coupling. In other words a Store doesn't have a reference to a component that it is wired up to.
If you already use MessageBus, I think your scenario is a good chance for that as well, to wire up the ComboBox and its Store.
Here is an example.
View
@{
var X = Html.X();
}
<!DOCTYPE html>
<html>
<head>
<title>Ext.Net.MVC v2 Example</title>
<script>
var onLoad = function(eventName, data) {
this.setValue(data.records[0].data[this.valueField]);
};
</script>
</head>
<body>
@X.ResourceManager()
@(X.ComboBox()
.QueryMode(DataLoadMode.Local)
.Store(X.Store()
.Model(X.Model().Fields("text", "value"))
.Proxy(X.AjaxProxy()
.Url(Url.Action("GetData"))
.Reader(X.JsonReader().Root("data"))
)
.Listeners(events => events.Load.BroadcastOnBus = "Store.Load")
)
.MessageBusListeners(new MessageBusListener
{
Name = "Store.Load",
Fn = "onLoad"
})
)
</body>
</html>
Controller
public ActionResult Index()
{
return this.View();
}
public ActionResult GetData()
{
return this.Store(new object[]
{
new
{
text = "Item 1",
value = 1
},
new
{
text = "Item 2",
value = 2
}
});
}
susanz
Nov 11, 2014, 7:00 PM
Thanks Daniil. That works great for me.
-szhang
Data Scientist: One of the Most Trending Careers in the World
These days, companies are stockpiling huge amounts of data, which is often considered one of their most valuable assets. It is striking that more than 90% of the data available today was generated in the last two years. Earlier, because data was scarce, companies did not know how to extract meaningful and relevant information from what they stored. The advent of data analytics has successfully bridged the gap between the business and that unpolished data.
So it can be concluded that data analytics has completely changed the outlook of firms: using extensive business analytics, companies can take the right decisions and outdo their competitors. Hence, businesses are emphasizing the evaluation of data that specialized computer programs extract from raw data, and are training their staff in how to interpret and communicate the information obtained from this organized data.
Since the importance of data analytics grows stronger every day, companies are appointing discerning professionals who can provide them with broader insights into their structured data. A data scientist is responsible for planning and implementing processes and models for intricate, large-scale datasets that are used for modelling, data mining, and various research purposes.
What are the key responsibilities of a data scientist? Why have they become an integral part of every business?
They take care of the data which affects the business most: A data scientist's primary job is to identify the most relevant data that can help the organization make the right decisions, so that it can grow its business. A data scientist dives into the pool of data and, using experience and knowledge, extracts all the crucial information while ignoring what is irrelevant, so that the organization can take the likely decisions quickly. Suppose a business deals in mobile phones; it will want to find out who is using its phones currently and how it can find more customers like them. Only a trained data scientist can answer these questions, and therefore firms are hiring more data scientists into their core teams.
They present data in a way that everyone can understand: Although a data scientist should be well equipped with technical and programming languages like R and Python, he or she should present information in a simple way so that a layperson can understand the insight in the data. A data scientist should never just show a regression analysis or a plot from R, since only a few people have adequate knowledge of these. Instead, the findings should be presented in a story-telling way, with simple slides and visuals instead of raw numbers. Visualizing and communicating data are equally important, especially for nascent businesses that are making data-driven decisions for the first time, or for firms where these specialists are seen as people who help others make data-oriented decisions. In this way, everyone in an organization can understand which sections of the business require further improvement.
Tag Info
Hot answers tagged
6
What you're looking for is something called serialisation. It's difficult to directly represent a memory structure in such a way as to pass it between processes, because of all sorts of fun things like pointers and buffers. So you need to turn your hash into something simple enough to hand over in a single go. Three key options for this in my opinion: ...
4
Using the .. range operator you can remove a large chunk of code. Here's how you can use it within the framework of your script: use strict; use warnings; # use lexical file handles open my $fh_in, 'sas.txt' or die "Could not open input file"; open my $fh_out, '>', 'out.txt' or die "Could not open output file"; while(<$fh_in>) { # use ...
3
Digest::SHA doesn't have an encode_Base64 method, so it's likely that's what's wrong. If you refer to the documentation (for OO operations) you can: $digest = $sha->digest; # compute digest $digest = $sha->hexdigest; $digest = $sha->b64digest; I presume it's this last you want, but can't say for sure.
3
XML::Simple is a nasty library that really needs to go and die. It coerces XML into perl-like data structures, but does so in a way that means you lose data in the process, and have to guess to convert it back. (It often guesses right, but not always, which is why you get all that nonsense with force array, etc.) That's why it says in the docs: The use ...
2
You can pass multiple arguments as a comma-separated list (just an array): system($path, $arg1, $arg2, $arg3, ...); See: perldoc
2
To get your long keys into individual array refs, you need to change the assignment of your data structure. I've added a bunch of example data. # input data my $vehicles = [ { 'Z' => '12', 'X' => '11', 'Y' => '0', }, { 'Z' => '300', 'X' => '2001', 'Y' => '100000', } ]; my ...
2
cat sas.txt | perl -pe 'BEGIN {$abc=0} m#(current_line)\s+(abc/123)?#; if ($1) { $abc=$2 ? 1 : 0 } if ($abc && m#ins_bug#) {$_=""}' > out.txt Keep track if we are in the "abc/123" section. BEGIN {$abc=0} Match the current_line section, optionally matching abc/123. m#(current_line)\s+(abc/123)?#; If we've matched abc/123, mark it, otherwise ...
2
The readline operator <HANDLE> returns undef when reading from an exhausted filehandle (in scalar handle). So the first warning comes in the 2nd iteration of the while loop when you read the (missing) 4th line from DATA. The next two warnings come from the 3rd iteration, when you call <DATA> twice on an exhausted filehandle. You can fix this by ...
2
It's not really an error, it's just a warning. You should get 2 warnings on even line number files, and 3 warnings on odd numbered files. When it comes through the while expression the first time, it reads from the file fine, then when it reads the second time, the filehandle is empty, and it prints the warning. Because it returned something the ...
1
Multiple wild cards in the path, like Files/*/*.java may not be supported in some basic versions of Windows shell cmd.exe You may want to try using Power shell or some other shell like Cygwin for this functionality Try dir Files/*/*.java or some other command like ls or type in the shell to see if it supports this.
1
Yes, it's relevant that you're on Windows. Wildcard expansion in the Windows shell is buggy and the docs recommend (and provide a technique) to handle them yourself: http://search.cpan.org/dist/perl-5.23.2/README.win32#Command-line_Wildcard_Expansion The default command shells on DOS descendant operating systems (such as they are) usually do not expand ...
1
I thought there was an issue with your call to column_names, as getline returns an array reference whereas column_names expects a list. But after some experimentation it appears that column_names will also accept an array reference. It is just undocumented behaviour. So this is correct $csv->column_names($csv->getline($fh)) I would pull out the ...
1
You are missing the closing ]. Replace: '//input[@id="card.billingAddress.houseNumberOrName"' with: '//input[@id="card.billingAddress.houseNumberOrName"]' HERE ^ Note that you might simplify it by switching directly to the find_element_by_id() method.
1
Use Text::CSV - you can either use print if you've got an array reference to print. (It handles quoting for you). Or you can use column_names to set the column ordering, and print_hr to output a hash ref in that order. It looks like you're not worried about RollNumber so you can cheat and do: foreach my $student ( values %student_hash ) { } To ...
1
Use word-boundary on both sides of your $word. Currently, you are only checking for it at the beginning. You won't need the \s? condition with the \b in place: $text =~ s/\b\Q$word\E\b//sg;
1
Use a CPAN module for this. May save you some headache. But if you still want to use regex for the job, I'd suggest you expand the regex in a script and step through it using the debugger like mentioned above.
1
You can turn on debugging for regex with use re 'debug';. However that would be the wrong approach to take. Your problem here isn't your regex is wrong, it's that regex is fundamentally the wrong tool for XML parsing. (And leaving aside that - your line is just too long to be sensible to use inline like that!) Given your example - it looks like you're ...
1
Something like this will do it: #!/usr/bin/perl use strict; use warnings; use XML::Twig; sub text_handler { my ( $twig, $text_elt ) = @_; print $text_elt -> text; $twig -> purge; } my $twig = XML::Twig -> new ( twig_handlers => { 'text' => \&text_handler } ) -> parsefile ( 'your_xml'); Note the trick here is that ...
Note - you mention (and tag) perl. The below is from a perl perspective. Some of it may be applicable to conventional shell, but I can't say for sure exactly what. perl does support some regex things that are above and beyond the base POSIX spec. The problem with patterns like that, is that you've got delimiters in your pattern. Your initial is failing ...
These long string replacements are always finicky, I think, and it is often best to try and avoid dealing with the content as much as possible. I came up with this, which simply captures the string, takes away the first # and reprints the line: use strict; use warnings; my $search = q|#location ~ \.php$ { # fastcgi_split_path_info ...
PowerCLI: List VMs and mounted ISOs
By: a guest on Nov 15th, 2012 | syntax: PowerShell | size: 0.21 KB
foreach($vm in get-vm) {
    foreach ( $Drive in ( get-cddrive $vm | where {$_.IsoPath -ne $null} ) ) {
        $IsoSplit = $Drive.IsoPath.split('/') ;
        write-host "$($VM.Name) `t $($IsoSplit[$IsoSplit.length-1])"
    }
} ;
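The filename extraction here (split the ISO path on '/' and keep the last element) is language-agnostic; a quick Python sketch with a hypothetical path:

```python
# Hypothetical IsoPath value; real values depend on your datastore layout.
iso_path = "ISO/images/ubuntu-12.04-server-amd64.iso"
parts = iso_path.split("/")
print(parts[-1])  # ubuntu-12.04-server-amd64.iso
```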
Chapter: 7th Maths : Term 3 Unit 2 : Percentage and Simple Interest
Converting Fraction to Percentage
All numbers which are represented using a numerator and a denominator are fractions. They can have any number as a denominator. If the denominator of the fraction is hundred, then it can be expressed very easily as a percentage. Let us try to convert different fractions to percentages.
Example 2.1
Write 1/5 as per cent.
Solution
We have 1/5 = 1/5 × 100/100 = 1/5 ×100% = 100/5 % = 20% .
Example 2.2
Convert 7/4 to per cent.
Solution
We have 7/4 = 7/4 × 100/100 = 7/4 ×100% = 700/4 % = 175% .
Example 2.3
Out of 20 beads, 5 beads are red. What is the percentage of red beads?
Solution
We have 5/20 = 5/20 × 100/100 = 5/20 × 100% = 500/20 % = 25%.
Example 2.4
Convert the fraction 23/30 as per cent.
Solution
We have 23/30 = 23/30 × 100/100 = 23/30 × 100% = 76 2/3 %.
From these examples we see that the percentage of proper fractions are less than 100 and that of improper fractions are more than 100.
Try these
Convert the fractions as percentage.
(i) 1/20 (ii) 13/25 (iii) 45/50 iv 18/5 (v) 27/10 (vi) 72/90
Solution:
(i) 1/20 = 1/20 × 100/100 = 1/20 × 100% = 5 %
(ii) 13/25 = 13/25 × 100/100 = 13 / 25 × 100% = 52 %
(iii) 45/50 = 45/50 × 100/100 = 45 / 50 × 100% = 90 %
(iv) 18/5 = 18/5 × 100/100 = 18 / 5 × 100% = 360 %
(v) 27/10 = 27/10 × 100/100 = 27 / 10 × 100% = 270 %
(vi) 72/90 = 72/90 × 100/100 = 72 / 90 × 100% = 80 %
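All of the conversions above apply a single rule, multiply the fraction by 100/100 and read off the percentage; a short Python sketch (illustrative, not part of the textbook) confirms the worked answers:

```python
from fractions import Fraction

def to_percent(numerator, denominator):
    # fraction × 100/100 = (fraction × 100)%, the rule used in Examples 2.1 to 2.4
    return Fraction(numerator, denominator) * 100

print(to_percent(1, 5))    # 20      (proper fraction -> less than 100%)
print(to_percent(7, 4))    # 175     (improper fraction -> more than 100%)
print(to_percent(23, 30))  # 230/3, i.e. 76 2/3 %
```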
Copyright © 2018-2024 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai.
/*
Copyright (c) 2001-2006, Gerrit Pape
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The name of the author may not be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* Busyboxed by Denys Vlasenko <[email protected]> */
/* Dependencies on runit_lib.c removed */
#include "libbb.h"
#include <dirent.h>
// Must match constants in chpst_main!
#define OPT_verbose (option_mask32 & 0x2000)
#define OPT_pgrp (option_mask32 & 0x4000)
#define OPT_nostdin (option_mask32 & 0x8000)
#define OPT_nostdout (option_mask32 & 0x10000)
#define OPT_nostderr (option_mask32 & 0x20000)
struct globals {
char *set_user;
char *env_user;
const char *env_dir;
const char *root;
long limitd; /* limitX are initialized to -2 */
long limits;
long limitl;
long limita;
long limito;
long limitp;
long limitf;
long limitc;
long limitr;
long limitt;
int nicelvl;
};
#define G (*(struct globals*)&bb_common_bufsiz1)
#define set_user (G.set_user)
#define env_user (G.env_user)
#define env_dir (G.env_dir )
#define root (G.root )
#define limitd (G.limitd )
#define limits (G.limits )
#define limitl (G.limitl )
#define limita (G.limita )
#define limito (G.limito )
#define limitp (G.limitp )
#define limitf (G.limitf )
#define limitc (G.limitc )
#define limitr (G.limitr )
#define limitt (G.limitt )
#define nicelvl (G.nicelvl )
#define INIT_G() do { \
long *p = &limitd; \
do *p++ = -2; while (p <= &limitt); \
} while (0)
static void suidgid(char *user)
{
struct bb_uidgid_t ugid;
if (!get_uidgid(&ugid, user, 1)) {
bb_error_msg_and_die("unknown user/group: %s", user);
}
if (setgroups(1, &ugid.gid) == -1)
bb_perror_msg_and_die("setgroups");
xsetgid(ugid.gid);
xsetuid(ugid.uid);
}
static void euidgid(char *user)
{
struct bb_uidgid_t ugid;
if (!get_uidgid(&ugid, user, 1)) {
bb_error_msg_and_die("unknown user/group: %s", user);
}
xsetenv("GID", utoa(ugid.gid));
xsetenv("UID", utoa(ugid.uid));
}
static void edir(const char *directory_name)
{
int wdir;
DIR *dir;
struct dirent *d;
int fd;
wdir = xopen(".", O_RDONLY | O_NDELAY);
xchdir(directory_name);
dir = opendir(".");
if (!dir)
bb_perror_msg_and_die("opendir %s", directory_name);
for (;;) {
char buf[256];
char *tail;
int size;
errno = 0;
d = readdir(dir);
if (!d) {
if (errno)
bb_perror_msg_and_die("readdir %s",
directory_name);
break;
}
if (d->d_name[0] == '.')
continue;
fd = open(d->d_name, O_RDONLY | O_NDELAY);
if (fd < 0) {
if ((errno == EISDIR) && env_dir) {
if (OPT_verbose)
bb_perror_msg("warning: %s/%s is a directory",
directory_name, d->d_name);
continue;
} else
bb_perror_msg_and_die("open %s/%s",
directory_name, d->d_name);
}
size = full_read(fd, buf, sizeof(buf)-1);
close(fd);
if (size < 0)
bb_perror_msg_and_die("read %s/%s",
directory_name, d->d_name);
if (size == 0) {
unsetenv(d->d_name);
continue;
}
buf[size] = '\n';
tail = strchr(buf, '\n');
/* skip trailing whitespace */
while (1) {
*tail = '\0';
tail--;
if (tail < buf || !isspace(*tail))
break;
}
xsetenv(d->d_name, buf);
}
closedir(dir);
if (fchdir(wdir) == -1)
bb_perror_msg_and_die("fchdir");
close(wdir);
}
static void limit(int what, long l)
{
struct rlimit r;
/* Never fails under Linux (except if you pass it bad arguments) */
getrlimit(what, &r);
if ((l < 0) || (l > r.rlim_max))
r.rlim_cur = r.rlim_max;
else
r.rlim_cur = l;
if (setrlimit(what, &r) == -1)
bb_perror_msg_and_die("setrlimit");
}
static void slimit(void)
{
if (limitd >= -1) {
#ifdef RLIMIT_DATA
limit(RLIMIT_DATA, limitd);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"DATA");
#endif
}
if (limits >= -1) {
#ifdef RLIMIT_STACK
limit(RLIMIT_STACK, limits);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"STACK");
#endif
}
if (limitl >= -1) {
#ifdef RLIMIT_MEMLOCK
limit(RLIMIT_MEMLOCK, limitl);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"MEMLOCK");
#endif
}
if (limita >= -1) {
#ifdef RLIMIT_VMEM
limit(RLIMIT_VMEM, limita);
#else
#ifdef RLIMIT_AS
limit(RLIMIT_AS, limita);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"VMEM");
#endif
#endif
}
if (limito >= -1) {
#ifdef RLIMIT_NOFILE
limit(RLIMIT_NOFILE, limito);
#else
#ifdef RLIMIT_OFILE
limit(RLIMIT_OFILE, limito);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"NOFILE");
#endif
#endif
}
if (limitp >= -1) {
#ifdef RLIMIT_NPROC
limit(RLIMIT_NPROC, limitp);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"NPROC");
#endif
}
if (limitf >= -1) {
#ifdef RLIMIT_FSIZE
limit(RLIMIT_FSIZE, limitf);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"FSIZE");
#endif
}
if (limitc >= -1) {
#ifdef RLIMIT_CORE
limit(RLIMIT_CORE, limitc);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"CORE");
#endif
}
if (limitr >= -1) {
#ifdef RLIMIT_RSS
limit(RLIMIT_RSS, limitr);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"RSS");
#endif
}
if (limitt >= -1) {
#ifdef RLIMIT_CPU
limit(RLIMIT_CPU, limitt);
#else
if (OPT_verbose)
bb_error_msg("system does not support RLIMIT_%s",
"CPU");
#endif
}
}
/* argv[0] */
static void setuidgid(int, char **) ATTRIBUTE_NORETURN;
static void envuidgid(int, char **) ATTRIBUTE_NORETURN;
static void envdir(int, char **) ATTRIBUTE_NORETURN;
static void softlimit(int, char **) ATTRIBUTE_NORETURN;
int chpst_main(int argc, char **argv) MAIN_EXTERNALLY_VISIBLE;
int chpst_main(int argc ATTRIBUTE_UNUSED, char **argv)
{
INIT_G();
if (applet_name[3] == 'd') envdir(argc, argv);
if (applet_name[1] == 'o') softlimit(argc, argv);
if (applet_name[0] == 's') setuidgid(argc, argv);
if (applet_name[0] == 'e') envuidgid(argc, argv);
// otherwise we are chpst
{
char *m,*d,*o,*p,*f,*c,*r,*t,*n;
getopt32(argv, "+u:U:e:m:d:o:p:f:c:r:t:/:n:vP012",
&set_user,&env_user,&env_dir,
&m,&d,&o,&p,&f,&c,&r,&t,&root,&n);
// if (option_mask32 & 0x1) // -u
// if (option_mask32 & 0x2) // -U
// if (option_mask32 & 0x4) // -e
if (option_mask32 & 0x8) limits = limitl = limita = limitd = xatoul(m); // -m
if (option_mask32 & 0x10) limitd = xatoul(d); // -d
if (option_mask32 & 0x20) limito = xatoul(o); // -o
if (option_mask32 & 0x40) limitp = xatoul(p); // -p
if (option_mask32 & 0x80) limitf = xatoul(f); // -f
if (option_mask32 & 0x100) limitc = xatoul(c); // -c
if (option_mask32 & 0x200) limitr = xatoul(r); // -r
if (option_mask32 & 0x400) limitt = xatoul(t); // -t
// if (option_mask32 & 0x800) // -/
if (option_mask32 & 0x1000) nicelvl = xatoi(n); // -n
// The below consts should match #defines at top!
//if (option_mask32 & 0x2000) OPT_verbose = 1; // -v
//if (option_mask32 & 0x4000) OPT_pgrp = 1; // -P
//if (option_mask32 & 0x8000) OPT_nostdin = 1; // -0
//if (option_mask32 & 0x10000) OPT_nostdout = 1; // -1
//if (option_mask32 & 0x20000) OPT_nostderr = 1; // -2
}
argv += optind;
if (!argv || !*argv) bb_show_usage();
if (OPT_pgrp) setsid();
if (env_dir) edir(env_dir);
if (root) {
xchdir(root);
xchroot(".");
}
slimit();
if (nicelvl) {
errno = 0;
if (nice(nicelvl) == -1)
bb_perror_msg_and_die("nice");
}
if (env_user) euidgid(env_user);
if (set_user) suidgid(set_user);
if (OPT_nostdin) close(0);
if (OPT_nostdout) close(1);
if (OPT_nostderr) close(2);
BB_EXECVP(argv[0], argv);
bb_perror_msg_and_die("exec %s", argv[0]);
}
static void setuidgid(int argc ATTRIBUTE_UNUSED, char **argv)
{
const char *account;
account = *++argv;
if (!account) bb_show_usage();
if (!*++argv) bb_show_usage();
suidgid((char*)account);
BB_EXECVP(argv[0], argv);
bb_perror_msg_and_die("exec %s", argv[0]);
}
static void envuidgid(int argc ATTRIBUTE_UNUSED, char **argv)
{
const char *account;
account = *++argv;
if (!account) bb_show_usage();
if (!*++argv) bb_show_usage();
euidgid((char*)account);
BB_EXECVP(argv[0], argv);
bb_perror_msg_and_die("exec %s", argv[0]);
}
static void envdir(int argc ATTRIBUTE_UNUSED, char **argv)
{
const char *dir;
dir = *++argv;
if (!dir) bb_show_usage();
if (!*++argv) bb_show_usage();
edir(dir);
BB_EXECVP(argv[0], argv);
bb_perror_msg_and_die("exec %s", argv[0]);
}
static void softlimit(int argc ATTRIBUTE_UNUSED, char **argv)
{
char *a,*c,*d,*f,*l,*m,*o,*p,*r,*s,*t;
getopt32(argv, "+a:c:d:f:l:m:o:p:r:s:t:",
&a,&c,&d,&f,&l,&m,&o,&p,&r,&s,&t);
if (option_mask32 & 0x001) limita = xatoul(a); // -a
if (option_mask32 & 0x002) limitc = xatoul(c); // -c
if (option_mask32 & 0x004) limitd = xatoul(d); // -d
if (option_mask32 & 0x008) limitf = xatoul(f); // -f
if (option_mask32 & 0x010) limitl = xatoul(l); // -l
if (option_mask32 & 0x020) limits = limitl = limita = limitd = xatoul(m); // -m
if (option_mask32 & 0x040) limito = xatoul(o); // -o
if (option_mask32 & 0x080) limitp = xatoul(p); // -p
if (option_mask32 & 0x100) limitr = xatoul(r); // -r
if (option_mask32 & 0x200) limits = xatoul(s); // -s
if (option_mask32 & 0x400) limitt = xatoul(t); // -t
argv += optind;
if (!argv[0]) bb_show_usage();
slimit();
BB_EXECVP(argv[0], argv);
bb_perror_msg_and_die("exec %s", argv[0]);
}
modding an apk
Discussion in 'Android Hacks and Help' started by scooterpd, May 4, 2010.
1. scooterpd (New Member):
This is probably a stupid question, coming from a stupid noob. Is it possible to modify an apk. For instance, there are three different sms apps that I like and use. Is it possible to somehow combine the key features of each and make a single app that does it all? I like the lightweight and easy interface of ChompSMS, the customization of Handcent SMS, and the key feature of sending sms's at a scheduled date with FutureSMS2. Just wondering if this is possible/legal/frowned upon or what?
2. icculusX (Premium Member, Theme Developer):
Editing graphics in an APK is super easy, depending on the app. However what you are talking about would be super hard, probably possible in a sense, but incredibly hard.
3. pc747 (DF Administrator, Staff Member, Rescue Squad):
If you good with using source codes then yes. But that is beyond my knowledge base.
4. scooterpd (New Member):
Editing graphics in an APK sounds interesting, I will have to look into that. The whole super hard part though is a wee bit deterring. I will look further into it. If you have any suggestions they would be greatly appreciated.
I have written a few GUI-less C++ programs, and a couple of AutoHotKey scripts, but that's about it. I just want to combine key parts, so avoiding having to learn everything about Android source code would be great, but eventually I would want to develop from scratch. Any suggestions for a good starting point?
Unpaired Two-Samples Wilcoxon Test in R
The unpaired two-samples Wilcoxon test (also known as Wilcoxon rank sum test or Mann-Whitney test) is a non-parametric alternative to the unpaired two-samples t-test, which can be used to compare two independent groups of samples. It’s used when your data are not normally distributed.
This article describes how to compute two samples Wilcoxon test in R.
Visualize your data and compute Wilcoxon test in R
R function to compute Wilcoxon test
To perform a two-samples Wilcoxon test comparing two independent samples (x & y), the R function wilcox.test() can be used as follows:
wilcox.test(x, y, alternative = "two.sided")
• x,y: numeric vectors
• alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.
Import your data into R
1. Prepare your data as specified here: Best practices for preparing your data set for R
2. Save your data in an external .txt tab or .csv files
3. Import your data into R as follows:
# If .txt tab file, use this
my_data <- read.delim(file.choose())
# Or, if .csv file, use this
my_data <- read.csv(file.choose())
Here, we’ll use an example data set, which contains the weight of 18 individuals (9 women and 9 men):
# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
# Create a data frame
my_data <- data.frame(
group = rep(c("Woman", "Man"), each = 9),
weight = c(women_weight, men_weight)
)
We want to know, if the median women’s weight differs from the median men’s weight?
Check your data
print(my_data)
group weight
1 Woman 38.9
2 Woman 61.2
3 Woman 73.3
4 Woman 21.8
5 Woman 63.4
6 Woman 64.6
7 Woman 48.4
8 Woman 48.8
9 Woman 48.5
10 Man 67.8
11 Man 60.0
12 Man 63.4
13 Man 76.0
14 Man 89.4
15 Man 73.3
16 Man 67.3
17 Man 61.3
18 Man 62.4
It’s possible to compute summary statistics (median and interquartile range (IQR)) by groups. The dplyr package can be used.
• To install dplyr package, type this:
install.packages("dplyr")
• Compute summary statistics by groups:
library(dplyr)
group_by(my_data, group) %>%
summarise(
count = n(),
median = median(weight, na.rm = TRUE),
IQR = IQR(weight, na.rm = TRUE)
)
Source: local data frame [2 x 4]
group count median IQR
(fctr) (int) (dbl) (dbl)
1 Man 9 67.3 10.9
2 Woman 9 48.8 15.0
Visualize your data using box plots
You can draw R base graphs as described at this link: R base graphs. Here, we’ll use the ggpubr R package for an easy ggplot2-based data visualization
• Install the latest version of ggpubr from GitHub as follows (recommended):
# Install
if(!require(devtools)) install.packages("devtools")
devtools::install_github("kassambara/ggpubr")
• Or, install from CRAN as follows:
install.packages("ggpubr")
• Visualize your data:
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800"),
ylab = "Weight", xlab = "Groups")
Box plots comparing the weight distributions of the two groups
Compute unpaired two-samples Wilcoxon test
Question : Is there any significant difference between women and men weights?
1) Compute two-samples Wilcoxon test - Method 1: The data are saved in two different numeric vectors.
res <- wilcox.test(women_weight, men_weight)
res
Wilcoxon rank sum test with continuity correction
data: women_weight and men_weight
W = 15, p-value = 0.02712
alternative hypothesis: true location shift is not equal to 0
It will give a warning message, saying that “cannot compute exact p-value with tie”. It comes from the assumption of a Wilcoxon test that the responses are continuous. You can suppress this message by adding another argument exact = FALSE, but the result will be the same.
2) Compute two-samples Wilcoxon test - Method 2: The data are saved in a data frame.
res <- wilcox.test(weight ~ group, data = my_data,
exact = FALSE)
res
Wilcoxon rank sum test with continuity correction
data: weight by group
W = 66, p-value = 0.02712
alternative hypothesis: true location shift is not equal to 0
# Print the p-value only
res$p.value
[1] 0.02711657
As you can see, the two methods give the same results.
The p-value of the test is 0.02712, which is less than the significance level alpha = 0.05. We can conclude that men’s median weight is significantly different from women’s median weight with a p-value = 0.02712.
Note that:
• if you want to test whether the median men’s weight is less than the median women’s weight, type this:
wilcox.test(weight ~ group, data = my_data,
exact = FALSE, alternative = "less")
• Or, if you want to test whether the median men’s weight is greater than the median women’s weight, type this
wilcox.test(weight ~ group, data = my_data,
exact = FALSE, alternative = "greater")
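The W statistic above can also be cross-checked by hand: wilcox.test(x, y) reports the number of (x, y) pairs with x greater than y, counting ties as 0.5. A Python sketch (outside R, purely for verification) reproduces both W values from this data set:

```python
def u_statistic(xs, ys):
    # Count pairs where x beats y; ties contribute 0.5.
    # This equals the W that R's wilcox.test(x, y) reports.
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

women = [38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5]
men = [67.8, 60.0, 63.4, 76.0, 89.4, 73.3, 67.3, 61.3, 62.4]

print(u_statistic(women, men))  # 15.0, matches W = 15 from wilcox.test(women_weight, men_weight)
print(u_statistic(men, women))  # 66.0, matches W = 66 from the formula interface
```

The two values always sum to the number of pairs (9 × 9 = 81), which is a quick sanity check.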
Infos
This analysis has been performed using R software (ver. 3.2.4).
Learning AWS: Lambda using VSCode and Python
Continuing the series on AWS, this time I decided to play a bit with one of the platforms of the moment, Lambda. To do so I built an API in Python that reads and writes data in a database, running locally and with several layers of tests.
What is Lambda?
Lambda is an AWS product that runs pieces of code (functions) in isolation from other applications. This adds an extra layer of abstraction, with all the good and bad that implies, and you are billed only for the time the function is actually executing.
You can interact with Lambda through the cloud editor it offers, by uploading the code to S3 using a form, or with tools such as SAM Local that automate the whole process; the latter is the one used in this article.
The goal
For this small project, the plan was to create at least a couple of functions exposing a REST API, write to a database, define as much as possible through code and configuration, and have the deployment done from the console.
The example is a bank with five functions: create, deposit, withdraw, check balance, and transfer.
Writing the first function
The language used in this case was Python, so part of the challenge was learning to write in the syntax of the snake language:
def setup(event, context):
payload = json.loads(event['body'])
account = payload['id']
bank = Bank()
bank.save_account({"id": account, "balance": 0})
return {
"statusCode": 201
}
In Lambda, for the particular case of Python, a function is defined as a method that accepts two parameters: event, which contains the data passed to the function, and context, which carries information about its execution.
This example shows the values passed to the function inside event:
{'httpMethod': 'PUT', 'body': '{"id": "test_alice"}', 'resource': '/account', 'requestContext': {'resourcePath': '/account', 'httpMethod': 'PUT', 'stage': 'prod', 'identity': {'sourceIp': '127.0.0.1:61479'}}, 'queryStringParameters': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'keep-alive', 'Content-Length': '20', 'Host': 'localhost', 'User-Agent': 'python-requests/2.18.4', 'X-Forwarded-Port': '', 'X-Forwarded-Proto': ''}, 'pathParameters': None, 'stageVariables': None, 'path': '/account', 'isBase64Encoded': False}
To continue, the parameters have to be interpreted. Here the body value is a JSON string with the account information, so an object is built from it with json.loads(event['body']).
This gives access to the data passed to the function, which in this case saves the new account through the Bank class, created to abstract away everything related to the database.
Finally, the function must return an object containing at least an HTTP status field in statusCode, in this case 201 (Created).
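The body-parsing step can be exercised on its own with a hypothetical event shaped like the sample above (a sketch, not a deployable handler):

```python
import json

# Hypothetical API Gateway proxy event; only the fields used here are included.
event = {"httpMethod": "PUT", "body": '{"id": "test_alice"}'}

# The handler turns the JSON string in event['body'] into a dict before using it.
payload = json.loads(event["body"])
print(payload["id"])  # test_alice
```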
Using the same logic, the deposit, withdraw, balance and transfer operations can be implemented, although it may be worth separating the function that receives the event from the business logic, using a function called dispatch:
def dispatch(event, context):
requestContext = event["requestContext"]
resourcePath = requestContext["resourcePath"]
method = requestContext["httpMethod"]
if(method == "GET"):
payload = event['queryStringParameters']
else:
payload = json.loads(event['body'])
try:
if(resourcePath == "/account"):
if(method == "PUT"):
setup(payload)
return {
"statusCode": 201
}
elif(method == "GET"):
return {
"statusCode": 200,
"body": get_balance(payload)
}
...
except Exception:
return {
"statusCode": 500
}
This function reads the parameters, interprets the route and calls the function specific to it, and also does some minimal error handling.
Integrating with DynamoDB
Once all the functions are defined, the next step is to write the Bank class, which is the one that writes to the database.
Keeping these responsibilities separate allows, first, better testability and, second, makes the function code independent of the database provider.
import boto3
import os
import uuid
class Bank:
def __init__(self):
endpoint = os.environ.get("ENDPOINT")
if(endpoint is None):
self.dynamodb = boto3.resource('dynamodb')
else:
self.dynamodb = boto3.resource('dynamodb', endpoint_url=endpoint)
self.accounts = self.dynamodb.Table("Accounts")
self.transactions = self.dynamodb.Table("Transactions")
def get_account(self, account_name):
try:
result = self.accounts.get_item(Key={'id': account_name})
return result['Item']
except:
return None
def log_transaction(self, transaction):
transaction["id"] = str(uuid.uuid1())
self.transactions.put_item(Item=transaction)
def save_account(self, account):
self.accounts.put_item(Item=account)
The only notable detail in this class is that it checks whether an endpoint for DynamoDB is defined; this makes it possible to use the same class in the tests.
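That environment-variable switch can be checked in isolation; a minimal sketch of the same pattern, with boto3 left out so it runs anywhere:

```python
import os

def dynamo_endpoint():
    # Mirrors the Bank constructor: a custom endpoint when ENDPOINT is set
    # (e.g. a local DynamoDB in tests), the default AWS endpoint otherwise.
    return os.environ.get("ENDPOINT")

os.environ.pop("ENDPOINT", None)
print(dynamo_endpoint())               # None -> boto3 would use the real service
os.environ["ENDPOINT"] = "http://localhost:8000"
print(dynamo_endpoint())               # http://localhost:8000 -> local DynamoDB
```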
Testing the function
Once the code for the functions and the database connection is written, the next step is to check that everything works as expected.
Setting up the database
To test the interaction with the database you can either configure it with the AWS SDK and the aws configure command, or use a local database, which can be started with the following command:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar --inMemory
This provides a local DynamoDB database ready to use. In this example, table creation is done as part of the unit tests:
table = dynamodb.create_table(
TableName="Accounts",
KeySchema=[
{
'AttributeName': 'id',
'KeyType': 'HASH'
}],
AttributeDefinitions=[
{
'AttributeName': 'id',
'AttributeType': 'S'
}],
ProvisionedThroughput={
'ReadCapacityUnits': 5,
'WriteCapacityUnits': 5
})
table.meta.client.get_waiter('table_exists').wait(TableName='Accounts')
Testing the functions
With the database running and the tables created, it is time to write the tests, such as this one, which checks that a deposit took effect:
...
def test_deposit(self):
function.setup({'id': "Bob"})
function.deposit({'id': "Bob", "amount": 10.0})
self.assertEqual(function.get_balance({'id': "Bob"}), 10)
To run the tests, simply type python function-tests.py from the command line.
Configuring Lambda
To turn the function code into an API that can be queried over HTTP, I used SAM (Serverless Application Model) Local, a set of tools for defining, testing and deploying Lambda functions.
The template
A SAM template is just a subset of a CloudFormation template which, as mentioned in a previous article, is a way to automate the creation and update of resources within AWS.
For this example the template looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: 'A simple banking app built on lambdas'
Resources:
Bank:
Type: AWS::Serverless::Function
Properties:
Handler: function.dispatch
CodeUri: .
Runtime: python3.6
Events:
CreateAccount:
Type: Api
Properties:
Path: '/account/new'
Method: put
GetBalance:
Type: Api
Properties:
Path: '/account'
Method: get
Deposit:
Type: Api
Properties:
Path: '/account/deposit'
Method: post
Transfer:
Type: Api
Properties:
Path: '/account/transfer'
Method: post
Withdraw:
Type: Api
Properties:
Path: '/account/withdraw'
Method: post
Policies:
- Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'dynamodb:PutItem'
- 'dynamodb:GetItem'
Resource: 'arn:aws:dynamodb:*:*:*'
This template defines:
– The handler, the function that will respond to the call
– The events that will trigger the call (several events can trigger the same handler)
– The policies applied (in this case read and write permissions for the database are needed).
Validating and testing the template with SAM
Once all the template values are defined, SAM can be run to check that the template is valid:
sam validate
The SAM client also lets you test the template definition locally:
sam local start-api
This command starts a server on port 3000 against which the functions can be exercised from an HTTP client (in this example I used Postman), to see how the API responds to the different calls.
Integration tests
To confirm that everything works before uploading the template to AWS, I created a slightly different set of tests, this time to verify that the data is transformed correctly, calling the localhost client:
def test_deposit(self):
r = requests.get('http://localhost:3000/account?id=test_bob')
initial_bob_balance = float(r.text)
r = requests.post('http://localhost:3000/account/deposit', data = json.dumps({'id':'test_bob', 'amount': 100}))
self.assertEqual(200, r.status_code)
r = requests.get('http://localhost:3000/account?id=test_bob')
new_balance = float(r.text)
self.assertEqual(new_balance, initial_bob_balance + 100)
To run the tests, simply type python integration-tests.py from the command line.
Deploying the template
Finally, the function has to be packaged and deployed to the AWS console. First, the function is packaged into a zip file, stored in an S3 folder (bucket), and the references are updated in the file named packaged.yaml:
sam package --template-file template.yaml --s3-bucket my-bucket --output-template-file packaged.yaml
Second, CloudFormation is invoked to create all the resources defined in the package, including the source code and the set of permissions:
sam deploy --template-file packaged.yaml --stack-name my-stack-name --capabilities CAPABILITY_IAM
Una vez se haya desplegado el paquete, se puede ir a la consola de AWS y ver cómo se ha desplegado la función:
Captura de pantalla 2018-05-12 a las 21.00.03
Al hacer click, se puede ver además los diferentes endpoints que API Gateway ha creado. así como las diferentes opciones:
[Screenshot: the API Gateway endpoints for the function]
With this, the function is ready to be run.
In summary
In this article, a Lambda function has been defined locally, using as many layers of abstraction as necessary. Different kinds of tests were created to exercise the various integration points, and finally the function was packaged and deployed from the command line.
The code is available on GitHub.
However, a few things were left out of the scope of this article, and remain as suggestions for the reader to keep experimenting:
• Dependency injection and mocking for unit tests
• Integration with a system such as Jenkins to run the tests
• Creating the database from the SAM template
• Using frameworks such as Serverless, which add an even higher layer of abstraction, isolating you from the AWS, Azure, Google Cloud or IBM implementations.
What about you? Have you experimented with Lambdas or another serverless platform?
Tools used
• SAM Local, to test the function locally and to deploy it to AWS.
• Visual Studio Code, to edit the code and to interact with the SAM Local API using its integrated console.
• DynamoDB Local, to emulate a DynamoDB database.
• Boto 3, the official AWS SDK for Python.
• Postman, to test the different APIs, both local and remote.
More documentation and examples
Author: Roberto Luis Bisbé
Software Developer, Computer Engineer
Vasile Turcu - 1 year ago
C++ Question
C++ Copy Constructor Error
Say I have the following piece of code:
class C1{
int x, y;
public:
C1(int a, int b):x(a),y(b)
{
cout<<"const normal C1"<<endl;
}
};
class C2 : protected C1
{
int x, y;
const int z;
public:
C2(int u, int v, int w):C1(u,v),z(w)
{
cout<<"constr normal C2"<<endl;
}
C2(const C2 &a,int u, int v, int w):C1(u,v),z(w)
{
cout<<"cpy constr C2"<<endl;
}
};
int main(){
C2 a(2,3,4);
C2 b=a;
}
When I create the object b, the copy constructor is supposed to be called instead of the normal constructor, but it is not. Now I know that a copy constructor should look like
C(C const& obj){...}
where C is a class, but in this case, I need the other parameters for C1's constructor. Now, my question is, what have I done wrong, or what can I do in order to make the copy constructor work properly?
Answer Source
Your second constructor is not a copy constructor. A constructor is only a copy constructor if it can be called with a single argument which is the object to be copied from.
Why not:
C2(const C2 &a):C1(static_cast<const C1&>(a)), z(a.z)
{
cout<<"cpy constr C2"<<endl;
}
Note that
1. this copy constructor is practically the same as the compiler generated one.
2. the static_cast is unnecessary (I put it in for clarity).
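Putting the answer together, a minimal complete version might look like this (a sketch: the x, y members that shadow the base class in the question are dropped for brevity, and get_z is an accessor added here only to make the copy observable):

```cpp
#include <cassert>
#include <iostream>
using namespace std;

class C1 {
    int x, y;
public:
    C1(int a, int b) : x(a), y(b) {}
};

class C2 : protected C1 {
    const int z;
public:
    C2(int u, int v, int w) : C1(u, v), z(w) {}
    // A single-argument constructor taking const C2&: a true copy constructor.
    C2(const C2& a) : C1(a), z(a.z) {
        cout << "cpy constr C2" << endl;
    }
    int get_z() const { return z; }  // accessor added for the check below
};
```

With this in place, `C2 b = a;` performs copy-initialization and selects the copy constructor, copying the C1 base subobject via its compiler-generated copy constructor.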
So, I am looking to insert two cell values into one cell, one following the other. For an example, see the spreadsheet image below.
I want to put a formula in C such that the result would be ab. I am aware you can set one cell equal to another just by =A1, but I am looking to put both in at the same time. I tried something like =(=A1)(=B1), but that just returns the generic "Formula Parse Error", so I need something that actually works.
• The cell values are text, so you’re not trying to perform mathematical addition. There are a bunch of formulas for joining text, but you could also try “=A1&B1”
– Tedinoz
Nov 6, 2020 at 23:30
1 Answer
Solution:
Use that in cell C1:
=TEXTJOIN(,,A1,B1)
Output:
[Screenshot of the result]
References:
TEXTJOIN
3.4 Evaluating forecast accuracy
Training and test sets
It is important to evaluate forecast accuracy using genuine forecasts. Consequently, the size of the residuals is not a reliable indication of how large true forecast errors are likely to be. The accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model.
When choosing models, it is common practice to separate the available data into two portions, training and test data, where the training data is used to estimate any parameters of a forecasting method and the test data is used to evaluate its accuracy. Because the test data is not used in determining the forecasts, it should provide a reliable indication of how well the model is likely to forecast on new data.
The size of the test set is typically about 20% of the total sample, although this value depends on how long the sample is and how far ahead you want to forecast. The test set should ideally be at least as large as the maximum forecast horizon required. The following points should be noted.
• A model which fits the training data well will not necessarily forecast well.
• A perfect fit can always be obtained by using a model with enough parameters.
• Over-fitting a model to data is just as bad as failing to identify a systematic pattern in the data.
Some references describe the test set as the “hold-out set” because these data are “held out” of the data used for fitting. Other references call the training set the “in-sample data” and the test set the “out-of-sample data.” We prefer to use “training data” and “test data” in this book.
Functions to subset a time series
The window() function introduced in Chapter 2 is useful when extracting a portion of a time series, such as we need when creating training and test sets. In the window() function, we specify the start and/or end of the portion of time series required using time values. For example,
window(ausbeer, start=1995)
extracts all data from 1995 onward.
Another useful function is subset() which allows for more types of subsetting. A great advantage of this function is that it allows the use of indices to choose a subset. For example,
subset(ausbeer, start=length(ausbeer)-4*5)
extracts the last 5 years of observations from ausbeer. It also allows extracting all values for a specific season. For example,
subset(ausbeer, quarter = 1)
extracts the first quarters for all years.
Finally, head and tail are useful for extracting the first few or last few observations. For example, the last 5 years of ausbeer can also be obtained using
tail(ausbeer, 4*5)
Forecast errors
A forecast “error” is the difference between an observed value and its forecast. Here “error” does not mean a mistake, it means the unpredictable part of an observation. It can be written as \[ e_{T+h} = y_{T+h} - \hat{y}_{T+h|T}, \] where the training data is given by \(\{y_1,\dots,y_T\}\) and the test data is given by \(\{y_{T+1},y_{T+2},\dots\}\).
Note that forecast errors are different from residuals in two ways. First, residuals are calculated on the training set while forecast errors are calculated on the test set. Second, residuals are based on one-step forecasts while forecast errors can involve multi-step forecasts.
We can measure forecast accuracy by summarising the forecast errors in different ways.
Scale-dependent errors
The forecast errors are on the same scale as the data. Accuracy measures that are based only on \(e_{t}\) are therefore scale-dependent and cannot be used to make comparisons between series that involve different units.
The two most commonly used scale-dependent measures are based on the absolute errors or squared errors: \[\begin{align*} \text{Mean absolute error: MAE} & = \text{mean}(|e_{t}|),\\ \text{Root mean squared error: RMSE} & = \sqrt{\text{mean}(e_{t}^2)}. \end{align*}\] When comparing forecast methods applied to a single time series, or to several time series with the same units, the MAE is popular as it is easy to both understand and compute. A forecast method that minimises the MAE will lead to forecasts of the median, while minimising the RMSE will lead to forecasts of the mean. Consequently, the RMSE is also widely used, despite being more difficult to interpret.
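As a quick numeric illustration of these two definitions (the surrounding examples use R; this is just a language-neutral sketch with made-up error values):

```python
import math

def mae(errors):
    # Mean absolute error: mean(|e_t|)
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    # Root mean squared error: sqrt(mean(e_t^2))
    return math.sqrt(sum(e * e for e in errors) / len(errors))

errors = [2.0, -1.0, 3.0, -2.0]  # toy forecast errors e_t
print(mae(errors))   # 2.0
print(rmse(errors))  # sqrt(4.5), about 2.12
```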
Percentage errors
The percentage error is given by \(p_{t} = 100 e_{t}/y_{t}\). Percentage errors have the advantage of being unit-free, and so are frequently used to compare forecast performances between data sets. The most commonly used measure is: \[ \text{Mean absolute percentage error: MAPE} = \text{mean}(|p_{t}|). \] Measures based on percentage errors have the disadvantage of being infinite or undefined if \(y_{t}=0\) for any \(t\) in the period of interest, and having extreme values if any \(y_{t}\) is close to zero. Another problem with percentage errors that is often overlooked is that they assume the unit of measurement has a meaningful zero.2 For example, a percentage error makes no sense when measuring the accuracy of temperature forecasts on either the Fahrenheit or Celsius scales, because temperature has an arbitrary zero point.
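A similar sketch for the MAPE (again with made-up values; note the division by each actual value, which is what makes the measure undefined when an observation is zero):

```python
def mape(actual, forecast):
    # Mean absolute percentage error: mean(|100 * e_t / y_t|);
    # raises ZeroDivisionError if any actual value is 0.
    return sum(abs(100 * (y - f) / y)
               for y, f in zip(actual, forecast)) / len(actual)

print(mape([100.0, 200.0], [90.0, 210.0]))  # (10% + 5%) / 2 = 7.5
```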
They also have the disadvantage that they put a heavier penalty on negative errors than on positive errors. This observation led to the use of the so-called “symmetric” MAPE (sMAPE) proposed by Armstrong (1978, p. 348), which was used in the M3 forecasting competition. It is defined by \[ \text{sMAPE} = \text{mean}\left(200|y_{t} - \hat{y}_{t}|/(y_{t}+\hat{y}_{t})\right). \] However, if \(y_{t}\) is close to zero, \(\hat{y}_{t}\) is also likely to be close to zero. Thus, the measure still involves division by a number close to zero, making the calculation unstable. Also, the value of sMAPE can be negative, so it is not really a measure of “absolute percentage errors” at all.
Hyndman & Koehler (2006) recommend that the sMAPE not be used. It is included here only because it is widely used, although we will not use it in this book.
Scaled errors
Scaled errors were proposed by Hyndman & Koehler (2006) as an alternative to using percentage errors when comparing forecast accuracy across series with different units. They proposed scaling the errors based on the training MAE from a simple forecast method.
For a non-seasonal time series, a useful way to define a scaled error uses naïve forecasts: \[ q_{j} = \frac{\displaystyle e_{j}} {\displaystyle\frac{1}{T-1}\sum_{t=2}^T |y_{t}-y_{t-1}|}. \] Because the numerator and denominator both involve values on the scale of the original data, \(q_{j}\) is independent of the scale of the data. A scaled error is less than one if it arises from a better forecast than the average naïve forecast computed on the training data. Conversely, it is greater than one if the forecast is worse than the average naïve forecast computed on the training data.
For seasonal time series, a scaled error can be defined using seasonal naïve forecasts: \[ q_{j} = \frac{\displaystyle e_{j}} {\displaystyle\frac{1}{T-m}\sum_{t=m+1}^T |y_{t}-y_{t-m}|}. \]
The mean absolute scaled error is simply \[ \text{MASE} = \text{mean}(|q_{j}|). \]
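A small sketch of the scaled-error idea (made-up numbers; m = 1 gives the non-seasonal naive scaling, and m = period gives the seasonal one):

```python
def mase(train, test_errors, m=1):
    # Mean absolute scaled error: test-set errors scaled by the
    # in-sample MAE of (seasonal) naive forecasts on the training data.
    scale = sum(abs(train[t] - train[t - m])
                for t in range(m, len(train))) / (len(train) - m)
    return sum(abs(e) for e in test_errors) / len(test_errors) / scale

train = [10.0, 12.0, 14.0, 16.0]  # in-sample naive MAE = 2.0
print(mase(train, [3.0, 1.0]))    # mean |e| = 2.0, so MASE = 1.0
```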
Examples
beer2 <- window(ausbeer,start=1992,end=c(2007,4))
beerfit1 <- meanf(beer2,h=10)
beerfit2 <- rwf(beer2,h=10)
beerfit3 <- snaive(beer2,h=10)
autoplot(window(ausbeer, start=1992)) +
autolayer(beerfit1, series="Mean", PI=FALSE) +
autolayer(beerfit2, series="Naïve", PI=FALSE) +
autolayer(beerfit3, series="Seasonal naïve", PI=FALSE) +
xlab("Year") + ylab("Megalitres") +
ggtitle("Forecasts for quarterly beer production") +
guides(colour=guide_legend(title="Forecast"))
Forecasts of Australian quarterly beer production using data up to the end of 2007.
Figure 3.9: Forecasts of Australian quarterly beer production using data up to the end of 2007.
Figure 3.9 shows three forecast methods applied to the quarterly Australian beer production using data only to the end of 2007. The actual values for the period 2008–2010 are also shown. We compute the forecast accuracy measures for this period.
beer3 <- window(ausbeer, start=2008)
accuracy(beerfit1, beer3)
accuracy(beerfit2, beer3)
accuracy(beerfit3, beer3)
RMSE MAE MAPE MASE
Mean method 38.45 34.83 8.28 2.44
Naïve method 62.69 57.40 14.18 4.01
Seasonal naïve method 14.31 13.40 3.17 0.94
It is obvious from the graph that the seasonal naïve method is best for these data, although it can still be improved, as we will discover later. Sometimes, different accuracy measures will lead to different results as to which forecast method is best. However, in this case, all of the results point to the seasonal naïve method as the best of these three methods for this data set.
To take a non-seasonal example, consider the Google stock price. The following graph shows the 200 observations ending on 6 Dec 2013, along with forecasts of the next 40 days obtained from three different methods.
googfc1 <- meanf(goog200, h=40)
googfc2 <- rwf(goog200, h=40)
googfc3 <- rwf(goog200, drift=TRUE, h=40)
autoplot(subset(goog, end = 240)) +
autolayer(googfc1, PI=FALSE, series="Mean") +
autolayer(googfc2, PI=FALSE, series="Naïve") +
autolayer(googfc3, PI=FALSE, series="Drift") +
xlab("Day") + ylab("Closing Price (US$)") +
ggtitle("Google stock price (daily ending 6 Dec 13)") +
guides(colour=guide_legend(title="Forecast"))
Forecasts of the Google stock price from 7 Dec 2013.
Figure 3.10: Forecasts of the Google stock price from 7 Dec 2013.
googtest <- window(goog, start=201, end=240)
accuracy(googfc1, googtest)
accuracy(googfc2, googtest)
accuracy(googfc3, googtest)
RMSE MAE MAPE MASE
Mean method 114.21 113.27 20.32 30.28
Naïve method 28.43 24.59 4.36 6.57
Drift method 14.08 11.67 2.07 3.12
Here, the best method is the drift method (regardless of which accuracy measure is used).
Time series cross-validation
A more sophisticated version of training/test sets is time series cross-validation. In this procedure, there are a series of test sets, each consisting of a single observation. The corresponding training set consists only of observations that occurred prior to the observation that forms the test set. Thus, no future observations can be used in constructing the forecast. Since it is not possible to obtain a reliable forecast based on a small training set, the earliest observations are not considered as test sets.
The following diagram illustrates the series of training and test sets, where the blue observations form the training sets, and the red observations form the test sets.
The forecast accuracy is computed by averaging over the test sets. This procedure is sometimes known as “evaluation on a rolling forecasting origin” because the “origin” at which the forecast is based rolls forward in time.
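The rolling-origin idea can be sketched in a few lines (a language-neutral illustration: the naive method stands in for the model, and min_train plays the role of the smallest training set considered):

```python
def rolling_origin_errors(series, min_train=3, h=1):
    # A series of test sets, each a single observation; the training
    # set contains only observations prior to it.
    errors = []
    for origin in range(min_train, len(series) - h + 1):
        train = series[:origin]
        forecast = train[-1]  # naive method: repeat the last observation
        errors.append(series[origin + h - 1] - forecast)
    return errors

print(rolling_origin_errors([1.0, 2.0, 4.0, 7.0, 11.0]))  # [3.0, 4.0]
```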
With time series forecasting, one-step forecasts may not be as relevant as multi-step forecasts. In this case, the cross-validation procedure based on a rolling forecasting origin can be modified to allow multi-step errors to be used. Suppose that we are interested in models that produce good \(4\)-step-ahead forecasts. Then the corresponding diagram is shown below.
Time series cross-validation is implemented with the tsCV() function. In the following example, we compare the RMSE obtained via time series cross-validation with the residual RMSE.
e <- tsCV(goog200, rwf, drift=TRUE, h=1)
sqrt(mean(e^2, na.rm=TRUE))
#> [1] 6.233
sqrt(mean(residuals(rwf(goog200, drift=TRUE))^2, na.rm=TRUE))
#> [1] 6.169
As expected, the RMSE from the residuals is smaller, as the corresponding “forecasts” are based on a model fitted to the entire data set, rather than being true forecasts.
A good way to choose the best forecasting model is to find the model with the smallest RMSE computed using time series cross-validation.
Pipe operator
The ugliness of the above R code makes this a good opportunity to introduce some alternative ways of stringing R functions together. In the above code, we are nesting functions within functions within functions, so you have to read the code from the inside out, making it difficult to understand what is being computed. Instead, we can use the pipe operator %>% as follows.
goog200 %>% tsCV(forecastfunction=rwf, drift=TRUE, h=1) -> e
e^2 %>% mean(na.rm=TRUE) %>% sqrt()
#> [1] 6.233
goog200 %>% rwf(drift=TRUE) %>% residuals() -> res
res^2 %>% mean(na.rm=TRUE) %>% sqrt()
#> [1] 6.169
The left hand side of each pipe is passed as the first argument to the function on the right hand side. This is consistent with the way we read from left to right in English. When using pipes, all other arguments must be named, which also helps readability. When using pipes, it is natural to use the right arrow assignment -> rather than the left arrow. For example, the third line above can be read as “Take the goog200 series, pass it to rwf() with drift=TRUE, compute the resulting residuals, and store them as res.”
We will use the pipe operator whenever it makes the code easier to read. In order to be consistent, we will always follow a function with parentheses to differentiate it from other objects, even if it has no arguments. See, for example, the use of sqrt() and residuals() in the code above.
Example: using tsCV()
The goog200 data, plotted in Figure 3.5, includes daily closing stock price of Google Inc from the NASDAQ exchange for 200 consecutive trading days starting on 25 February 2013.
The code below evaluates the forecasting performance of 1- to 8-step-ahead naïve forecasts with tsCV(), using MSE as the forecast error measure. The plot shows that the forecast error increases as the forecast horizon increases, as we would expect.
e <- tsCV(goog200, forecastfunction=naive, h=8)
# Compute the MSE values and remove missing values
mse <- colMeans(e^2, na.rm = T)
# Plot the MSE values against the forecast horizon
data.frame(h = 1:8, MSE = mse) %>%
ggplot(aes(x = h, y = MSE)) + geom_point()
Bibliography
Armstrong, J. S. (1978). Long-range forecasting: From crystal ball to computer. John Wiley & Sons. [Amazon]
Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679–688. [DOI]
1. That is, a percentage is valid on a ratio scale, but not on an interval scale. Only ratio scale variables have meaningful zeros.↩︎
神刀安全网
C++ in Competitive Programming: I/O
Welcome back to my series about Competitive Programming. Here is the introduction in case you missed it.
In this post I’ll explain some common idioms to deal with input and output.
In C++ a simple task like reading an integer from the standard input can be done in different ways: using streams, C functions or OS-dependent calls. The streams model offers a pretty high-level interface, but it is generally slower than using native operating system calls. However, in many cases it is acceptable.
I have solved a lot of challenges and only rarely have I had to switch to C functions (e.g. scanf) or turn off the synchronization between C++ streams and standard C streams after each input/output operation (by using std::ios_base::sync_with_stdio). Most of the time the I/O is not the point of the exercise, so we can use the convenient streams model. This point seems irrelevant but it brings about simplification, enabling us to write not only simpler but also safer idioms. We'll see that in some cases the streams interface is enough for basic parsing as well.
Due to the nature of the challenges – generally not focusing on I/O – I remark that the following idioms are pragmatic: they “work here”, but they may not fit a production environment where streams are often avoided like the plague. As I have read in a comment on reddit: “I fear that it may lead some people to prefer quick and functional solutions that work for some cases, neglecting important traits such as scalability, performance or cross-platform compatibility”. I think that solving challenges is about prototyping a solution which has to cover some sensible scenarios cleverly set up by challenge moderators. “Production” is another business. Definitely. Requirements evolve, scalability matters, etc. Being well-versed in Competitive Programming is not enough, sure, but I think that solving challenges is an entertaining way to face new problems that maybe – at some point – you'll deal with for “real life purposes”.
I repeat a point from the introduction that I consider the most important: competitive programming is guaranteed brain excercise : “on any given day, what you’re asked to do at work may or may not be challenging, or you may be exercising one of the many non-technical skills required of software developers. By practicing regularly with competitive programming problems, you can ensure that the coding part of your brain receives a regular workout. It’s like one of those brain training apps, but for real skills”. In addition to this, other eleven reasons are wisely explained in this nice article which I recommend again.
After this short additional introduction, let me show some code about I/O.
Just for completeness, in the simplest cases you only need to read lonely values like numbers or strings:
int N, M; cin >> N >> M;
Remember in these challenges you don’t need to validate the input (unless stated otherwise – never found).
In one of the most common scenarios you need to read a sequence of numbers, and you can do it by using the synergetic copy_n + istream_iterator + back_inserter trio. That’s our first idiom. Generally the length of the sequence is passed just before (and the numbers are separated by whitespaces):
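A minimal sketch of this idiom (the helper name and the generic istream parameter are mine, so the function can also be fed from a stringstream; with std::cin it behaves the same way):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <sstream>
#include <vector>

// Read "length", then exactly length whitespace-separated ints.
std::vector<int> read_sequence(std::istream& in) {
    std::size_t length = 0;
    in >> length;
    std::vector<int> sequence;
    sequence.reserve(length);  // prepare the vector to host length elements
    std::copy_n(std::istream_iterator<int>(in), length,
                std::back_inserter(sequence));
    return sequence;
}
```

Calling `read_sequence(std::cin)` reads straight from the standard input.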
reserve prepares the vector to host “length” elements – it’s possibly allocating memory. A note: I’d really prefer writing something like vector<int> sequence(reserve, length) where “reserve” here is just a tag. The same applies for resize, etc. And this would avoid initializer lists overload predominance:
I used copy_n instead of copy because not only is it clearer, but also convenient in case of more input to read (if I used copy then I would need to pass an end-of-stream iterator, but passing istream_iterator<int>() is dangerous because copy goes on iterating until the stream gets invalid).
With ranges the code streamlines:
bounded returns a range of exactly length elements, in this case from the standard input (view::take is another possibility).
Since a default operator>> for vector is not defined, reading a matrix needs manual iteration (here using copy_n is even more convenient):
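A sketch of the manual iteration, with one copy_n per row (again reading from a generic istream; the helper name is made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <sstream>
#include <vector>

// Read a rows x cols matrix of ints, row by row.
std::vector<std::vector<int>> read_matrix(std::istream& in,
                                          std::size_t rows, std::size_t cols) {
    std::vector<std::vector<int>> matrix(rows);
    for (auto& row : matrix) {   // manual iteration over the rows...
        row.reserve(cols);
        std::copy_n(std::istream_iterator<int>(in), cols,  // ...copy_n per row
                    std::back_inserter(row));
    }
    return matrix;
}
```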
Remember we don’t want to create a library, because it couldn’t be used to solve challenges. For this reason we don’t introduce an operator>> for vector.
Other cases involve strings, not just words but also lines. Pay attention to getline: as I have recently blogged (for another reason, but the lesson learned was the same), getline is an unformatted function. This means it does not ignore leading delimiters (newline by default). These innocent lines can lead to something you may not expect:
int N; string line; cin >> N; getline(cin, line);
The problem here is that we want to ignore the separator between the int and the line. E.g.:
10\n
a line representing some data
Since getline does not skip the leading '\n', it will stop immediately, writing nothing into the string.
This problem is easy to solve by passing std::ws (a manipulator which discards leading whitespace from an input stream):
This succinct version is valid too:
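Both the explicit fix and the succinct variant can be sketched in one small helper (reading from a generic istream so the behaviour is easy to check; the helper name is mine):

```cpp
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>

// Read a number, then a whole line, skipping the '\n' left in between.
std::string read_line_after_number(std::istream& in, int& n) {
    in >> n;
    std::string line;
    // Explicit fix:   in >> std::ws;  std::getline(in, line);
    // Succinct form (equivalent), folding std::ws into the call:
    std::getline(in >> std::ws, line);
    return line;
}
```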
And here is how the ranges join the party:
In another recurring pattern the input appears in this form:
N T a1 a2 ... aN t1 t2 ... tT
N is the length of a sequence of numbers, T is the number of test cases. Generally this kind of challenges require you to operate on the sequence with some kind of clever pre-processing and to use each test-case value (or values) to perform an operation quickly. For example, for each test-case value t, output the sum of the first t elements in the sequence.
Here is the simplest code to use for reading inputs in this scenario:
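A sketch of this reading pattern, combined with the prefix-sum pre-processing used for the example operation above (the helper name, the use of long long, and reading from a generic istream are my choices):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <numeric>
#include <sstream>
#include <vector>

// Read "N T", the N-element sequence, then answer each of the T
// queries "sum of the first t elements" via a prefix-sum array.
std::vector<long long> answer_prefix_queries(std::istream& in) {
    std::size_t n = 0, t = 0;
    in >> n >> t;
    std::vector<long long> a;
    a.reserve(n);
    std::copy_n(std::istream_iterator<long long>(in), n, std::back_inserter(a));
    std::partial_sum(a.begin(), a.end(), a.begin());  // prefix sums in place
    std::vector<long long> answers;
    answers.reserve(t);
    for (std::size_t i = 0; i < t; ++i) {
        std::size_t q = 0;
        in >> q;
        answers.push_back(a[q - 1]);  // sum of the first q elements (1-based)
    }
    return answers;
}
```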
We’ll meet again this problem later in this series.
More complex input patterns can be handled by combining the examples above.
Printing answers
Just a few lines on output. Obviously, by using std::cout and some convenient standard abstractions like ostream_iterator.
Often the answers are just one number or two. In this case, just send them to the standard output applying the required formatting rules (e.g. space separated):
I generally don’t use endl because it causes a stream flush and this may negatively affect the performance. For this reason I use just “/n” out of habit. In certain situations a printf is probably shorter, but it’s still error prone: not only if you write a wrong format, but also if you forget to update it in case you change a type (e.g. imagine you solve a problem by using int and then you change it to unsigned long long). With streams you don’t need to change anything because they automatically pick the correct operator<< overload.
When you need to print a sequence – like a vector – just use copy + ostream_iterator :
Note that the last separator is printed after the back element. This means extra care is needed to avoid it. For example:
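For instance, a sketch that writes into an ostringstream so the result can be checked; the extra statement after the copy is what avoids the trailing separator:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

// Join "v0 v1 ... vn" without a trailing separator: copy all elements
// but the last with the separator, then print the back element alone.
std::string join(const std::vector<int>& v) {
    std::ostringstream out;
    if (!v.empty()) {
        std::copy(v.begin(), std::prev(v.end()),
                  std::ostream_iterator<int>(out, " "));
        out << v.back();  // the extra line that handles the last element
    }
    return out.str();
}
```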
Maybe in C++17 we’ll use the trailblazing ostream_joiner to save the extra line – since the “last” separator is not outputted at all:
Another worthwhile scenario is when you need to print floating point numbers and the output is expected to be an approximation of a certain number of digits. fixed and setprecision are good friends in this case. For example, sending fixed << setprecision(2) to cout before num1 and num2 will print both with exactly two digits after the comma; for num1 = 1 and num2 = 1.239 it will print:
1.00 1.24
In case you need to print in a loop, it’s a good habit to set stream manipulators once, before the iteration:
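A sketch of that habit (writing to an ostringstream so the output can be checked; the values are made up to match the 1.00 1.24 example above):

```cpp
#include <cassert>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

// Manipulators are sticky: set them once, before the loop.
std::string print_rounded(const std::vector<double>& nums) {
    std::ostringstream out;
    out << std::fixed << std::setprecision(2);
    for (double d : nums)
        out << d << ' ';
    return out.str();
}
```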
I’ll be back on some standard mathematical utilities (e.g. rounding) later in this series.
Pushing streams to the limit
Before concluding, it’s worth sharing a couple of “beyond-the-basics” examples. Sometimes just configuring streams options is enough for solving some problems.
The first challenge required tokenizing a string based on given delimiters and printing the obtained tokens to the standard output, one per line. In other languages like Java you could use a StringTokenizer – indeed lots of Java folks use this opportunity on CC websites. Note that more complex challenges where parsing is the main topic do not allow you to use standard things like streams or tokenizers (in other languages) – and they won't be as efficient as the challenge requires!
This solution is not intended for production.So please don’t post comments like “streams are slow” or “who uses streams to tokenize?”. Here we have a different scenario. This code can be easily plugged into challenges requiring some basic parsing. By default streams skip whitespaces, here we need to skip also other delimiters.
Ranges library provides views like split and delimit for this kind of things:
Anyhow, let me go back to my modest C++14 empty sheet on HackerRank, I have only 30′ to complete the contest and Java boys are submitting two-line solutions…should I roll my own mini-parser?
C++ has a (a bit old-fashioned?) concept called facet to handle specific localization aspects. Basically, each facet corresponds to a precise localization setting that can be tuned. For example, the ctype facet encapsulates character classification features – it can answer questions like “does this char is a space?” or “does this char is a digit?”. One can inherit from a base facet and change default behavior.
For our task we can inherit from ctype<char>, a specialization of std::ctype which encapsulates character classification features for type char. Unlike the general-purpose std::ctype, which uses virtual functions, this specialization uses table lookup to classify characters (which is generally faster). Here is a solution to the challenge:
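A sketch along these lines (tokenize is a small wrapper of mine for testing; the facet name matches the custom_delims discussed in the explanation that follows):

```cpp
#include <cassert>
#include <locale>
#include <sstream>
#include <string>
#include <vector>

// A ctype<char> whose table also classifies our delimiters as "space".
struct custom_delims : std::ctype<char> {
    explicit custom_delims(const std::string& delims)
        : std::ctype<char>(make_table(delims)) {}
    static const mask* make_table(const std::string& delims) {
        // One shared copy of the classic table; fine when a single
        // delimiter set is used per program, as in these challenges.
        static std::vector<mask> table(classic_table(),
                                       classic_table() + table_size);
        for (char d : delims)
            table[static_cast<unsigned char>(d)] |= space;
        return table.data();
    }
};

std::vector<std::string> tokenize(const std::string& s,
                                  const std::string& delims) {
    std::istringstream in(s);
    in.imbue(std::locale(in.getloc(), new custom_delims(delims)));
    std::vector<std::string> tokens;
    for (std::string tok; in >> tok; )
        tokens.push_back(tok);
    return tokens;
}
```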
This requires a bit of clarification if you don't know facets at all: ctype<char> uses a table to associate each char with its “classification” (this table is, indeed, called the classification table). The table size is standard (ctype<char>::table_size, at least 256). We just need to mark our delimiter chars as ctype_base::space. All std::istream formatted input functions are required to use std::ctype<char> for character classification during input parsing. For example, istream::operator>> asks ctype::is whether a char is a space (aka a delimiter). Under the hood, ctype<char> looks up the internal table.
Don’t worry about the lonely custom_delims allocation, the locale guarantees that the new facet will be deleted as soon as it’s not referenced anymore (facets are reference counted – another possible performance penalty, in addition to virtual function calls).
Although I never use such approach for parsing in my production code, in Competitive Programming it may be acceptable. For example, on HackerRank I submitted this code against a test-case requiring to split a string of 400’000 chars and it took (output included) 0.05s – within the time requirement of that challenge. And this code is easily reusable.
I promised two examples. The other was about number punctuation: given an integer number, print the string representation of the number, comma separated, supporting different comma styles, e.g. US 1000000 is 1,000,000, while Indian 1000000 becomes 10,00,000. Once again, we can use the standard localization support:
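A sketch of the Indian-style variant (names are mine; in the grouping string, "\3\2" means a rightmost group of three digits, then repeated groups of two):

```cpp
#include <cassert>
#include <locale>
#include <sstream>
#include <string>

// Group digits Indian-style: rightmost group of 3, then groups of 2.
struct indian_punct : std::numpunct<char> {
    char do_thousands_sep() const override { return ','; }
    std::string do_grouping() const override { return "\3\2"; }
};

std::string indian_format(long long n) {
    std::ostringstream out;
    out.imbue(std::locale(out.getloc(), new indian_punct));  // ref-counted
    out << n;
    return out.str();
}
```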
The code is quite self-explanatory. For more details I encourage you to have a look at the numpunct facet.
Hope you enjoyed this post because sometimes stream capabilities suffice in such competitions – consider them before rolling your own alternatives.
Summary
A brief recap of some weapons to add to our “pragmatic C++ competitive programmer toolkit”:
• Usually I/O is not the point of the exercises, so using streams is fine
• To read a sequence of values into a vector use:
• vector::reserve(N), and
• copy_n + istream_iterator + back_inserter
• To print sequences use copy + ostream_iterator
• To print floating points rounded to the nth-digit use fixed << setprecision(n)
• If parsing is not the point of the challenge, consider localization features
How To Delete Filtered Messages On Facebook? [Solved] 2022 – Best Answer
How do you delete filtered messages on Messenger?
To delete filtered messages on Messenger, open the Messages app and select the message you want to delete. Then tap the three lines in the top-left corner of the screen and select “Delete.
Why do I get filtered messages on Facebook?
Facebook filters messages to make sure they are only sent to people who are following the Facebook page of the sender.
How do I erase private messages on Facebook?
Facebook offers a way to erase private messages by going to the “Message History” section on the Facebook website. There, you can choose to remove all messages from a specific time period or just those from the past week.
How do I permanently delete my Facebook messages on both sides?
There is no one-size-fits-all answer to this question, as the deletion process on Facebook will vary depending on the platform and account you are using. However, some tips on how to delete Facebook messages permanently include using the “Delete All” button on your Facebook page, or by contacting Facebook support.
What are filtered messages on Messenger?
Messenger filters messages by sender, subject, and message type.
How do you check filtered messages on Messenger?
To check filtered messages on Messenger, open the app and select the “Messenger” tab. In the “Message History” section, you will see a list of all messages that have been sent and received in the past 24 hours. You can click on any message to view its full details.
What is message filtering?
Message filtering is a process of suppressing or rejecting messages based on certain criteria.
How can you tell if someone has a secret conversation on Messenger?
If someone is talking on Messenger about a secret topic, their conversation may be difficult to understand. Additionally, if they are using strong language, it may be difficult to hear them.
Can you tell if someone is checking your Messenger?
There is no one-size-fits-all answer to this question, as the use of Messenger depends on the individual user and their personal preferences. However, some users might find it helpful to check their Messenger frequently in order to make sure that they are not missing any important messages or conversations.
How do I permanently delete messages from Messenger?
To permanently delete messages from Messenger, open the app and select “Settings.” Scroll down and select “Message History.” Tap on the “Delete” button.
Does Blocking someone on Facebook delete messages?
No, blocking someone on Facebook does not delete your existing messages; the conversation remains in your inbox until you delete it yourself.
Does deleting a conversation on Messenger delete it for the other person?
Yes, it deletes the conversation for the other person.
Does vanish mode delete messages on both sides?
No, the deletion process happens on the receiving side only.
Does report and block delete messages on both sides?
Yes, both the sender and receiver of a message can delete it.
Are deleted messages on Messenger gone forever?
Yes, deleted messages are gone forever on Messenger.
Broadband Speed Test
Put your connection to the test and find out how fast it really performs
We all know that broadband suppliers advertise their speeds as "up to" a certain level. But how fast is your actual connection? Now you can find out with our broadband speed test: use the checker below. To get accurate results make sure you're not using your internet connection for anything else while the test runs (it only takes a few seconds).
UK average broadband speeds for June 2024
Chart showing the average download speeds across tests carried out by users of the broadband.co.uk speed test in June 2024.
We record hundreds of thousands of test results every month. Broadband provider is detected by IP address rather than self selection. Mobile broadband providers and those that only serve business customers are not counted towards the national average figure.
Ofcom have also published average broadband speeds recorded across the UK, including individual figures for the largest providers, different types of broadband and different types of neighbourhood. These are recorded by a different method involving equipment installed in the home to record speeds during day-to-day usage. Figures are reported twice a year. The last available report gave the UK's national average broadband speed as 28.9Mb in November 2015.
Why should I be testing my broadband speed?
Here are 5 great reasons to be testing your broadband speed:
1. To check that your broadband provider is giving you the speeds they should
2. To see if your broadband speeds vary at different times of day
3. To see if there are faster providers and deals available in your area
4. To compare your speeds with those of your friends and neighbours
5. Your test results will help us compile our monthly speed test results which show the fastest and slowest broadband providers in the UK.
Without broadband speed test tools there would be no way of holding your supplier to account, you just wouldn't know if you were getting the service you are paying for.
Some providers "throttle" their networks at peak times which can cause broadband connections to slow, but testing your connection regularly you can find out if your supplier is doing this to you.
As well as testing your broadband speed, we'll also let you know how your speeds compare to those of other broadband customers and let you know if there are faster providers in your area.
How can I ensure my test is accurate?
Make sure that nothing else is using your Internet connection. This includes other apps on your computer like your email client, instant messenger or browser tabs running Facebook or other constantly updating sites. It can also include online radio, background downloads and updates to your virus checker or operating system.
Make sure that nothing intensive is running on your computer. If your virus checker is in the middle of a scan or you're busy copying files or converting a video, this is likely to slow down your computer and so affect how accurately the speed test can run. If possible close all other apps, browser windows and tabs and try to run the speed test alone. This is especially important if you have a slow computer or you're running the test on a tablet or mobile phone and tend to wait a long time for apps to load.
Make sure that no other devices are using your network, this could include a TV box or games console in the living room or the smartphone in your pocket. If possible turn these off or turn off the wifi connection.
If possible, connect by cable rather than wirelessly. Wireless networks can get slower the further away from the router you are and can suffer interference from thick walls or metal objects. If you're on a mobile device or have to use wireless, try to stay relatively close to your router, preferably within line of sight, and avoid running equipment that can cause interference such as microwave ovens or unshielded flashing lights.
If possible, avoid using proxies, VPNs, and 'Turbo' features: some browsers or broadband providers offer features designed to speed up your connection by processing the data that goes through your connection before you receive it. This can artificially increase your test speed or even prevent the test from completing. Our test uses the HTTP protocol over TCP port 80, so if you're having trouble with the test make sure that this traffic isn't handled by a proxy or blocked by a firewall. Some Internet-connected audio-video equipment, and services designed to allow access to video content in other countries, can also have an effect on our test results.
We recommend running the test more than once to make sure there wasn't some sort of blip during your first test. We'd also recommend using another speed test such as this Broadband Speed Test to compare your results, and testing at several different times of the day and night to see how peaktime network congestion affects your service.
How does the speed test work?
Unlike many speed test providers, our test doesn't require Flash and will instead run on any recent web browser with JavaScript, including tablets such as the iPad, Galaxy Tab, Google Nexus or Kindle Fire. You can even test your speed on a mobile phone without installing any apps! We've tested on a range of smartphones including iPhones, Android phones like the Samsung Galaxy S4, Moto G and HTC One, and recent Windows Phone devices like the Nokia Lumia 630.
The test sends a small amount of data to your browser to get a rough idea of how fast your connection is and to measure latency (how quickly transfers begin), based on this figure it then sends a large amount of data and measures how quickly this is transferred. The data transferred is randomised and designed to not benefit from compression and to avoid caching, however it can be affected by proxies and 'turbo' browser features so ensure that these are switched off before you test. We send enough data to saturate your connection ensuring that even the fastest lines are accurately tested. The test is run multiple times and these figures are averaged to get the best indication of your connection's speed. For the upload portion of the test, a similar process occurs but random data is sent back to our servers.
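The throughput calculation at the heart of such a test is straightforward. A minimal sketch of that arithmetic (illustrative only; the real test runs in the browser and times actual transfers):

```cpp
#include <numeric>
#include <vector>

// One timed transfer: bytes moved and the seconds it took -> megabits per second.
double throughput_mbps(double bytes, double seconds) {
    return bytes * 8.0 / seconds / 1e6;  // bytes -> bits -> megabits
}

// Average several runs, as the test does, to smooth out one-off blips.
double average_mbps(const std::vector<double>& runs) {
    return std::accumulate(runs.begin(), runs.end(), 0.0) /
           static_cast<double>(runs.size());
}
```

Transferring 25,000,000 bytes in 4 seconds, for instance, works out at 50 Mbps.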
The data our test transfers is not stored on your computer. We use browser cookies to remember your last test result, your postcode and the rating you gave your provider. This is to save you time when you next visit the site, allow comparisons with your last result and to prevent too many duplicate provider ratings. Our cookie only lasts 30 days, but will be rewritten with each test.
Your test results are stored on our servers to allow us to report on the average speeds of each of the providers. This helps other visitors to our site to decide which is the best broadband provider for their needs, as do any ratings you leave for your provider.
Rating your provider or entering your postcode are optional, the test will work without these. However if we have your postcode then we're able to show you the estimated speeds for your line and which technologies are available in your area. This allows you to better understand your speed in context and determine whether you have a fault, or if you're just experiencing the speeds expected in your area.
What does my speed actually mean?
It's easy to be confused by terms like Mbps, Mb, MB and MB/s.
Broadband speed is generally measured in megabits per second. This is written as 'Mbps' or just 'Mb', sometimes this is also referred to as 'meg' or 'megs'. A 'bit' is a single binary digit of data. There are 1,000,000 bits in a megabit.
Most computers measure filesize in 'bytes' rather than bits. Bits tend to be abbreviated to lower case b while bytes are abbreviated to upper case B, so a MB is very different to a Mb. Megabytes per second tend to be written as 'MB/s' or 'MB/sec'. There are 8 bits to 1 byte, 1024 bytes to the kilobyte and 1024 kilobytes to the megabyte. This means that there are actually 1,048,576 bytes or 8,388,608 bits in a megabyte.
File transfer speed, as reported during a download, is likely to be measured in megabytes per second rather than megabits per second, which means this is more than 8 times slower than you might expect if you thought megabits and megabytes were equivalent.
In reality a 38Mb fibre broadband connection can only download a file at 4.7MB/s. However for most webpages that's immediate loading, and for the average music album download, that's approximately one track per second. You'd also be able to download the average standard definition movie in just 3.5 minutes or in 15 minutes for a HD film. Obviously on a 76Mb connection these times are roughly halved (or a little over).
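As a quick check of the arithmetic above: the article's 4.7MB/s figure for a 38Mb line corresponds to the decimal conversion (38/8 = 4.75); using the binary megabyte defined earlier the figure comes out slightly lower. A sketch of both conversions:

```cpp
// Megabits per second (1 Mb = 1,000,000 bits) to decimal megabytes per second.
double mbps_to_decimal_mb(double mbps) {
    return mbps / 8.0;  // 8 bits per byte
}

// The same figure using the binary megabyte (1 MB = 1,048,576 bytes).
double mbps_to_binary_mb(double mbps) {
    return mbps * 1e6 / 8.0 / (1024.0 * 1024.0);
}
```

So a 38Mb line moves 4.75 decimal MB/s, or about 4.53 binary MB/s.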
Why Is My Broadband Slower Than Expected?
We often get asked why broadband speed checkers sometimes report download speeds slower than the maximum theoretically possible at a property. We've compiled a list of some of the causes of slow broadband connections.
1. Being signed up to a slow deal. Some people, especially those who have not upgraded their broadband deal for a while, are on packages that will be set at lower speeds than they could get. We'd suggest that you compare the latest deals available at your property using our broadband postcode checker.
2. The quality of the phone line to your house. You really need an engineer to test this properly, but if the line doesn't sound great when you use it for phone calls, you may have a poor quality line.
3. The quality of the phone wiring in your house. It's best to place the router as close as possible to the point where your phone line enters the house. If you are plugging in via a length of poor quality internal wiring this can degrade your signal and hence your download speed. We've seen cases where people used an old extension cable to plug their router in and couldn't get their broadband to work at all as the cable was in such poor condition; we suggested they buy a new cable and after that it worked fine.
4. Your modem/router - Higher quality hardware can improve your download speed.
5. Time of the day - the more people using broadband at your exchange the lower your speeds can potentially get as congestion kicks in, so try testing your speeds at different times of the day.
6. The tasks your computer is carrying out can affect the results of broadband speed checkers: if you are downloading something or your email application is checking mail, these will take up some bandwidth and cause the speed checkers to report a lower speed than you are actually getting.
7. More than one computer on a connection. It is more common nowadays for multiple computers (and other devices like games consoles) to share a connection in a house. So if you are carrying out a speed test and another person is doing something bandwidth-heavy, you may not get properly representative results.
You can use our broadband speed test to test your speeds at any time. We suggest you run speed tests a number of times over a period before reporting issues to your provider's technical support.
If your broadband's slower than expected
1. Work through our tips to speed up your broadband to ensure your home setup is optimised
2. Compare broadband in your area to find faster deals
3. If you believe your broadband provider is at fault, follow our guide to complaining to your supplier
Do I need a faster connection?
If you're a heavy downloader then faster speeds will certainly allow you to download more files more quickly.
If you stream videos, TV shows and films over your connection then download speeds of 5Mb or over should allow high definition content without buffering. However if you live in a home with more than one Internet user, perhaps other family members or housemates, then you can quickly discover that you need several times that at times when everyone's using your connection, especially if more than one of you wants to stream video.
If you currently experience regular buffering of videos and slowdowns when other people are using your connection then you'd also very likely benefit from a speed upgrade.
Will switching providers actually improve my speed?
If you're not currently using a cable or fibre service then switching from standard broadband to a superfast connection will most definitely produce a significant speed improvement.
If you're already on cable or fibre but on a slower package, the speed cap is likely to be artificially enforced. Switching to a package with a higher speed limit will likely see a significant increase in your speeds. On Virgin Media cable services this is likely to always be the case, as their lines are rated to be able to achieve the maximum speeds.
On a fibre to the cabinet service (such as BT Infinity), your distance from the street cabinet can reduce the maximum speed that your line can support. If you're currently on a half speed up to 38Mb service and currently seeing less than 38Mb then it's unlikely that you'll be able to benefit from switching to a faster package. If you get the full 38Mb then you still may not get the full 76Mb when you upgrade but instead get a lower figure between those two, determined by your line length. If you're unsure about this, your provider will be able to give you an accurate estimate of the maximum speed your line can support.
If you're only able to switch from one standard phoneline broadband service to another then speed changes are unlikely to be as significant as switching from one broadband technology to another. However if you're on a smaller budget provider then factors such as traffic management and peak time congestion may be slowing you down either due to artificial limits or the supplier's network capacity being saturated. In this case switching to a more expensive small provider with a better reputation, or switching to a larger supplier who has invested in their network capacity to allow them to advertise a 'truly unlimited' service may result in faster download speeds at peak times (i.e. evenings). However this is unlikely to be a very significant improvement, no more than increasing your evening speed to what you're currently able to achieve at off peak times like the early morning.
If you're on a rural exchange with very little competition and there's only standard phone line broadband available, BT may not have upgraded your exchange from up to 8Mb ADSL Max technology to the faster up to 24Mb ADSL2+ equipment. If this is the case then LLU providers like TalkTalk or Sky may have put their own equipment into your exchange, allowing you to possibly get increases in speed of more than double. However the size of this increase will depend on line length in terms of distance from the telephone exchange. If you're currently seeing maximum speeds of 7.5Mb on ADSL Max then switching to ADSL2+ might see your speed increase to between 16 and 19Mb. But if you're a very long distance from the exchange you may see no increase, or only a small increase of 0.5 to 1Mb. Our availability checker should be able to give you some idea of the speed difference you'll likely see and your new provider must give you an accurate estimate of speeds before you sign up.
KingSpec's 2TB Multicore PCI-E SSD whopper vs the rest
Earlier this year we took a look at SSD caching, an alternative way of getting some SSD performance added to a system without too much strain put on the finances. So how about a gander at the other end of the spectrum – huge capacity and performance and a total disregard for the budget? Enter the KingSpec Multicore MC1S81M2T, a …
COMMENTS
This topic is closed for new posts.
Am I missing something?
The LSI MegaRaid 9271-4i with four Crucial M4 512GB drives comes in at 1576.23 including delivery, with performance not far outside the KingSpec, having quoted the KingSpec secondary setup as the second best performer based on the ATTO benchmark.
Do the lower capacity KingSpec cards stay in the 8-up formation, or drop to 6-up and 4-up? At which point, for the 1TB, is the performance comparable to the above specification?
Re: Am I missing something?
Personally I'm still trying to work out how 8 x 240GB = 2TB?
There's either some serious rounding error involved, or someone is screwing us out of 100GB or so (even before other losses due to formatting and block sizes etc).
Anonymous Coward
Re: Am I missing something?
Easy: (240GB x 8) = 2061584302080 bytes = 2TB (base 10) = 1.875TB (base 2)
Storage manufacturers use a different measuring stick, as they have always done. This is becoming somewhat more of an issue now that storage is getting so large and the discrepancy is getting so big. It's one of the reasons people have tried to implement the binary-prefix ('ibi') standard (MiB, GiB, etc.) for the base 2 values.
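The commenter's figures can be verified quickly. This sketch assumes the eight modules are 240 GiB each, i.e. binary gigabytes (an assumption, but the only reading that reproduces the byte count quoted):

```cpp
#include <cstdint>

constexpr std::int64_t GiB = 1024LL * 1024 * 1024;
constexpr std::int64_t total_bytes = 8LL * 240 * GiB;  // 2,061,584,302,080 bytes
constexpr double tb_base10 = total_bytes / 1e12;       // ~2.06 TB (base 10)
constexpr double tib_base2 = static_cast<double>(total_bytes) /
                             (1024.0 * GiB);           // 1.875 TiB (base 2)
```

With decimal 240GB modules instead, the total would be 1.92TB, or about 1.75TiB, the figure quoted further down the thread.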
Re: Am I missing something?
Don't forget, I'm sure the 240GB will be "decimal" 1000bytes to the K, 1000K to the M, 1000M to the G too...
So by the time that's factored in the true storage to the IT geek mind is 1.75TB.
But that's not such a good headline grabber is it!
Real World Priorities
In the real world, if you're moving small files around, any modern disk will move them "instantly", so the small file performance is of little concern, unless you're moving a huge number of them.
Where this drive excels is when moving/saving the sort of files that normally send you off for a coffee and few games of freecell (like opera recordings, or HD video).
Want one!
PS I didn't see any DATA on boot times. How bad is it?
Re: Real World Priorities
> In the real world, if you're moving small files around, any modern disk will move them "instantly", so the small file performance is of little concern, unless you're moving a huge number of them.
That's not the disk though, that's the caching on your OS accelerating your reads by holding recently read data in memory and instantly acknowledging writes before they've actually been committed to disk.
Do the same operation again with synchronous write semantics and a cold read cache and you'll see a massive advantage to the SSD.
> Where this drive excels is when moving/saving the sort of files that normally send you off for a coffee and few games of freecell (like opera recordings, or HD video).
What sort of shit heap have you got on your desk that means you have to get coffee while it copies files?
Re: Real World Priorities
Real world example: download a multi-gigabyte recording of one of the recent Wagner operas featured at the Proms. It's in AAC format. Open file in audio editor to edit into separate Acts. Save resulting wav file on HDD before editing. Plenty of time for coffee and freecell.
Re: Real World Priorities
A crap corporate network share or SharePoint. Take your pick.
MCG
Re: Real World Priorities
>What sort of shit heap have you got on your desk that means you have to get coffee while it copies files?
Probably a recent model Apple desktop.... got a 2013 i5 iMac right here and it is a pisspoor performer in every respect, the very epitome of style over performance :)
What about databases?
It would have been good to see a database IOPS test. Database reads/writes are like moving small files but databases can be massive in size. So would the KingSpec SSD bomb or win in this scenario? We're always looking for ways to speed up databases from a hardware aspect.
SandForce
Stopped reading after the word 'SandForce'. Will continue reading if somebody confirms they have managed to create a controller that isn't full of bugs.
Questionable
There are three issues I see with this:
1) In any even remotely realistic use-case, you will run out of another resource (CPU or network) long before you reach the level of sequential I/O performance this device can allegedly deliver.
2) The test graphs only cover sequential I/O, not random I/O. What is the 4KB random-write performance on this device after it has been "primed" with a full fill from /dev/urandom?
3) Being based on mSATA SSDs, this is little more than an 8-port SATA card.
So what is the big deal? You can easily achieve similar performance using a decent 8-port SAS card and 8 similar SATA SSDs (e.g. from Intel or Kingston).
No Comparison to other PCI-E SSD's?
This data is pretty useless then in terms of comparing to what else is out there on the market. Why not compare it to the read/write speeds of a floppy disk drive whilst you are at it?
Sorry to be a bit down about this post but I was expecting more. A quick Google search of PCI-E SSD on shopping will show a load of other PCI-E drives available. Yes, you may be looking at 2TB of storage, but why compare speeds if it's all about size...
"huge capacity and performance and a total disregard for the budget"
So it's the IT version of Top Gear then?
and on that bombshell.....
But I want one!!!
Ok... yes it's expensive and I'm sure that within a couple of years it will be smaller and more efficient.
But if I had unlimited funds... I'd build out a small hadoop cluster using low powered Intel chips, 1 or 2 of these cards in Streacom FC10 chassis with a small SSD for OS and stuff.
While it will be very expensive and wouldn't be as fast or as powerful as other options... it would be one beautiful looking box (remember those Crays?) and could sit in your office conference room for demos.
Yes, its not for everyone... but it would be very cool.
Hourses for courses?
Large files served up fast?
Lots of scope for the "adult entertainment" market perhaps?
Re: Hourses for courses?
> Large files served up fast?
>
> Lots of scope for the "adult entertainment" market perhaps?
The problem with that is you have to be copying to another copy of the relevant hardware or you are going to bottleneck immediately on whatever that other thing is (network,storage).
It's like getting excited that your new spinny disk can do reads at 160MB/s when everything reading from it will be bottlenecked by GigE and limited to a mere 100MB/s at best.
Re: Hourses for courses?
"It's like getting excited that your new spinny disk can do reads at 160MB/s when everything reading from it will be bottlenecked by GigE and limited to a mere 100MB/s at best."
Of course that 160Mb/s falls off rapidly when the reads are random, which is where SSDs excel.
The first use I can think of for a 2Tb card is as cache for a large ZFS array, coupled with a decent PCIe write cache of 8Gb or so (they're a LOT cheaper than the 1600 quid you'll fork out for a STEC ZeusRAM drive and hellaciously faster on latencies because they're not constrained by 6Gb SAS/SATA interfaces.)
The real question for such a card is "How does it stack up against FusionIO's offering?", not "how does it stack up against a bunch of drives on a bandwidth-limited bus?"
As someone else stated: "sandforce" is a warning label for performance equipment.
Slots.
There is something fiddley to be aware of.
If cardlanes == slotlanes, they fit together easily.
If cardlanes < slotlanes, the card fits in and works, though the slot will have a few lanes wasted.
If cardlanes > slotlanes, then *electronically* they work, though the card won't run at full speed. Mechanically, it won't actually fit into the slot. Not until you do some delicate work with a Dremel. Once the appropriate (literal) hacking has been done, then you can use it.
Sandforce.
That'll be a £3.5k brick within 6 months then.
I've heard of KingSpec
Years ago I found out that they did a decent capacity, OK-ish price 1.8" PATA SSD that would fit in a Rio Karma, so I got one via eBay. It never worked properly. Sent it back for repair, at my cost; almost 1/3 the amount of the overall drive. Came back four weeks later, still didn't work properly in any PATA device. Basically just flash errors everywhere. Just had to write it off. I won't touch KingSpec kit again after that. A dud device is one thing, but acknowledging a fault, "repairing" it at the owner's cost and sending back a still broken device? No thanks.
Horseless carriage?
Idle wonder: currently we are using Flash to emulate disk drives. It reminds me of how early disk drives could be used as emulated tape drives. What would a greenfield system architecture designed around Flash storage look like?
waiting for the right time..
..to tell my boss that without one of these we can't get the job done.
so not anytime soon then ..sigh..
Anonymous Coward
It's cheaper than 1TB of savvis SAN shared storage which would lose a race with a 2 legged dog. Well, cheaper after 3 months anyway.
what goes around
comes around,
I seem to remember,
the original disc drives, all 10 M of them,
plugged into the Apple II bus directly,
SATA Bottleneck
I've an idea - we could have several of these serial ports working together to transfer data simultaneously. I even have a name for such a data bus where several serial ata lines work in parallel: I call it the trouser press!
Would have been more interesting if the Reviewer had asked *why* he wasn't getting quoted speeds?
Was the manufacturer lying perhaps?
Or did he not want to upset them and stop getting new toys to play with?