getCurrentTextEditor on MacOSX ignores space in input
My AU Plugin works in REAPER.
I’m using Label->getCurrentTextEditor() to edit label text. Everything works normally on Windows: all alphanumeric characters and spaces are entered without any issues.
But on MacOSX, when I enter a ‘space’ the character is ignored and instead the ‘Play’ button is triggered in the REAPER window behind my plugin window.
Is it possible that this behavior is due to a limitation of the keyboard type UIKeyboardTypeDefault?
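For reference, my setup is roughly the following (a simplified sketch with placeholder names, not the actual plugin code):

// Simplified sketch (placeholder names). The label creates its own
// TextEditor when edited; on macOS the space presses never reach it.
class MyEditor : public juce::AudioProcessorEditor
{
public:
    explicit MyEditor (juce::AudioProcessor& p) : juce::AudioProcessorEditor (p)
    {
        addAndMakeVisible (nameLabel);
        nameLabel.setEditable (true);      // single-click starts editing
        nameLabel.onEditorShow = [this]
        {
            if (auto* ed = nameLabel.getCurrentTextEditor())
                ed->setInputRestrictions (0);  // no length/charset limits
        };
        setSize (300, 100);
    }

private:
    juce::Label nameLabel;
};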
In REAPER’s plugin options menu (the “+” button above the editor window) there’s an option to “Send all keyboard input to plugin”. You can click this to send spacebar presses to the plugin instead of to the REAPER main window.
Wow, nice hint, thanks a lot! It’s really strange that this option is disabled by default.
LAPACK 3.4.2
LAPACK: Linear Algebra PACKage
slasyf.f
*> \brief \b SLASYF computes a partial factorization of a real symmetric matrix, using the diagonal pivoting method.
*
*  =========== DOCUMENTATION ===========
*
* Online html documentation available at
*            http://www.netlib.org/lapack/explore-html/
*
*> \htmlonly
*> Download SLASYF + dependencies
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/slasyf.f">
*> [TGZ]</a>
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/slasyf.f">
*> [ZIP]</a>
*> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/slasyf.f">
*> [TXT]</a>
*> \endhtmlonly
*
*  Definition:
*  ===========
*
*       SUBROUTINE SLASYF( UPLO, N, NB, KB, A, LDA, IPIV, W, LDW, INFO )
*
*       .. Scalar Arguments ..
*       CHARACTER          UPLO
*       INTEGER            INFO, KB, LDA, LDW, N, NB
*       ..
*       .. Array Arguments ..
*       INTEGER            IPIV( * )
*       REAL               A( LDA, * ), W( LDW, * )
*       ..
*
*
*> \par Purpose:
*  =============
*>
*> \verbatim
*>
*> SLASYF computes a partial factorization of a real symmetric matrix A
*> using the Bunch-Kaufman diagonal pivoting method. The partial
*> factorization has the form:
*>
*> A  =  ( I  U12 ) ( A11  0  ) (  I       0     )  if UPLO = 'U', or:
*>       ( 0  U22 ) (  0   D  ) ( U12**T  U22**T )
*>
*> A  =  ( L11  0 ) (  D   0  ) ( L11**T  L21**T )  if UPLO = 'L'
*>       ( L21  I ) (  0  A22 ) (  0       I     )
*>
*> where the order of D is at most NB. The actual order is returned in
*> the argument KB, and is either NB or NB-1, or N if N <= NB.
*>
*> SLASYF is an auxiliary routine called by SSYTRF. It uses blocked code
*> (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or
*> A22 (if UPLO = 'L').
*> \endverbatim
*
*  Arguments:
*  ==========
*
*> \param[in] UPLO
*> \verbatim
*>          UPLO is CHARACTER*1
*>          Specifies whether the upper or lower triangular part of the
*>          symmetric matrix A is stored:
*>          = 'U':  Upper triangular
*>          = 'L':  Lower triangular
*> \endverbatim
*>
*> \param[in] N
*> \verbatim
*>          N is INTEGER
*>          The order of the matrix A.  N >= 0.
*> \endverbatim
*>
*> \param[in] NB
*> \verbatim
*>          NB is INTEGER
*>          The maximum number of columns of the matrix A that should be
*>          factored.  NB should be at least 2 to allow for 2-by-2 pivot
*>          blocks.
*> \endverbatim
*>
*> \param[out] KB
*> \verbatim
*>          KB is INTEGER
*>          The number of columns of A that were actually factored.
*>          KB is either NB-1 or NB, or N if N <= NB.
*> \endverbatim
*>
*> \param[in,out] A
*> \verbatim
*>          A is REAL array, dimension (LDA,N)
*>          On entry, the symmetric matrix A.  If UPLO = 'U', the leading
*>          n-by-n upper triangular part of A contains the upper
*>          triangular part of the matrix A, and the strictly lower
*>          triangular part of A is not referenced.  If UPLO = 'L', the
*>          leading n-by-n lower triangular part of A contains the lower
*>          triangular part of the matrix A, and the strictly upper
*>          triangular part of A is not referenced.
*>          On exit, A contains details of the partial factorization.
*> \endverbatim
*>
*> \param[in] LDA
*> \verbatim
*>          LDA is INTEGER
*>          The leading dimension of the array A.  LDA >= max(1,N).
*> \endverbatim
*>
*> \param[out] IPIV
*> \verbatim
*>          IPIV is INTEGER array, dimension (N)
*>          Details of the interchanges and the block structure of D.
*>          If UPLO = 'U', only the last KB elements of IPIV are set;
*>          if UPLO = 'L', only the first KB elements are set.
*>
*>          If IPIV(k) > 0, then rows and columns k and IPIV(k) were
*>          interchanged and D(k,k) is a 1-by-1 diagonal block.
*>          If UPLO = 'U' and IPIV(k) = IPIV(k-1) < 0, then rows and
*>          columns k-1 and -IPIV(k) were interchanged and D(k-1:k,k-1:k)
*>          is a 2-by-2 diagonal block.  If UPLO = 'L' and IPIV(k) =
*>          IPIV(k+1) < 0, then rows and columns k+1 and -IPIV(k) were
*>          interchanged and D(k:k+1,k:k+1) is a 2-by-2 diagonal block.
*> \endverbatim
*>
*> \param[out] W
*> \verbatim
*>          W is REAL array, dimension (LDW,NB)
*> \endverbatim
*>
*> \param[in] LDW
*> \verbatim
*>          LDW is INTEGER
*>          The leading dimension of the array W.  LDW >= max(1,N).
*> \endverbatim
*>
*> \param[out] INFO
*> \verbatim
*>          INFO is INTEGER
*>          = 0: successful exit
*>          > 0: if INFO = k, D(k,k) is exactly zero.  The factorization
*>               has been completed, but the block diagonal matrix D is
*>               exactly singular.
*> \endverbatim
*
*  Authors:
*  ========
*
*> \author Univ. of Tennessee
*> \author Univ. of California Berkeley
*> \author Univ. of Colorado Denver
*> \author NAG Ltd.
*
*> \date September 2012
*
*> \ingroup realSYcomputational
*
*  =====================================================================
      SUBROUTINE slasyf( UPLO, N, NB, KB, A, LDA, IPIV, W, LDW, INFO )
*
*  -- LAPACK computational routine (version 3.4.2) --
*  -- LAPACK is a software package provided by Univ. of Tennessee,    --
*  -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
*     September 2012
*
*     .. Scalar Arguments ..
      CHARACTER          uplo
      INTEGER            info, kb, lda, ldw, n, nb
*     ..
*     .. Array Arguments ..
      INTEGER            ipiv( * )
      REAL               a( lda, * ), w( ldw, * )
*     ..
*
*  =====================================================================
*
*     .. Parameters ..
      REAL               zero, one
      parameter( zero = 0.0e+0, one = 1.0e+0 )
      REAL               eight, sevten
      parameter( eight = 8.0e+0, sevten = 17.0e+0 )
*     ..
*     .. Local Scalars ..
      INTEGER            imax, j, jb, jj, jmax, jp, k, kk, kkw, kp,
     $                   kstep, kw
      REAL               absakk, alpha, colmax, d11, d21, d22, r1,
     $                   rowmax, t
*     ..
*     .. External Functions ..
      LOGICAL            lsame
      INTEGER            isamax
      EXTERNAL           lsame, isamax
*     ..
*     .. External Subroutines ..
      EXTERNAL           scopy, sgemm, sgemv, sscal, sswap
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          abs, max, min, sqrt
*     ..
*     .. Executable Statements ..
*
      info = 0
*
*     Initialize ALPHA for use in choosing pivot block size.
*
      alpha = ( one+sqrt( sevten ) ) / eight
*
      IF( lsame( uplo, 'U' ) ) THEN
*
*        Factorize the trailing columns of A using the upper triangle
*        of A and working backwards, and compute the matrix W = U12*D
*        for use in updating A11
*
*        K is the main loop index, decreasing from N in steps of 1 or 2
*
*        KW is the column of W which corresponds to column K of A
*
         k = n
   10    continue
         kw = nb + k - n
*
*        Exit from loop
*
         IF( ( k.LE.n-nb+1 .AND. nb.LT.n ) .OR. k.LT.1 )
     $      go to 30
*
*        Copy column K of A to column KW of W and update it
*
         CALL scopy( k, a( 1, k ), 1, w( 1, kw ), 1 )
         IF( k.LT.n )
     $      CALL sgemv( 'No transpose', k, n-k, -one, a( 1, k+1 ), lda,
     $                  w( k, kw+1 ), ldw, one, w( 1, kw ), 1 )
*
         kstep = 1
*
*        Determine rows and columns to be interchanged and whether
*        a 1-by-1 or 2-by-2 pivot block will be used
*
         absakk = abs( w( k, kw ) )
*
*        IMAX is the row-index of the largest off-diagonal element in
*        column K, and COLMAX is its absolute value
*
         IF( k.GT.1 ) THEN
            imax = isamax( k-1, w( 1, kw ), 1 )
            colmax = abs( w( imax, kw ) )
         ELSE
            colmax = zero
         END IF
*
         IF( max( absakk, colmax ).EQ.zero ) THEN
*
*           Column K is zero: set INFO and continue
*
            IF( info.EQ.0 )
     $         info = k
            kp = k
         ELSE
            IF( absakk.GE.alpha*colmax ) THEN
*
*              no interchange, use 1-by-1 pivot block
*
               kp = k
            ELSE
*
*              Copy column IMAX to column KW-1 of W and update it
*
               CALL scopy( imax, a( 1, imax ), 1, w( 1, kw-1 ), 1 )
               CALL scopy( k-imax, a( imax, imax+1 ), lda,
     $                     w( imax+1, kw-1 ), 1 )
               IF( k.LT.n )
     $            CALL sgemv( 'No transpose', k, n-k, -one, a( 1, k+1 ),
     $                        lda, w( imax, kw+1 ), ldw, one,
     $                        w( 1, kw-1 ), 1 )
*
*              JMAX is the column-index of the largest off-diagonal
*              element in row IMAX, and ROWMAX is its absolute value
*
               jmax = imax + isamax( k-imax, w( imax+1, kw-1 ), 1 )
               rowmax = abs( w( jmax, kw-1 ) )
               IF( imax.GT.1 ) THEN
                  jmax = isamax( imax-1, w( 1, kw-1 ), 1 )
                  rowmax = max( rowmax, abs( w( jmax, kw-1 ) ) )
               END IF
*
               IF( absakk.GE.alpha*colmax*( colmax / rowmax ) ) THEN
*
*                 no interchange, use 1-by-1 pivot block
*
                  kp = k
               ELSE IF( abs( w( imax, kw-1 ) ).GE.alpha*rowmax ) THEN
*
*                 interchange rows and columns K and IMAX, use 1-by-1
*                 pivot block
*
                  kp = imax
*
*                 copy column KW-1 of W to column KW
*
                  CALL scopy( k, w( 1, kw-1 ), 1, w( 1, kw ), 1 )
               ELSE
*
*                 interchange rows and columns K-1 and IMAX, use 2-by-2
*                 pivot block
*
                  kp = imax
                  kstep = 2
               END IF
            END IF
*
            kk = k - kstep + 1
            kkw = nb + kk - n
*
*           Updated column KP is already stored in column KKW of W
*
            IF( kp.NE.kk ) THEN
*
*              Copy non-updated column KK to column KP
*
               a( kp, k ) = a( kk, k )
               CALL scopy( k-1-kp, a( kp+1, kk ), 1, a( kp, kp+1 ),
     $                     lda )
               CALL scopy( kp, a( 1, kk ), 1, a( 1, kp ), 1 )
*
*              Interchange rows KK and KP in last KK columns of A and W
*
               CALL sswap( n-kk+1, a( kk, kk ), lda, a( kp, kk ), lda )
               CALL sswap( n-kk+1, w( kk, kkw ), ldw, w( kp, kkw ),
     $                     ldw )
            END IF
*
            IF( kstep.EQ.1 ) THEN
*
*              1-by-1 pivot block D(k): column KW of W now holds
*
*              W(k) = U(k)*D(k)
*
*              where U(k) is the k-th column of U
*
*              Store U(k) in column k of A
*
               CALL scopy( k, w( 1, kw ), 1, a( 1, k ), 1 )
               r1 = one / a( k, k )
               CALL sscal( k-1, r1, a( 1, k ), 1 )
            ELSE
*
*              2-by-2 pivot block D(k): columns KW and KW-1 of W now
*              hold
*
*              ( W(k-1) W(k) ) = ( U(k-1) U(k) )*D(k)
*
*              where U(k) and U(k-1) are the k-th and (k-1)-th columns
*              of U
*
               IF( k.GT.2 ) THEN
*
*                 Store U(k) and U(k-1) in columns k and k-1 of A
*
                  d21 = w( k-1, kw )
                  d11 = w( k, kw ) / d21
                  d22 = w( k-1, kw-1 ) / d21
                  t = one / ( d11*d22-one )
                  d21 = t / d21
                  DO 20 j = 1, k - 2
                     a( j, k-1 ) = d21*( d11*w( j, kw-1 )-w( j, kw ) )
                     a( j, k ) = d21*( d22*w( j, kw )-w( j, kw-1 ) )
   20             continue
               END IF
*
*              Copy D(k) to A
*
               a( k-1, k-1 ) = w( k-1, kw-1 )
               a( k-1, k ) = w( k-1, kw )
               a( k, k ) = w( k, kw )
            END IF
         END IF
*
*        Store details of the interchanges in IPIV
*
         IF( kstep.EQ.1 ) THEN
            ipiv( k ) = kp
         ELSE
            ipiv( k ) = -kp
            ipiv( k-1 ) = -kp
         END IF
*
*        Decrease K and return to the start of the main loop
*
         k = k - kstep
         go to 10
*
   30    continue
*
*        Update the upper triangle of A11 (= A(1:k,1:k)) as
*
*        A11 := A11 - U12*D*U12**T = A11 - U12*W**T
*
*        computing blocks of NB columns at a time
*
         DO 50 j = ( ( k-1 ) / nb )*nb + 1, 1, -nb
            jb = min( nb, k-j+1 )
*
*           Update the upper triangle of the diagonal block
*
            DO 40 jj = j, j + jb - 1
               CALL sgemv( 'No transpose', jj-j+1, n-k, -one,
     $                     a( j, k+1 ), lda, w( jj, kw+1 ), ldw, one,
     $                     a( j, jj ), 1 )
   40       continue
*
*           Update the rectangular superdiagonal block
*
            CALL sgemm( 'No transpose', 'Transpose', j-1, jb, n-k, -one,
     $                  a( 1, k+1 ), lda, w( j, kw+1 ), ldw, one,
     $                  a( 1, j ), lda )
   50    continue
*
*        Put U12 in standard form by partially undoing the interchanges
*        in columns k+1:n
*
         j = k + 1
   60    continue
         jj = j
         jp = ipiv( j )
         IF( jp.LT.0 ) THEN
            jp = -jp
            j = j + 1
         END IF
         j = j + 1
         IF( jp.NE.jj .AND. j.LE.n )
     $      CALL sswap( n-j+1, a( jp, j ), lda, a( jj, j ), lda )
         IF( j.LE.n )
     $      go to 60
*
*        Set KB to the number of columns factorized
*
         kb = n - k
*
      ELSE
*
*        Factorize the leading columns of A using the lower triangle
*        of A and working forwards, and compute the matrix W = L21*D
*        for use in updating A22
*
*        K is the main loop index, increasing from 1 in steps of 1 or 2
*
         k = 1
   70    continue
*
*        Exit from loop
*
         IF( ( k.GE.nb .AND. nb.LT.n ) .OR. k.GT.n )
     $      go to 90
*
*        Copy column K of A to column K of W and update it
*
         CALL scopy( n-k+1, a( k, k ), 1, w( k, k ), 1 )
         CALL sgemv( 'No transpose', n-k+1, k-1, -one, a( k, 1 ), lda,
     $               w( k, 1 ), ldw, one, w( k, k ), 1 )
*
         kstep = 1
*
*        Determine rows and columns to be interchanged and whether
*        a 1-by-1 or 2-by-2 pivot block will be used
*
         absakk = abs( w( k, k ) )
*
*        IMAX is the row-index of the largest off-diagonal element in
*        column K, and COLMAX is its absolute value
*
         IF( k.LT.n ) THEN
            imax = k + isamax( n-k, w( k+1, k ), 1 )
            colmax = abs( w( imax, k ) )
         ELSE
            colmax = zero
         END IF
*
         IF( max( absakk, colmax ).EQ.zero ) THEN
*
*           Column K is zero: set INFO and continue
*
            IF( info.EQ.0 )
     $         info = k
            kp = k
         ELSE
            IF( absakk.GE.alpha*colmax ) THEN
*
*              no interchange, use 1-by-1 pivot block
*
               kp = k
            ELSE
*
*              Copy column IMAX to column K+1 of W and update it
*
               CALL scopy( imax-k, a( imax, k ), lda, w( k, k+1 ), 1 )
               CALL scopy( n-imax+1, a( imax, imax ), 1, w( imax, k+1 ),
     $                     1 )
               CALL sgemv( 'No transpose', n-k+1, k-1, -one, a( k, 1 ),
     $                     lda, w( imax, 1 ), ldw, one, w( k, k+1 ), 1 )
*
*              JMAX is the column-index of the largest off-diagonal
*              element in row IMAX, and ROWMAX is its absolute value
*
               jmax = k - 1 + isamax( imax-k, w( k, k+1 ), 1 )
               rowmax = abs( w( jmax, k+1 ) )
               IF( imax.LT.n ) THEN
                  jmax = imax + isamax( n-imax, w( imax+1, k+1 ), 1 )
                  rowmax = max( rowmax, abs( w( jmax, k+1 ) ) )
               END IF
*
               IF( absakk.GE.alpha*colmax*( colmax / rowmax ) ) THEN
*
*                 no interchange, use 1-by-1 pivot block
*
                  kp = k
               ELSE IF( abs( w( imax, k+1 ) ).GE.alpha*rowmax ) THEN
*
*                 interchange rows and columns K and IMAX, use 1-by-1
*                 pivot block
*
                  kp = imax
*
*                 copy column K+1 of W to column K
*
                  CALL scopy( n-k+1, w( k, k+1 ), 1, w( k, k ), 1 )
               ELSE
*
*                 interchange rows and columns K+1 and IMAX, use 2-by-2
*                 pivot block
*
                  kp = imax
                  kstep = 2
               END IF
            END IF
*
            kk = k + kstep - 1
*
*           Updated column KP is already stored in column KK of W
*
            IF( kp.NE.kk ) THEN
*
*              Copy non-updated column KK to column KP
*
               a( kp, k ) = a( kk, k )
               CALL scopy( kp-k-1, a( k+1, kk ), 1, a( kp, k+1 ), lda )
               CALL scopy( n-kp+1, a( kp, kk ), 1, a( kp, kp ), 1 )
*
*              Interchange rows KK and KP in first KK columns of A and W
*
               CALL sswap( kk, a( kk, 1 ), lda, a( kp, 1 ), lda )
               CALL sswap( kk, w( kk, 1 ), ldw, w( kp, 1 ), ldw )
            END IF
*
            IF( kstep.EQ.1 ) THEN
*
*              1-by-1 pivot block D(k): column k of W now holds
*
*              W(k) = L(k)*D(k)
*
*              where L(k) is the k-th column of L
*
*              Store L(k) in column k of A
*
               CALL scopy( n-k+1, w( k, k ), 1, a( k, k ), 1 )
               IF( k.LT.n ) THEN
                  r1 = one / a( k, k )
                  CALL sscal( n-k, r1, a( k+1, k ), 1 )
               END IF
            ELSE
*
*              2-by-2 pivot block D(k): columns k and k+1 of W now hold
*
*              ( W(k) W(k+1) ) = ( L(k) L(k+1) )*D(k)
*
*              where L(k) and L(k+1) are the k-th and (k+1)-th columns
*              of L
*
               IF( k.LT.n-1 ) THEN
*
*                 Store L(k) and L(k+1) in columns k and k+1 of A
*
                  d21 = w( k+1, k )
                  d11 = w( k+1, k+1 ) / d21
                  d22 = w( k, k ) / d21
                  t = one / ( d11*d22-one )
                  d21 = t / d21
                  DO 80 j = k + 2, n
                     a( j, k ) = d21*( d11*w( j, k )-w( j, k+1 ) )
                     a( j, k+1 ) = d21*( d22*w( j, k+1 )-w( j, k ) )
   80             continue
               END IF
*
*              Copy D(k) to A
*
               a( k, k ) = w( k, k )
               a( k+1, k ) = w( k+1, k )
               a( k+1, k+1 ) = w( k+1, k+1 )
            END IF
         END IF
*
*        Store details of the interchanges in IPIV
*
         IF( kstep.EQ.1 ) THEN
            ipiv( k ) = kp
         ELSE
            ipiv( k ) = -kp
            ipiv( k+1 ) = -kp
         END IF
*
*        Increase K and return to the start of the main loop
*
         k = k + kstep
         go to 70
*
   90    continue
*
*        Update the lower triangle of A22 (= A(k:n,k:n)) as
*
*        A22 := A22 - L21*D*L21**T = A22 - L21*W**T
*
*        computing blocks of NB columns at a time
*
         DO 110 j = k, n, nb
            jb = min( nb, n-j+1 )
*
*           Update the lower triangle of the diagonal block
*
            DO 100 jj = j, j + jb - 1
               CALL sgemv( 'No transpose', j+jb-jj, k-1, -one,
     $                     a( jj, 1 ), lda, w( jj, 1 ), ldw, one,
     $                     a( jj, jj ), 1 )
  100       continue
*
*           Update the rectangular subdiagonal block
*
            IF( j+jb.LE.n )
     $         CALL sgemm( 'No transpose', 'Transpose', n-j-jb+1, jb,
     $                     k-1, -one, a( j+jb, 1 ), lda, w( j, 1 ), ldw,
     $                     one, a( j+jb, j ), lda )
  110    continue
*
*        Put L21 in standard form by partially undoing the interchanges
*        in columns 1:k-1
*
         j = k - 1
  120    continue
         jj = j
         jp = ipiv( j )
         IF( jp.LT.0 ) THEN
            jp = -jp
            j = j - 1
         END IF
         j = j - 1
         IF( jp.NE.jj .AND. j.GE.1 )
     $      CALL sswap( j, a( jp, 1 ), lda, a( jj, 1 ), lda )
         IF( j.GE.1 )
     $      go to 120
*
*        Set KB to the number of columns factorized
*
         kb = k - 1
*
      END IF
      return
*
*     End of SLASYF
*
      END
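A minimal driver sketch (not part of the LAPACK distribution; in practice SSYTRF is the intended entry point) showing the argument conventions for a direct call:

      PROGRAM DEMO
*     A sketch only: build a small symmetric matrix in its upper
*     triangle, then partially factor it with SLASYF.
      INTEGER            N, NB
      PARAMETER          ( N = 4, NB = 2 )
      REAL               A( N, N ), W( N, NB )
      INTEGER            IPIV( N ), KB, INFO, I, J
      DO 20 J = 1, N
         DO 10 I = 1, J
            A( I, J ) = REAL( I + J )
   10    CONTINUE
         A( J, J ) = A( J, J ) + REAL( N )
   20 CONTINUE
      CALL SLASYF( 'U', N, NB, KB, A, N, IPIV, W, N, INFO )
*     KB reports how many columns were actually factored; INFO > 0
*     flags an exactly zero diagonal block D(k,k).
      WRITE( *, * ) 'KB =', KB, ', INFO =', INFO
      END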
Homework Help: Find the curve given the tangent
Jun 5, 2010 #1
1. The problem statement, all variables and given/known data
Given that the tangent to the curve [itex] c(t) [/itex] at any point on the curve is [itex] T(t) = (-sin(t), cos(t) )[/itex], find [itex] c(t) [/itex] if the curve passes through the point [itex] (0,0) [/itex].
3. The attempt at a solution
I try to let
[itex] c(t) = ( x(t), y(t) ) [/itex]
Then
[itex] c'(t) = ( x'(t), y'(t) ) [/itex]
[itex]| c'(t) | = \sqrt{[x'(t)]^2 + [y'(t)]^2 } [/itex]
And
[itex] T(t) = \frac{c'(t)}{|c'(t)|} [/itex]
However, this is complicated and consequently I am not sure how to solve it. I am also not sure how to "use" the given point, since (0, 0) corresponds to the values of x and y rather than to t.
Thanks.
Jun 5, 2010 #2
rock.freak667
Homework Helper
If you had c'(t) = <2t, 1>, then c(t) = <t^2+A, t+B>, where A, B are constants.
But you have
c'(t)/|c'(t)| = <-sin t, cos t>
so what is c(t) ? (note that |c'(t)| is just a constant)
At (0,0), t=0. So what is c(t) now?
Jun 5, 2010 #3
c(t) = (cos(t) + C, sin(t) + K)
At t = 0, the point is (0,0).
So x(0) = 1 + C = 0, giving C = -1.
And y(0) = 0 + K = 0, giving K = 0.
So c(t) = (cos(t) - 1, sin(t))?
Jun 5, 2010 #4
rock.freak667
Homework Helper
Yes, but you forgot about |c'(t)|, which is the distance from the center to any tangent line (the circle's radius). In this case it is just the same as |T(t)| = 1, so the answer is unchanged.
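As a quick check: [itex] c'(t) = (-\sin t, \cos t) [/itex] gives [itex] |c'(t)| = \sqrt{\sin^2 t + \cos^2 t} = 1 [/itex], so [itex] T(t) = c'(t) [/itex] exactly, and integrating componentwise gives [itex] c(t) = (\cos t - 1, \sin t) [/itex], which satisfies [itex] c(0) = (0, 0) [/itex].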
#!/usr/bin/env python3.7
import argparse
import math
import heapq
import importlib
import itertools
import os.path
import pickle
import pkgutil
import random
import sys
import time
import traceback
from collections import defaultdict, namedtuple
from dataclasses import dataclass, field
from typing import List
try:
import requests
from lxml import html
except ImportError:
print("Unable to import dependencies for scraping. Disabling SE scraping.", file=sys.stderr)
requests = None
html = None
try:
from tabulate import tabulate
except ImportError:
print("tabulate not found. Using fallback.", file=sys.stderr)
def tabulate(rows, headers, **_):
sizes = []
for i in range(len(rows[0])):
sizes.append(max(map(lambda x: len(str(x[i]))+1,
rows+[headers])))
def make_len(val, size, blank):
val = str(val)
return val + blank*(size-len(val))
def line(row, sep='|', blank=' '):
return sep.join(make_len(row[i], sizes[i], blank)
for i in range(len(row)))
ret = ''
ret += line(headers) + '\n'
ret += line(['']*len(sizes), '+', '-') + '\n'
for i in rows:
ret += line(i) + '\n'
return ret
# Logging constants
MSG_COLORS = defaultdict(
lambda: '\x1b[37m', # default value
tourney='\x1b[94m',
major='\x1b[95m',
minor='\x1b[93m',
good='\x1b[32m',
bad='\x1b[91m',
warning='\x1b[33m',
error='\x1b[31m',
info='\x1b[37m',
debug='\x1b[90m',
score='\x1b[36m',
final='\x1b[96m',
pool='\x1b[96m',
winner='\x1b[92m',
)
MSG_TYPES = set(MSG_COLORS)
def system_color_support():
import platform
if platform.system() == 'Windows':
if platform.release() == '10':
try:
import winreg
top_key = winreg.ConnectRegistry(None, winreg.HKEY_CURRENT_USER)
console = winreg.OpenKey(top_key, 'Console')
ansi_on, rtype = winreg.QueryValueEx(console, 'VirtualTerminalLevel')
return bool(ansi_on)
except (ImportError, FileNotFoundError):
print(
"HKEY_CURRENT_USER\\Console\\VirtualTerminalLevel is missing from the"
" registry. Colors will be disabled.",
file=sys.stderr
)
return False
else:
return True
else:
return False
else:
return 'xterm' in os.environ.get('TERM', '')
if sys.stdin.isatty() and system_color_support():
CLEAR_COLOR = '\x1b[0m'
MSG_COLORS['seed'] = '\x1b[94m'
else:
MSG_COLORS = defaultdict(str)
CLEAR_COLOR = ''
LOG_END = CLEAR_COLOR + '\n'
LOG_SUPPRESS = set()
def exception(message=None):
if message:
print(f"{MSG_COLORS['error']}{message}", file=sys.stdout)
else:
print(MSG_COLORS['error'], end='', file=sys.stdout)
traceback.print_exc()
print(CLEAR_COLOR, end='', file=sys.stdout)
ALL = type('ALL', (), {'__contains__': lambda s,x: True, 'add': lambda s,x: None, 'update': lambda s, *_, **__: None})()
# Name Pool (for adventurers)
FIRST_NAMES = [
'Eddard', 'Rob', 'Jon', 'Sansa', 'Theon', 'Arya', 'Brandon', 'Richard',
'Hodor', 'Jaime', 'Cersei', 'Tyrion', 'Tywin', 'Robert', 'Joffrey',
'Tommen', 'Dany', 'Samwell', 'Marjorie', 'Stannis', 'Peter', 'Jora',
'Bilbo', 'Frodo', 'Sam', 'Legolas', 'Gimley', 'Gandalf', 'Ned', 'Albert',
'Lyn', 'Eliwood', 'Hector', 'Guy', 'Kent', 'Dorcas', 'Fiora', 'Ike',
'Marth', 'Roy', 'Lucina', 'Corrin', 'Robin', 'Chrom', 'Anna', 'Ramsey',
'Alexander', 'James', 'John', 'Jacob', 'Deborah', 'Rebecca', 'Willard',
'Zeus', 'Athena', 'Apollo', 'Diana', 'Juno', 'Hera', 'Icarus', 'Samson',
'Chell', 'Gordon', 'Samus', 'Link', 'Edward', 'Alphonse', 'Winry', 'Fox',
'Mario', 'Luigi', 'Ash', 'Brock', 'Misty', 'Winston', 'Torbjorn', 'Angela',
'Kirby', 'Masahiro', 'Shigeru', 'Lucy', 'Freddie', 'Patrick', 'Aerith',
'Cloud', 'Tifa', 'Barret', 'Red', 'Blue', 'Gary', 'Chara', 'Usagi',
'Ajna', 'Morgan', 'Steve', 'Harry', 'Jack', 'Homer', 'Bart', 'Lisa',
'Elsa', 'Ana', 'Emma', 'Regina', 'Mary', 'Margaret', 'Pit', 'Brad',
'Sonja', 'Ryu', 'Ken', 'Olivia', 'Major', 'Ron', 'Quinn', 'Elmer',
# Signing off
'Justin'
]
LAST_NAMES = [
'Stark', 'Barathean', 'Lannister', 'Snow', 'Tarley', 'Grayjoy', 'Bolton',
'Stormborn', 'Targaryen', 'Balish', 'Mormant', 'Baggins', 'Churchill',
'Freeman', 'Aran', 'Elric', 'Rockbell', 'McCloud', 'Lombardi', 'Smith',
'Lindholm', 'Ketchum', 'Sakurai', 'Miyamoto', 'Heartfilia', 'Mercury',
'Oak', 'Elm', 'Birch', 'Tsukino', 'Strife', 'Lockheart', 'Jackson',
'Potter', 'Sparrow', 'Simpson', 'Flanders', 'Young', 'Einstein', 'Swan',
'Parker', 'Harris', 'Moore', 'Barnes', 'Finley', 'Pitt', 'Ridley'
]
NAME_SUFFIXES = [
'I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII', 'IX', 'X', 'XIII',
'Jr.', 'Sr.', 'PhD', 'MD', 'DDS'
]
MONIKERS = [
'the Great', 'the Smuggler', 'the Cat Burglar', 'the Insomniac',
'the Forgettable', 'the Orphan', 'the Wizard', 'the Lazy',
'the Untamed', 'the Well-armed', 'the Pirate', 'the Unimportant',
'the Hero', 'the Unkempt', 'the Distasteful', 'the Dog Whisperer',
'the Peasant', 'the Impaler', 'of Arendale', 'the Simpleton'
]
# lol Cloud McCloud is possible.
# === Data structures for the game ===
class Treasure(namedtuple('Treasure', ['name', 'value', 'weight'])):
def __str__(self):
return f"{self.name} (${self.value}, {self.weight}kg)"
class RoomState(
namedtuple(
'RoomState',
['room', 'treasures', 'players', 'inventory', 'stamina']
)
):
def __str__(self):
builder = [
f"Room #{self.room}",
" Treasures:"
]
builder.extend(
f" {treasure}"
for treasure in self.treasures
)
builder.append(" Other Players:")
builder.extend(
f" {player}"
for player in self.players
)
builder.append(" Your Inventory:")
builder.extend(
f" {treasure}"
for treasure in self.inventory
)
builder.append(f" Stamina: {self.stamina}")
return '\n'.join(builder)
@property
def carry_weight(self):
return sum(treasure.weight for treasure in self.inventory)
@property
def total_value(self):
return sum(treasure.value for treasure in self.inventory)
Move = namedtuple('Move', ['direction'])
Take = namedtuple('Take', ['treasure', 'bid'])
Drop = namedtuple('Drop', ['treasure'])
# === Adventurers ===
class Adventurer:
def __init__(self, name, random):
self.name = name
self.random = random
def get_action(self, state):
raise NotImplementedError()
def enter_ruins(self):
pass
class Drunkard(Adventurer):
def get_action(self, state):
move_cost = 10 + int(math.ceil(state.carry_weight / 5))
if state.stamina // move_cost <= state.room + 1:
return 'previous'
options = ['next']
if state.room > 1:
options.append('previous')
if state.treasures:
options += ['take'] * 5
action = self.random.choice(options)
if action == 'take':
which = self.random.randrange(len(state.treasures))
treasure = state.treasures[which]
if treasure.weight + state.carry_weight > 50: # it doesn't fit
return 'drop', self.random.randrange(len(state.inventory))
else:
return 'take', which, treasure.weight + (self.random.randrange(5) if state.players else 0)
else:
return action
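# An example bot (an illustrative sketch, not part of the original
# controller) showing the same action protocol Drunkard uses: return
# 'next'/'previous' to move, ('take', index, bid) to grab a treasure, or
# ('drop', index) to drop one.
class GreedyGrabber(Adventurer):
    def get_action(self, state):
        move_cost = 10 + int(math.ceil(state.carry_weight / 5))
        # Keep enough stamina to walk back out through every room.
        if state.stamina <= move_cost * (state.room + 1):
            return 'previous'
        # Consider only treasures that fit under the 50kg carry limit.
        affordable = [
            (i, t) for i, t in enumerate(state.treasures)
            if t.weight + state.carry_weight <= 50
        ]
        if affordable:
            index, best = max(affordable, key=lambda pair: pair[1].value)
            # Bid the minimum (the weight); bidding the minimum risks
            # losing contested grabs when other players are in the room.
            return 'take', index, best.weight
        return 'next'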
# === Player Information ===
@dataclass
class Player:
name: str
bot: Adventurer
room: int = 1
stamina: int = 1000
treasures: List[Treasure] = field(default_factory=list)
def get_action(self, state):
if not self.active:
return None
try:
raw_action = self.bot.get_action(state)
except Exception as e:
exception(f"Exception from {self}: {str(e)}")
return None
try:
if raw_action == 'next':
return Move(1)
elif raw_action == 'previous':
return Move(-1)
else:
atype, *args = raw_action
if atype == 'take':
return Take(*args)
elif atype == 'drop':
return Drop(*args)
except TypeError:
exception(f"Invalid action from {self}: {raw_action}")
return None
@property
def carry_weight(self):
return sum(treasure.weight for treasure in self.treasures)
@property
def total_value(self):
return sum(treasure.value for treasure in self.treasures)
@property
def active(self):
return self.stamina > 0 and self.room > 0
@property
def alive(self):
return self.stamina > 0 or self.room == 0
def __str__(self):
return f"{self.name} ({type(self.bot).__name__})"
class Ruins:
pause_on_death = []
def __init__(self, *adventurers, seed=None):
assert adventurers
if seed is None:
seed = random.getrandbits(6969)
self._seed_obj = [adv.__name__ for adv in adventurers], seed
self._replay_saved = False
self.random = random.Random(seed)
# create a separate random instance for flavor so that deaths and other
# flavorful events don't interfere with treasure generation
self.flavor_rand = random.Random(self.random.getrandbits(420))
self.treasure_num = itertools.count(1)
self.players = {
name: Player(name, adventurer(name, self.new_seed()))
for name, adventurer in (
(self.generate_name(), adventurer)
for adventurer in adventurers
)
}
self.rooms = [self.generate_room(1)]
self.turn_number = 0
self.complete = False
@classmethod
def from_replay(cls, replay_file, candidates):
with open(replay_file, 'rb') as f:
adv_names, seed = pickle.load(f)
cand = {
botclass.__name__: botclass
for botclass in [*candidates, Drunkard]
}
adventurers = [cand[name] for name in adv_names]
return cls(*adventurers, seed=seed)
def save_replay(self, replay_file):
if not self._replay_saved:
with open(replay_file, 'wb') as f:
pickle.dump(self._seed_obj, f)
self._replay_saved = True
def new_seed(self):
return random.Random(self.random.getrandbits(744))
def generate_name(self):
r = self.flavor_rand.random()
parts = [self.flavor_rand.choice(FIRST_NAMES)]
if r < 0.75:
parts.append(self.flavor_rand.choice(LAST_NAMES))
if r < 0.15:
parts.append(self.flavor_rand.choice(NAME_SUFFIXES))
else:
parts.append(self.flavor_rand.choice(MONIKERS))
return ' '.join(parts)
def ndr(self, n, r):
return sum(self.random.randint(1, r) for _ in range(n))
def generate_treasure(self, room):
weight = max(1, self.ndr(2, 6) - 2)
value = self.ndr(1, 10 * weight) + self.ndr(2, 5 * room + 10)
return Treasure(f"Treasure #{next(self.treasure_num):03}", value, weight)
def generate_room(self, room):
n_treasures = self.random.randint(room // 3 + 3, room // 2 + 5)
return [self.generate_treasure(room) for _ in range(n_treasures)]
def ensure_room(self, room):
while len(self.rooms) < room:
self.rooms.append(self.generate_room(len(self.rooms) + 1))
def trap(self):
return self.flavor_rand.choice([
"was sliced in half by a swinging blade trap.",
"fell into a pit of spikes.",
"was crushed by a boulder.",
"was eaten by a wild shriekbat.",
"was shot by a crossbow trap.",
"fell into a bottomless pit.",
"was devoured by a mimic.",
"was incinerated by a fire trap.",
"got sucked into a dimensional vortex.",
"mysteriously vanished.",
"was flung into a pool of acid.",
"was stung by a giant bee.",
"was absorbed by a gelatinous monster.",
"was bitten by a swarm of venomous snakes.",
"was decapitated by a sword trap"
])
def kill(self, player, message):
self.gamelog(player, message, type='bad')
player.stamina = 0
if player.treasures:
self.rooms[player.room - 1] += player.treasures
self.gamelog(f"{player.name} dropped these items into room {player.room}:", type='debug')
for treasure in player.treasures:
self.gamelog(treasure, type='debug')
player.treasures = []
if type(player.bot).__name__ in self.pause_on_death:
if self._replay_saved:
input('Press enter to continue...')
else:
filename = input('Save a replay? (enter a name) ')
if filename:
self.save_replay(filename + '.seed')
if input('Exit? ').lower().startswith('y'):
sys.exit(1)
def gamelog(self, *message, type='info', end='', **kwargs):
if type in LOG_SUPPRESS:
return
if self.complete:
prefix = 'Game End'
elif self.turn_number == 0:
prefix = 'Pregame'
else:
prefix = f"Turn {self.turn_number:03}"
print(f"{MSG_COLORS[type]}[{prefix}]", *message, end=(LOG_END+end), **kwargs)
# def gamelog_lines(self, lines, type='info'):
# if type in LOG_SUPPRESS:
# return
# if self.complete:
# prefix = 'Game End'
# elif self.turn_number == 0:
# prefix = 'Pregame'
# else:
# prefix = f"Turn {self.turn_number:03}"
# print(MSG_COLORS[type], end='')
# for line in lines:
# print(f"[{prefix}] {line}")
# print(CLEAR_COLOR, end='')
def snapshot(self, player):
return RoomState(
player.room,
list(self.rooms[player.room - 1]),
[
other.name
for other in self.players.values()
if other is not player and other.room == player.room
],
list(player.treasures),
player.stamina
)
def turn(self):
self.turn_number += 1
self.gamelog("Turn", self.turn_number, "begins!", type='minor')
bids = defaultdict(list)
drops = defaultdict(list)
kill_later = []
actions = [ # Actions must resolve simultaneously
(player, player.get_action(self.snapshot(player)))
for player in self.players.values()
if player.active
]
for player, action in actions:
self.gamelog(player, action, type='debug')
if action is None:
kill_later.append((player, f"{self.trap()} (Invalid action.)"))
continue
elif isinstance(action, Move):
cost = 10 + int(math.ceil(player.carry_weight / 5))
if player.stamina >= cost:
player.room += action.direction
player.stamina -= cost
self.ensure_room(player.room)
if player.room > 0:
if player.stamina == 0:
kill_later.append((
player,
f"collapsed in the doorway to room #{player.room}"
" and died of exhaustion"
))
else:
self.gamelog(player, f"moved into room #{player.room}")
else:
self.gamelog(
player,
f"""exited the ruins with {
player.stamina
} stamina and {
len(player.treasures)
} treasures, totaling ${
player.total_value
} in value, and {
player.carry_weight
}kg in weight.""",
type='minor'
)
else:
kill_later.append((player, "died of exhaustion"))
continue
elif isinstance(action, Take):
treasure, bid = action
try:
bid = int(bid)
except ValueError:
kill_later.append((player, self.trap() + " (Non-integer bid)"))
continue
try:
target = self.rooms[player.room - 1][int(treasure)]
except IndexError:
kill_later.append((player, self.trap() + " (Invalid treasure index)"))
continue
except (TypeError, ValueError):
kill_later.append(
(player, f"{self.trap()} (Non-integer treasure index)")
)
continue
min_bid = target.weight
if bid < min_bid:
kill_later.append((
player,
f"tried to lift {target.name} but {self.trap()} (Bid too low)"
))
elif bid > player.stamina:
kill_later.append((
player,
f"went all out to take {target.name}, but had a heart attack and"
" collapsed. (Bid too high)"
))
elif target.weight + player.carry_weight > 50:
kill_later.append((player, self.trap() + " (Treasure too heavy)"))
else:
bids[player.room, treasure].append((bid, player.name))
player.stamina -= bid
elif isinstance(action, Drop):
# No need to check stamina here because we already know this player
# has at least 1 stamina from the player.active check earlier
player.stamina -= 1
try:
dropped = player.treasures.pop(int(action.treasure))
except (IndexError, TypeError, ValueError):
kill_later.append((
player,
"was bitten by a venomous spider and died moments later. (Invalid drop)"
))
else:
drops[player.room].append(dropped)
self.gamelog(
player,
f"Dropped a treasure into room #{player.room}:",
dropped
)
for (room, index), bidlist in bids.items():
treasure = self.rooms[room - 1][index]
if len(bidlist) == 1: # No competition over treasure
_, player = bidlist[0]
self.players[player].treasures.append(treasure)
self.rooms[room - 1][index] = None
self.gamelog(self.players[player], "took", treasure)
elif len(bidlist) > 1: # Multiple players going for same treasure
bidlist.sort(reverse=True)
if bidlist[0][0] > bidlist[1][0]: # No one tied for first
_, player = bidlist.pop(0)
self.players[player].treasures.append(treasure)
self.rooms[room - 1][index] = None
self.gamelog(self.players[player], "fought hard and took", treasure)
# everyone else is a loser
for _, player in bidlist:
self.gamelog(
self.players[player],
f"attempted to take {treasure.name}, but was met with resistance."
)
for room, items in drops.items():
self.rooms[room - 1] += items
for room in self.rooms:
if None in room:
room[:] = [treasure for treasure in room if treasure]
for player, message in kill_later:
self.kill(player, message)
def run_game(self, tablefmt='presto'):
self.gamelog("A new game begins!", type='major')
self.gamelog("Competitors:")
for player in self.players.values():
self.gamelog(f"* {player}")
try:
player.bot.enter_ruins()
except Exception:
exception(f"Failure to initialize {player.bot}")
self.kill(player, "is dead on arrival.")
while any(player.active for player in self.players.values()):
# input()
self.turn()
self.complete = True
self.gamelog("The game has ended!", type='major')
def ranking_key(player):
player.treasures.sort(key=lambda x: x.value, reverse=True)
return (
player.alive,
player.total_value,
-player.carry_weight,
-len(player.treasures),
*(treasure.value for treasure in player.treasures)
)
ranked = sorted(self.players.values(), key=ranking_key, reverse=True)
n_players = len(ranked)
scores = [
(player, n_players - index if player.alive and player.treasures else 0)
for index, player in enumerate(ranked)
]
self.gamelog(scores[0][0], "won the game", type='good')
self.gamelog(
"Score for this game:\n" +
tabulate(
[
[
player.bot.__class__.__name__,
player.name,
f'${player.total_value}' if player.alive else 'DEAD',
len(player.treasures),
player.carry_weight,
player.stamina,
score,
]
for player, score in scores
],
headers=['Bot Class', 'Character', 'Money', 'Treasures', 'Weight', 'Stamina', 'Score'],
colalign=['left', 'left', 'right', 'right', 'right', 'right', 'right'],
tablefmt=tablefmt
),
type='score'
)
return scores
def run_tournament(
bots,
game_size=10,
pool_games=20,
required_lead=50,
max_final_games=500,
tablefmt='presto',
seed=None
):
rand = random.Random(seed)
def tourneylog(*message, type='tourney', end='', **kwargs):
if type in LOG_SUPPRESS:
return
print(f"{MSG_COLORS[type]}[==TOURNAMENT==]", *message, end=(LOG_END+end), **kwargs)
full_pool = list(bots)
scores = {
bot_class.__name__: 0
for bot_class in full_pool
}
bots_by_name = {
bot_class.__name__: bot_class
for bot_class in full_pool
}
def run_game(bots):
rand.shuffle(bots)
game = Ruins(*bots, seed=rand.getrandbits(1337))
for player, score in game.run_game(tablefmt=tablefmt):
if not isinstance(player.bot, Drunkard):
scores[type(player.bot).__name__] += score
if len(full_pool) > game_size:
tourneylog(
f"Since there are more than {game_size} bots in the tournament,"
" a pool will be run to determine which bots will compete in the final series."
)
game_counts = {bot.__name__: 0 for bot in full_pool} #TEMP
carryover = []
for pool_round in range(pool_games):
tourneylog("Starting round", pool_round + 1, "of the pool")
rand.shuffle(full_pool)
pool = carryover
carryover = []
for bot in full_pool:
while len(pool) >= game_size:
for b in pool[:game_size]:
game_counts[b.__name__] += 1
run_game(pool[:game_size])
pool = pool[game_size:]
if bot in pool:
carryover.append(bot)
else:
pool.append(bot)
carryover += pool
while len(carryover) >= game_size:
for b in carryover[:game_size]:
game_counts[b.__name__] += 1
run_game(carryover[:game_size])
carryover = carryover[game_size:]
tourneylog("Making sure an equal number of games were played by each bot...", type='debug')
for botname, count in game_counts.items():
tourneylog(
botname, 'played', count, 'games.',
type=('debug' if count == pool_games else 'warning')
)
ranked_bots = sorted(scores.items(), key=lambda x: x[1], reverse=True)
tourneylog("Results from pool series:", type='pool')
for line in tabulate(
[
(botname, score, format(score / pool_games, '.03f'))
for botname, score in ranked_bots
],
headers=['Bot Class', 'Score', 'Mean Score'],
tablefmt='presto'
).splitlines():
tourneylog(line, type='pool')
finalists = [
bots_by_name[botname]
for botname, _ in ranked_bots[:game_size]
]
scores = {
bot_class.__name__: 0
for bot_class in finalists
}
else:
finalists = full_pool
if len(finalists) < game_size:
tourneylog(
"Since there aren't enough bots, remaining slots will be filled in with Drunkards",
type='warning'
)
while len(finalists) < game_size:
finalists.append(Drunkard)
finalist_game = 0
while True:
finalist_game += 1
tourneylog(f"Starting game {finalist_game} of the final round.")
run_game(finalists)
if finalist_game >= max_final_games:
tourneylog("Maximum number of finalist games run!", type='warning')
break
if len(scores) < 2:
tourneylog("There aren't enough competitors. Exiting.", type='bad')
return
first, second = heapq.nlargest(2, scores.values())
if first - second >= required_lead:
tourneylog(
f"The first place bot has achieved a {first - second} point lead over the"
" second place bot!"
)
break
ranked_bots = sorted(scores.items(), key=lambda x: x[1], reverse=True)
tourneylog("The tournament has completed successfully!")
tourneylog(
"Final scores of the finalists:\n"
+ tabulate(
[
(botname, score, format(score / finalist_game, '.03f'))
for botname, score in ranked_bots
],
headers=['Bot Class', 'Score', 'Mean Score'],
tablefmt=tablefmt
),
type='final'
)
tourneylog(f"The winner of the tournament is {ranked_bots[0][0]}!", type='winner')
# === META - Loading bots ===
def scrape_page(url):
try:
page = html.fromstring(requests.get(url).text)
except Exception:
exception("Unable to download bots")
sys.exit(2)
for answer in page.xpath("//div[@class='answer']"):
try:
headers = answer.xpath(".//h1")
title = headers[0].text_content()
user = answer.xpath(".//div[@class='user-details']//a")[-1].text
code = answer.xpath(".//pre/code")[0].text
except Exception:
exception("Unable to extract bot from answer")
else:
yield code, title, user
def sanitize(name):
name = '_'.join(name.split())
for i, ch in enumerate(name):
if ch != '_' and not ch.isalnum():
return name[:i]
else:
return name
def download_bots(url, bot_dir):
for code, title, user in scrape_page(url):
try:
module_file = f"{sanitize(user).lower()}__{sanitize(title).lower()}.py"
if not module_file[0].isalpha():
module_file = 'a__' + module_file
with open(os.path.join(bot_dir, module_file), 'w') as f:
f.write(
f"'''{title}\nby {user}\n'''\n"
"from __main__ import Adventurer\n"
"print = lambda *_, **__: None\n\n"
)
f.write(code)
except Exception:
exception()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'-s', '--seed',
help="Seed to use"
)
parser.add_argument(
'-r', '--replay',
help="Run from a replay file."
)
parser.add_argument(
'-d',
'--bot-dir',
default='ruins_bots'
)
if requests and html:
parser.add_argument(
'--url',
help="Scrape bot code from StackExchange first"
)
else:
parser.set_defaults(
url=None
)
parser.add_argument(
'-1', '--single',
action='store_true',
help="Run a single game instead of a tournament."
" This will not limit the maximum number of adventurers in the ruins"
" or fill in empty slots with Drunkards."
)
parser.add_argument(
'-f', '--tablefmt', '--fmt',
default='presto',
help="The table format to use for scores"
)
parser.add_argument(
'-p', '--pause-on-death',
nargs='?',
metavar='CLASSNAME',
const=ALL,
default=[],
type=lambda s: s.split(':'),
help="Pause the controller when an adventurer dies. You may also specify a colon-separated list of class names to match against."
)
logmodes = parser.add_mutually_exclusive_group()
logmodes.add_argument(
'--debug',
action='store_true',
help='Show all game log messages.'
)
logmodes.add_argument(
'--score-only',
action='store_true',
help="Suppress all log messages except score"
)
logmodes.add_argument(
'-q', '--quiet',
action='store_true',
help="Suppress unimportant log messages."
)
parser.add_argument(
'-x', '--suppress',
nargs='+',
default=[],
choices=MSG_TYPES,
metavar='TYPE',
help=f"Suppress messages of the given type. (One of: {', '.join(MSG_TYPES)})"
)
logmodes.add_argument(
'-o', '--only',
nargs='+',
choices=MSG_TYPES,
metavar='TYPE',
help="Only show messages of the given types. (Same options as --suppress)"
)
args = parser.parse_args()
if args.debug and args.suppress:
parser.error("Cannot pass --suppress and --debug together.")
if args.only and args.suppress:
parser.error("Cannot pass --suppress and --only together.")
if args.score_only:
LOG_SUPPRESS = MSG_TYPES
if args.single:
LOG_SUPPRESS.discard('score')
else:
LOG_SUPPRESS.discard('final')
elif args.quiet:
LOG_SUPPRESS |= {'minor', 'good', 'info'}
if not args.single:
LOG_SUPPRESS.update({'score', 'major'})
elif args.only:
LOG_SUPPRESS = MSG_TYPES - set(args.only)
if not args.debug:
LOG_SUPPRESS.add('debug')
LOG_SUPPRESS.update(args.suppress)
if 'error' in LOG_SUPPRESS:
exception = lambda *_, **__: None
if args.url:
os.makedirs(args.bot_dir, exist_ok=True)
download_bots(args.url, args.bot_dir)
bot_classes = []
if os.path.isdir(args.bot_dir):
for finder, name, ispkg in pkgutil.walk_packages([os.path.abspath(args.bot_dir)]):
if ispkg:
continue
try:
module = finder.find_module(name).load_module(name)
except:
exception("Recovering from error in import")
else:
for obj in vars(module).values():
if ( obj is not Adventurer
and isinstance(obj, type)
and issubclass(obj, Adventurer)):
bot_classes.append(obj)
Ruins.pause_on_death = args.pause_on_death
if args.replay:
Ruins.from_replay(args.replay, [*bot_classes, Drunkard]).run_game()
else:
if args.seed is None:
args.seed = ''.join(
random.choice('0123456789ABCDEFGHJKLMNPQRSTVWXY') for _ in range(8)
)
print(f"Seed: {MSG_COLORS['seed']}{args.seed}{CLEAR_COLOR}")
if args.single:
Ruins(*bot_classes, seed=args.seed).run_game(tablefmt=args.tablefmt)
else:
run_tournament(bot_classes, tablefmt=args.tablefmt, seed=args.seed)
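# Example invocations (a sketch; assumes the script is saved as ruins.py
# and bot modules live in ./ruins_bots):
#   python3.7 ruins.py --single --debug    # one game, all log messages
#   python3.7 ruins.py --seed 0123ABCD -q  # full tournament, quieter log
#   python3.7 ruins.py --replay saved.seed # re-run a game from a replay file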
How to resolve a PC Matic error on startup?
The internet can be a scary place if your system gets infected by a virus or other malicious software. This is why it is always recommended to keep a good antivirus like PC Matic installed on your PC. Although PC Matic has all the features an ideal antivirus should have, it is also prone to errors and issues, and once in a while every user has to deal with some kind of PC Matic error on startup.
If you are a PC Matic user facing errors while starting up PC Matic, this article will help you enormously. Here, we list some of the most common reasons for these errors and step-by-step methods to get rid of them.
What are the reasons for PC Matic startup problems?
There are many possible reasons behind such issues. Some of them are as follows:
1. Corrupted Windows: The most common reason behind this type of error is a corrupted or incomplete installation of Windows.
2. Corrupted or missing drivers: These errors can also occur due to a corrupted driver used by the PC Matic software.
3. Existing virus or malware on your PC: Some viruses are programmed to block the startup of antivirus software. This might be why you are facing PC Matic startup issues.
How to fix PC Matic startup problems?
The steps you should follow in order to get rid of these problems are as follows:
1. Check compatibility: Make sure you are using a PC Matic version compatible with your version of Windows. Also ensure that the correct drivers are installed on your PC; you can visit the manufacturer’s website to find compatible drivers.
2. Install Windows updates: Make sure all Windows updates are installed on your computer.
3. Run a Windows registry repair tool: A registry repair tool will repair a corrupted Windows registry. You can download one online. Please be patient while running it, as the repair process may take some time.
4. Rebuild the WMI repository: If the registry repair tool doesn’t solve the problem, then WMI corruption might be the cause. Follow the steps below to check for WMI corruption.
1. Right-click on the My Computer icon on the desktop.
2. Click on Manage.
3. Expand the “Services and Applications” tree.
4. Right-click on the “WMI Control” item and choose Properties.
5. A new window should appear with the message “Successfully Connected to”.
6. If you can’t find this message, it indicates that the WMI repository is corrupted.
Note: You can also download an automatic WMI repair tool online.
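If you are comfortable with the command line, Windows also includes a winmgmt utility that can check and rebuild the repository directly. Run these one at a time from an elevated Command Prompt (switch availability can vary slightly between Windows versions):
winmgmt /verifyrepository
winmgmt /salvagerepository
winmgmt /resetrepository
The first command reports whether the repository is consistent, the second attempts an in-place repair, and the third rebuilds the repository to its initial state.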
5. Reinstall Windows: If none of the above solutions sorts out the issue, you should reinstall or repair-install Windows.
To get rid of PC Matic errors, all a customer needs to do is dial the PC Matic technical support helpline phone number and talk to customer support specialists.
92bda632
TC
1#include "imager.h"
2#include "imageri.h"
f1ac5027
TC
3
4/*
773bc121 5=head1 NAME
f1ac5027 6
773bc121 7fills.c - implements the basic general fills
f1ac5027 8
773bc121
TC
9=head1 SYNOPSIS
10
11 i_fill_t *fill;
12 i_color c1, c2;
13 i_fcolor fc1, fc2;
14 int combine;
15 fill = i_new_fill_solidf(&fc1, combine);
16 fill = i_new_fill_solid(&c1, combine);
17 fill = i_new_fill_hatchf(&fc1, &fc2, combine, hatch, cust_hash, dx, dy);
18 fill = i_new_fill_hatch(&c1, &c2, combine, hatch, cust_hash, dx, dy);
f576ce7e 19 fill = i_new_fill_image(im, matrix, xoff, yoff, combine);
773bc121
TC
20 i_fill_destroy(fill);
21
22=head1 DESCRIPTION
23
24Implements the basic general fills, which can be used for filling some
25shapes and for flood fills.
26
27Each fill can implement up to 3 functions:
28
29=over
30
31=item fill_with_color
32
33called for fills on 8-bit images. This can be NULL in which case the
34fill_with_colorf function is called.
35
36=item fill_with_fcolor
37
38called for fills on non-8-bit images or when fill_with_color is NULL.
39
40=item destroy
41
42called by i_fill_destroy() if non-NULL, to release any extra resources
43that the fill may need.
44
45=back
46
47fill_with_color and fill_with_fcolor are basically the same function
48except that the first works with lines of i_color and the second with
49lines of i_fcolor.
50
51If the combines member if non-zero the line data is populated from the
52target image before calling fill_with_*color.
53
54fill_with_color needs to fill the I<data> parameter with the fill
55pixels. If combines is non-zero it the fill pixels should be combined
56with the existing data.
57
58The current fills are:
59
60=over
61
62=item *
63
64solid fill
65
66=item *
67
68hatched fill
69
70=item *
71
72fountain fill
73
74=back
75
76Fountain fill is implemented by L<filters.c>.
77
efdc2568
TC
78Other fills that could be implemented include:
79
80=over
81
82=item *
83
84image - an image tiled over the fill area, with an offset either
85horizontally or vertically.
86
87=item *
88
89checkerboard - combine 2 fills in a checkerboard
90
91=item *
92
93combine - combine the levels of 2 other fills based in the levels of
94an image
95
96=item *
97
98regmach - use the register machine to generate colors
99
100=back
101
773bc121
TC
102=over
103
104=cut
f1ac5027
TC
105*/
106
107static i_color fcolor_to_color(i_fcolor *c) {
108 int ch;
109 i_color out;
110
111 for (ch = 0; ch < MAXCHANNELS; ++ch)
112 out.channel[ch] = SampleFTo8(c->channel[ch]);
976efad5
TC
113
114 return out;
f1ac5027
TC
115}
116
117static i_fcolor color_to_fcolor(i_color *c) {
118 int ch;
976efad5 119 i_fcolor out;
f1ac5027
TC
120
121 for (ch = 0; ch < MAXCHANNELS; ++ch)
122 out.channel[ch] = Sample8ToF(c->channel[ch]);
976efad5
TC
123
124 return out;
f1ac5027
TC
125}
126
efdc2568 127/* alpha combine in with out */
f1ac5027
TC
128#define COMBINE(out, in, channels) \
129 { \
130 int ch; \
131 for (ch = 0; ch < (channels); ++ch) { \
132 (out).channel[ch] = ((out).channel[ch] * (255 - (in).channel[3]) \
133 + (in).channel[ch] * (in).channel[3]) / 255; \
134 } \
135 }
136
efdc2568
TC
137/* alpha combine in with out, in this case in is a simple array of
138 samples, potentially not integers - the mult combiner uses doubles
139 for accuracy */
140#define COMBINEA(out, in, channels) \
141 { \
142 int ch; \
143 for (ch = 0; ch < (channels); ++ch) { \
144 (out).channel[ch] = ((out).channel[ch] * (255 - (in)[3]) \
145 + (in)[ch] * (in)[3]) / 255; \
146 } \
147 }
148
f1ac5027
TC
149#define COMBINEF(out, in, channels) \
150 { \
151 int ch; \
152 for (ch = 0; ch < (channels); ++ch) { \
153 (out).channel[ch] = (out).channel[ch] * (1.0 - (in).channel[3]) \
154 + (in).channel[ch] * (in).channel[3]; \
155 } \
156 }
157
efdc2568
TC
158typedef struct
159{
160 i_fill_t base;
161 i_color c;
162 i_fcolor fc;
163} i_fill_solid_t;
164
f1ac5027 165static void fill_solid(i_fill_t *, int x, int y, int width, int channels,
43c5dacb 166 i_color *);
f1ac5027 167static void fill_solidf(i_fill_t *, int x, int y, int width, int channels,
43c5dacb 168 i_fcolor *);
f1ac5027 169static void fill_solid_comb(i_fill_t *, int x, int y, int width, int channels,
43c5dacb 170 i_color *);
f1ac5027 171static void fill_solidf_comb(i_fill_t *, int x, int y, int width,
43c5dacb 172 int channels, i_fcolor *);
f1ac5027
TC
173
174static i_fill_solid_t base_solid_fill =
175{
176 {
177 fill_solid,
178 fill_solidf,
179 NULL,
efdc2568
TC
180 NULL,
181 NULL,
f1ac5027
TC
182 },
183};
184static i_fill_solid_t base_solid_fill_comb =
185{
186 {
187 fill_solid_comb,
188 fill_solidf_comb,
189 NULL,
efdc2568
TC
190 NULL,
191 NULL,
f1ac5027
TC
192 },
193};
194
773bc121
TC
195/*
196=item i_fill_destroy(fill)
197
92bda632
TC
198=category Fills
199
773bc121
TC
200Call to destroy any fill object.
201
202=cut
203*/
204
f1ac5027
TC
205void
206i_fill_destroy(i_fill_t *fill) {
207 if (fill->destroy)
208 (fill->destroy)(fill);
209 myfree(fill);
210}
211
773bc121
TC
212/*
213=item i_new_fill_solidf(color, combine)
214
92bda632
TC
215=category Fills
216
773bc121
TC
217Create a solid fill based on a float color.
218
219If combine is non-zero then alpha values will be combined.
220
221=cut
222*/
223
f1ac5027
TC
224i_fill_t *
225i_new_fill_solidf(i_fcolor *c, int combine) {
226 int ch;
f0960b14 227 i_fill_solid_t *fill = mymalloc(sizeof(i_fill_solid_t)); /* checked 14jul05 tonyc */
f1ac5027 228
141a6114 229 if (combine) {
f1ac5027 230 *fill = base_solid_fill_comb;
efdc2568
TC
231 i_get_combine(combine, &fill->base.combine, &fill->base.combinef);
232 }
f1ac5027
TC
233 else
234 *fill = base_solid_fill;
235 fill->fc = *c;
236 for (ch = 0; ch < MAXCHANNELS; ++ch) {
237 fill->c.channel[ch] = SampleFTo8(c->channel[ch]);
238 }
239
240 return &fill->base;
241}
242
773bc121
TC
243/*
244=item i_new_fill_solid(color, combine)
245
92bda632
TC
246=category Fills
247
248Create a solid fill based on an 8-bit color.
773bc121
TC
249
250If combine is non-zero then alpha values will be combined.
251
252=cut
253*/
254
f1ac5027
TC
255i_fill_t *
256i_new_fill_solid(i_color *c, int combine) {
257 int ch;
f0960b14 258 i_fill_solid_t *fill = mymalloc(sizeof(i_fill_solid_t)); /* checked 14jul05 tonyc */
f1ac5027 259
141a6114 260 if (combine) {
f1ac5027 261 *fill = base_solid_fill_comb;
efdc2568
TC
262 i_get_combine(combine, &fill->base.combine, &fill->base.combinef);
263 }
f1ac5027
TC
264 else
265 *fill = base_solid_fill;
266 fill->c = *c;
267 for (ch = 0; ch < MAXCHANNELS; ++ch) {
268 fill->fc.channel[ch] = Sample8ToF(c->channel[ch]);
269 }
270
271 return &fill->base;
272}

static unsigned char
builtin_hatches[][8] =
{
  {
    /* 1x1 checkerboard */
    0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55,
  },
  {
    /* 2x2 checkerboard */
    0xCC, 0xCC, 0x33, 0x33, 0xCC, 0xCC, 0x33, 0x33,
  },
  {
    /* 4 x 4 checkerboard */
    0xF0, 0xF0, 0xF0, 0xF0, 0x0F, 0x0F, 0x0F, 0x0F,
  },
  {
    /* single vertical lines */
    0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01,
  },
  {
    /* double vertical lines */
    0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11,
  },
  {
    /* quad vertical lines */
    0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55,
  },
  {
    /* single hlines */
    0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  },
  {
    /* double hlines */
    0xFF, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00,
  },
  {
    /* quad hlines */
    0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00,
  },
  {
    /* single / */
    0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
  },
  {
    /* single \ */
    0x80, 0x40, 0x20, 0x10, 0x08, 0x04, 0x02, 0x01,
  },
  {
    /* double / */
    0x11, 0x22, 0x44, 0x88, 0x11, 0x22, 0x44, 0x88,
  },
  {
    /* double \ */
    0x88, 0x44, 0x22, 0x11, 0x88, 0x44, 0x22, 0x11,
  },
  {
    /* single grid */
    0xFF, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
  },
  {
    /* double grid */
    0xFF, 0x88, 0x88, 0x88, 0xFF, 0x88, 0x88, 0x88,
  },
  {
    /* quad grid */
    0xFF, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA,
  },
  {
    /* single dots */
    0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  },
  {
    /* 4 dots */
    0x88, 0x00, 0x00, 0x00, 0x88, 0x00, 0x00, 0x00,
  },
  {
    /* 16 dots */
    0xAA, 0x00, 0xAA, 0x00, 0xAA, 0x00, 0xAA, 0x00,
  },
  {
    /* simple stipple */
    0x48, 0x84, 0x00, 0x00, 0x84, 0x48, 0x00, 0x00,
  },
  {
    /* weave */
    0x55, 0xFD, 0x05, 0xFD, 0x55, 0xDF, 0x50, 0xDF,
  },
  {
    /* single cross hatch */
    0x82, 0x44, 0x28, 0x10, 0x28, 0x44, 0x82, 0x01,
  },
  {
    /* double cross hatch */
    0xAA, 0x44, 0xAA, 0x11, 0xAA, 0x44, 0xAA, 0x11,
  },
  {
    /* vertical lozenge */
    0x11, 0x11, 0x11, 0xAA, 0x44, 0x44, 0x44, 0xAA,
  },
  {
    /* horizontal lozenge */
    0x88, 0x70, 0x88, 0x07, 0x88, 0x70, 0x88, 0x07,
  },
  {
    /* scales overlapping downwards */
    0x80, 0x80, 0x41, 0x3E, 0x08, 0x08, 0x14, 0xE3,
  },
  {
    /* scales overlapping upwards */
    0xC7, 0x28, 0x10, 0x10, 0x7C, 0x82, 0x01, 0x01,
  },
  {
    /* scales overlapping leftwards */
    0x83, 0x84, 0x88, 0x48, 0x38, 0x48, 0x88, 0x84,
  },
  {
    /* scales overlapping rightwards */
    0x21, 0x11, 0x12, 0x1C, 0x12, 0x11, 0x21, 0xC1,
  },
  {
    /* denser stipple */
    0x44, 0x88, 0x22, 0x11, 0x44, 0x88, 0x22, 0x11,
  },
  {
    /* L-shaped tiles */
    0xFF, 0x84, 0x84, 0x9C, 0x94, 0x9C, 0x90, 0x90,
  },
  {
    /* wider stipple */
    0x80, 0x40, 0x20, 0x00, 0x02, 0x04, 0x08, 0x00,
  },
};

typedef struct
{
  i_fill_t base;
  i_color fg, bg;
  i_fcolor ffg, fbg;
  unsigned char hatch[8];
  int dx, dy;
} i_fill_hatch_t;

static void fill_hatch(i_fill_t *fill, int x, int y, int width, int channels,
                       i_color *data);
static void fill_hatchf(i_fill_t *fill, int x, int y, int width, int channels,
                        i_fcolor *data);
static
i_fill_t *
i_new_hatch_low(i_color *fg, i_color *bg, i_fcolor *ffg, i_fcolor *fbg,
                int combine, int hatch, unsigned char *cust_hatch,
                int dx, int dy);

/*
=item i_new_fill_hatch(fg, bg, combine, hatch, cust_hatch, dx, dy)

=category Fills

Creates a new hatched fill with the fg color used for the 1 bits in
the hatch and bg for the 0 bits. If combine is non-zero alpha values
will be combined.

If cust_hatch is non-NULL it should be a pointer to 8 bytes of the
hatch definition, with the high bits to the left.

If cust_hatch is NULL then one of the standard hatches is used.

(dx, dy) are an offset into the hatch which can be used to unalign
adjoining areas, or to align the origin of a hatch with the side of a
filled area.

=cut
*/
i_fill_t *
i_new_fill_hatch(i_color *fg, i_color *bg, int combine, int hatch,
                 unsigned char *cust_hatch, int dx, int dy) {
  return i_new_hatch_low(fg, bg, NULL, NULL, combine, hatch, cust_hatch,
                         dx, dy);
}
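
/*
A sketch of a custom hatch (illustrative only, not part of this file):
cust_hatch is 8 bytes, one per row, high bit leftmost. The fg/bg colors
and the image handle C<im> are assumptions.

  static unsigned char wide_diag[8] =
    { 0xC0, 0x60, 0x30, 0x18, 0x0C, 0x06, 0x03, 0x81 };
  i_fill_t *fill = i_new_fill_hatch(&fg, &bg, 0, 0, wide_diag, 0, 0);
  i_box_cfill(im, 0, 0, 63, 63, fill);
  i_fill_destroy(fill);
*/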

/*
=item i_new_fill_hatchf(fg, bg, combine, hatch, cust_hatch, dx, dy)

=category Fills

Creates a new hatched fill with the fg color used for the 1 bits in
the hatch and bg for the 0 bits. If combine is non-zero alpha values
will be combined.

If cust_hatch is non-NULL it should be a pointer to 8 bytes of the
hatch definition, with the high bits to the left.

If cust_hatch is NULL then one of the standard hatches is used.

(dx, dy) are an offset into the hatch which can be used to unalign
adjoining areas, or to align the origin of a hatch with the side of a
filled area.

=cut
*/
i_fill_t *
i_new_fill_hatchf(i_fcolor *fg, i_fcolor *bg, int combine, int hatch,
                  unsigned char *cust_hatch, int dx, int dy) {
  return i_new_hatch_low(NULL, NULL, fg, bg, combine, hatch, cust_hatch,
                         dx, dy);
}

static void fill_image(i_fill_t *fill, int x, int y, int width, int channels,
                       i_color *data);
static void fill_imagef(i_fill_t *fill, int x, int y, int width, int channels,
                        i_fcolor *data);

struct i_fill_image_t {
  i_fill_t base;
  i_img *src;
  int xoff, yoff;
  int has_matrix;
  double matrix[9];
};

/*
=item i_new_fill_image(im, matrix, xoff, yoff, combine)

=category Fills

Create an image based fill.

matrix is an array of 9 doubles representing a transformation matrix.

xoff and yoff are the offset into the image to start filling from.

=cut
*/
i_fill_t *
i_new_fill_image(i_img *im, double *matrix, int xoff, int yoff, int combine) {
  struct i_fill_image_t *fill = mymalloc(sizeof(*fill)); /* checked 14jul05 tonyc */

  fill->base.fill_with_color = fill_image;
  fill->base.fill_with_fcolor = fill_imagef;
  fill->base.destroy = NULL;

  if (combine) {
    i_get_combine(combine, &fill->base.combine, &fill->base.combinef);
  }
  else {
    fill->base.combine = NULL;
    fill->base.combinef = NULL;
  }
  fill->src = im;
  if (xoff < 0)
    xoff += im->xsize;
  fill->xoff = xoff;
  if (yoff < 0)
    yoff += im->ysize;
  fill->yoff = yoff;
  if (matrix) {
    fill->has_matrix = 1;
    memcpy(fill->matrix, matrix, sizeof(fill->matrix));
  }
  else
    fill->has_matrix = 0;

  return &fill->base;
}
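
/*
A sketch of an image based fill (illustrative only, not part of this
file; the source image handle C<src> is an assumption). As fill_image()
below shows, the matrix maps fill coordinates to source coordinates:
source_x = m[0]*x + m[1]*y + m[2], source_y = m[3]*x + m[4]*y + m[5].

  double matrix[9] = {
    0.5, 0.0, 0.0,
    0.0, 0.5, 0.0,
    0.0, 0.0, 1.0,
  };
  i_fill_t *fill = i_new_fill_image(src, matrix, 0, 0, 1);
*/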


#define T_SOLID_FILL(fill) ((i_fill_solid_t *)(fill))

/*
=back

=head1 INTERNAL FUNCTIONS

=over

=item fill_solid(fill, x, y, width, channels, data)

The 8-bit sample fill function for non-combining solid fills.

=cut
*/
static void
fill_solid(i_fill_t *fill, int x, int y, int width, int channels,
           i_color *data) {
  while (width-- > 0) {
    *data++ = T_SOLID_FILL(fill)->c;
  }
}

/*
=item fill_solidf(fill, x, y, width, channels, data)

The floating sample fill function for non-combining solid fills.

=cut
*/
static void
fill_solidf(i_fill_t *fill, int x, int y, int width, int channels,
            i_fcolor *data) {
  while (width-- > 0) {
    *data++ = T_SOLID_FILL(fill)->fc;
  }
}

/*
=item fill_solid_comb(fill, x, y, width, channels, data)

The 8-bit sample fill function for combining solid fills.

=cut
*/
static void
fill_solid_comb(i_fill_t *fill, int x, int y, int width, int channels,
                i_color *data) {
  i_color c = T_SOLID_FILL(fill)->c;

  while (width-- > 0) {
    *data++ = c;
  }
}

/*
=item fill_solidf_comb(fill, x, y, width, channels, data)

The floating sample fill function for combining solid fills.

=cut
*/
static void
fill_solidf_comb(i_fill_t *fill, int x, int y, int width, int channels,
                 i_fcolor *data) {
  i_fcolor c = T_SOLID_FILL(fill)->fc;

  while (width-- > 0) {
    *data++ = c;
  }
}

/*
=item i_new_hatch_low(fg, bg, ffg, fbg, combine, hatch, cust_hatch, dx, dy)

Implements creation of hatch fill objects.

=cut
*/
static
i_fill_t *
i_new_hatch_low(i_color *fg, i_color *bg, i_fcolor *ffg, i_fcolor *fbg,
                int combine, int hatch, unsigned char *cust_hatch,
                int dx, int dy) {
  i_fill_hatch_t *fill = mymalloc(sizeof(i_fill_hatch_t)); /* checked 14jul05 tonyc */

  fill->base.fill_with_color = fill_hatch;
  fill->base.fill_with_fcolor = fill_hatchf;
  fill->base.destroy = NULL;
  fill->fg = fg ? *fg : fcolor_to_color(ffg);
  fill->bg = bg ? *bg : fcolor_to_color(fbg);
  fill->ffg = ffg ? *ffg : color_to_fcolor(fg);
  fill->fbg = fbg ? *fbg : color_to_fcolor(bg);
  if (combine) {
    i_get_combine(combine, &fill->base.combine, &fill->base.combinef);
  }
  else {
    fill->base.combine = NULL;
    fill->base.combinef = NULL;
  }
  if (cust_hatch) {
    memcpy(fill->hatch, cust_hatch, 8);
  }
  else {
    /* an out of range hatch index falls back to the first hatch */
    if (hatch >= sizeof(builtin_hatches)/sizeof(*builtin_hatches))
      hatch = 0;
    memcpy(fill->hatch, builtin_hatches[hatch], 8);
  }
  fill->dx = dx & 7;
  fill->dy = dy & 7;

  return &fill->base;
}

/*
=item fill_hatch(fill, x, y, width, channels, data)

The 8-bit sample fill function for hatched fills.

=cut
*/
static void fill_hatch(i_fill_t *fill, int x, int y, int width, int channels,
                       i_color *data) {
  i_fill_hatch_t *f = (i_fill_hatch_t *)fill;
  int byte = f->hatch[(y + f->dy) & 7];
  int xpos = (x + f->dx) & 7;
  int mask = 128 >> xpos;

  while (width-- > 0) {
    *data++ = (byte & mask) ? f->fg : f->bg;

    if ((mask >>= 1) == 0)
      mask = 128;
  }
}

/*
=item fill_hatchf(fill, x, y, width, channels, data)

The floating sample fill function for hatched fills.

=cut
*/
static void fill_hatchf(i_fill_t *fill, int x, int y, int width, int channels,
                        i_fcolor *data) {
  i_fill_hatch_t *f = (i_fill_hatch_t *)fill;
  int byte = f->hatch[(y + f->dy) & 7];
  int xpos = (x + f->dx) & 7;
  int mask = 128 >> xpos;

  while (width-- > 0) {
    *data++ = (byte & mask) ? f->ffg : f->fbg;

    if ((mask >>= 1) == 0)
      mask = 128;
  }
}

/* hopefully this will be inlined (it is with -O3 with gcc 2.95.4) */
/* linear interpolation */
static i_color interp_i_color(i_color before, i_color after, double pos,
                              int channels) {
  i_color out;
  int ch;

  pos -= floor(pos);
  for (ch = 0; ch < channels; ++ch)
    out.channel[ch] = (1-pos) * before.channel[ch] + pos * after.channel[ch];
  if (out.channel[3])
    for (ch = 0; ch < channels; ++ch)
      if (ch != 3) {
        int temp = out.channel[ch] * 255 / out.channel[3];
        if (temp > 255)
          temp = 255;
        out.channel[ch] = temp;
      }

  return out;
}

/* hopefully this will be inlined (it is with -O3 with gcc 2.95.4) */
/* linear interpolation */
static i_fcolor interp_i_fcolor(i_fcolor before, i_fcolor after, double pos,
                                int channels) {
  i_fcolor out;
  int ch;

  pos -= floor(pos);
  for (ch = 0; ch < channels; ++ch)
    out.channel[ch] = (1-pos) * before.channel[ch] + pos * after.channel[ch];
  if (out.channel[3])
    for (ch = 0; ch < channels; ++ch)
      if (ch != 3) {
        /* the samples are floating point, so keep the quotient in a
           double rather than truncating it to an int */
        double temp = out.channel[ch] / out.channel[3];
        if (temp > 1.0)
          temp = 1.0;
        out.channel[ch] = temp;
      }

  return out;
}

/*
=item fill_image(fill, x, y, width, channels, data)

The 8-bit sample fill function for image fills.

=cut
*/
static void fill_image(i_fill_t *fill, int x, int y, int width, int channels,
                       i_color *data) {
  struct i_fill_image_t *f = (struct i_fill_image_t *)fill;
  int i = 0;
  i_color *out = data;

  if (f->has_matrix) {
    /* the hard way */
    while (i < width) {
      double rx = f->matrix[0] * (x+i) + f->matrix[1] * y + f->matrix[2];
      double ry = f->matrix[3] * (x+i) + f->matrix[4] * y + f->matrix[5];
      double ix = floor(rx / f->src->xsize);
      double iy = floor(ry / f->src->ysize);
      i_color c[2][2];
      i_color c2[2];
      int dy;

      if (f->xoff) {
        rx += iy * f->xoff;
        ix = floor(rx / f->src->xsize);
      }
      else if (f->yoff) {
        ry += ix * f->yoff;
        iy = floor(ry / f->src->ysize);
      }
      rx -= ix * f->src->xsize;
      ry -= iy * f->src->ysize;

      for (dy = 0; dy < 2; ++dy) {
        if ((int)rx == f->src->xsize-1) {
          i_gpix(f->src, f->src->xsize-1, ((int)ry+dy) % f->src->ysize, &c[dy][0]);
          i_gpix(f->src, 0, ((int)ry+dy) % f->src->ysize, &c[dy][1]);
        }
        else {
          i_glin(f->src, (int)rx, (int)rx+2, ((int)ry+dy) % f->src->ysize,
                 c[dy]);
        }
        c2[dy] = interp_i_color(c[dy][0], c[dy][1], rx, f->src->channels);
      }
      *out++ = interp_i_color(c2[0], c2[1], ry, f->src->channels);
      ++i;
    }
  }
  else {
    /* the easy way */
    /* this should be possible to optimize to use i_glin() */
    while (i < width) {
      int rx = x+i;
      int ry = y;
      int ix = rx / f->src->xsize;
      int iy = ry / f->src->ysize;

      if (f->xoff) {
        rx += iy * f->xoff;
        ix = rx / f->src->xsize;
      }
      else if (f->yoff) {
        ry += ix * f->yoff;
        iy = ry / f->src->ysize;
      }
      rx -= ix * f->src->xsize;
      ry -= iy * f->src->ysize;
      i_gpix(f->src, rx, ry, out);
      ++out;
      ++i;
    }
  }
  if (f->src->channels == 3) {
    /* just set the alpha */
    for (i = 0; i < width; ++i) {
      data->channel[3] = 255;
      data++;
    }
  }
  else if (f->src->channels == 2) {
    /* copy the alpha to channel 3, duplicate the grey value */
    for (i = 0; i < width; ++i) {
      data->channel[3] = data->channel[1];
      data->channel[1] = data->channel[2] = data->channel[0];
      data++;
    }
  }
  else if (f->src->channels == 1) {
    /* set the alpha, duplicate grey */
    for (i = 0; i < width; ++i) {
      data->channel[3] = 255;
      data->channel[1] = data->channel[2] = data->channel[0];
      data++;
    }
  }
}

/*
=item fill_imagef(fill, x, y, width, channels, data)

The floating sample fill function for image fills.

=cut
*/
static void fill_imagef(i_fill_t *fill, int x, int y, int width, int channels,
                        i_fcolor *data) {
  struct i_fill_image_t *f = (struct i_fill_image_t *)fill;
  int i = 0;
  /* as in fill_image() above, fill through a separate pointer so that
     the channel fix-ups below start from the beginning of the row */
  i_fcolor *out = data;

  if (f->has_matrix) {
    /* the hard way */
    while (i < width) {
      double rx = f->matrix[0] * (x+i) + f->matrix[1] * y + f->matrix[2];
      double ry = f->matrix[3] * (x+i) + f->matrix[4] * y + f->matrix[5];
      double ix = floor(rx / f->src->xsize);
      double iy = floor(ry / f->src->ysize);
      i_fcolor c[2][2];
      i_fcolor c2[2];
      int dy;

      if (f->xoff) {
        rx += iy * f->xoff;
        ix = floor(rx / f->src->xsize);
      }
      else if (f->yoff) {
        ry += ix * f->yoff;
        iy = floor(ry / f->src->ysize);
      }
      rx -= ix * f->src->xsize;
      ry -= iy * f->src->ysize;

      for (dy = 0; dy < 2; ++dy) {
        if ((int)rx == f->src->xsize-1) {
          i_gpixf(f->src, f->src->xsize-1, ((int)ry+dy) % f->src->ysize, &c[dy][0]);
          i_gpixf(f->src, 0, ((int)ry+dy) % f->src->ysize, &c[dy][1]);
        }
        else {
          i_glinf(f->src, (int)rx, (int)rx+2, ((int)ry+dy) % f->src->ysize,
                  c[dy]);
        }
        c2[dy] = interp_i_fcolor(c[dy][0], c[dy][1], rx, f->src->channels);
      }
      *out++ = interp_i_fcolor(c2[0], c2[1], ry, f->src->channels);
      ++i;
    }
  }
  else {
    /* the easy way */
    /* this should be possible to optimize to use i_glin() */
    while (i < width) {
      int rx = x+i;
      int ry = y;
      int ix = rx / f->src->xsize;
      int iy = ry / f->src->ysize;

      if (f->xoff) {
        rx += iy * f->xoff;
        ix = rx / f->src->xsize;
      }
      else if (f->yoff) {
        ry += ix * f->yoff;
        iy = ry / f->src->ysize;
      }
      rx -= ix * f->src->xsize;
      ry -= iy * f->src->ysize;
      i_gpixf(f->src, rx, ry, out);
      ++out;
      ++i;
    }
  }
  if (f->src->channels == 3) {
    /* just set the alpha */
    for (i = 0; i < width; ++i) {
      data->channel[3] = 1.0;
      data++;
    }
  }
  else if (f->src->channels == 2) {
    /* copy the alpha to channel 3, duplicate the grey value */
    for (i = 0; i < width; ++i) {
      data->channel[3] = data->channel[1];
      data->channel[1] = data->channel[2] = data->channel[0];
      data++;
    }
  }
  else if (f->src->channels == 1) {
    /* set the alpha, duplicate grey */
    for (i = 0; i < width; ++i) {
      data->channel[3] = 1.0;
      data->channel[1] = data->channel[2] = data->channel[0];
      data++;
    }
  }
}

static void combine_replace(i_color *, i_color *, int, int);
static void combine_replacef(i_fcolor *, i_fcolor *, int, int);
static void combine_alphablend(i_color *, i_color *, int, int);
static void combine_alphablendf(i_fcolor *, i_fcolor *, int, int);
static void combine_mult(i_color *, i_color *, int, int);
static void combine_multf(i_fcolor *, i_fcolor *, int, int);
static void combine_dissolve(i_color *, i_color *, int, int);
static void combine_dissolvef(i_fcolor *, i_fcolor *, int, int);
static void combine_add(i_color *, i_color *, int, int);
static void combine_addf(i_fcolor *, i_fcolor *, int, int);
static void combine_subtract(i_color *, i_color *, int, int);
static void combine_subtractf(i_fcolor *, i_fcolor *, int, int);
static void combine_diff(i_color *, i_color *, int, int);
static void combine_difff(i_fcolor *, i_fcolor *, int, int);
static void combine_darken(i_color *, i_color *, int, int);
static void combine_darkenf(i_fcolor *, i_fcolor *, int, int);
static void combine_lighten(i_color *, i_color *, int, int);
static void combine_lightenf(i_fcolor *, i_fcolor *, int, int);
static void combine_hue(i_color *, i_color *, int, int);
static void combine_huef(i_fcolor *, i_fcolor *, int, int);
static void combine_sat(i_color *, i_color *, int, int);
static void combine_satf(i_fcolor *, i_fcolor *, int, int);
static void combine_value(i_color *, i_color *, int, int);
static void combine_valuef(i_fcolor *, i_fcolor *, int, int);
static void combine_color(i_color *, i_color *, int, int);
static void combine_colorf(i_fcolor *, i_fcolor *, int, int);

static struct i_combines {
  i_fill_combine_f combine;
  i_fill_combinef_f combinef;
} combines[] =
{
  { /* replace */
    combine_replace,
    combine_replacef,
  },
  { /* alpha blend */
    combine_alphablend,
    combine_alphablendf,
  },
  {
    /* multiply */
    combine_mult,
    combine_multf,
  },
  {
    /* dissolve */
    combine_dissolve,
    combine_dissolvef,
  },
  {
    /* add */
    combine_add,
    combine_addf,
  },
  {
    /* subtract */
    combine_subtract,
    combine_subtractf,
  },
  {
    /* diff */
    combine_diff,
    combine_difff,
  },
  {
    /* lighten */
    combine_lighten,
    combine_lightenf,
  },
  {
    /* darken */
    combine_darken,
    combine_darkenf,
  },
  {
    /* hue */
    combine_hue,
    combine_huef,
  },
  {
    /* saturation */
    combine_sat,
    combine_satf,
  },
  {
    /* value */
    combine_value,
    combine_valuef,
  },
  {
    /* color */
    combine_color,
    combine_colorf,
  },
};

/*
=item i_get_combine(combine, color_func, fcolor_func)

=cut
*/

void i_get_combine(int combine, i_fill_combine_f *color_func,
                   i_fill_combinef_f *fcolor_func) {
  /* an out of range combine index falls back to the first entry */
  if (combine < 0 || combine >= sizeof(combines) / sizeof(*combines))
    combine = 0;

  *color_func = combines[combine].combine;
  *fcolor_func = combines[combine].combinef;
}

static void combine_replace(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    *out++ = *in++;
  }
}

static void combine_replacef(i_fcolor *out, i_fcolor *in, int channels, int count) {
  while (count--) {
    *out++ = *in++;
  }
}

static void combine_alphablend(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    COMBINE(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_alphablendf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  while (count--) {
    COMBINEF(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_mult(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    double mult[MAXCHANNELS];
    mult[3] = in->channel[3];
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3)
        mult[ch] = (out->channel[ch] * in->channel[ch]) * (1.0 / 255);
    }
    COMBINEA(*out, mult, channels);
    ++out;
    ++in;
  }
}

static void combine_multf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    i_fcolor c = *in;
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3)
        c.channel[ch] = out->channel[ch] * in->channel[ch];
    }
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_dissolve(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    if (in->channel[3] > rand() * (255.0 / RAND_MAX))
      COMBINE(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_dissolvef(i_fcolor *out, i_fcolor *in, int channels, int count) {
  while (count--) {
    if (in->channel[3] > rand() * (1.0 / RAND_MAX))
      COMBINEF(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_add(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    i_color c = *in;
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3) {
        int total = out->channel[ch] + in->channel[ch];
        if (total > 255)
          total = 255;
        c.channel[ch] = total;
      }
    }
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_addf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    i_fcolor c = *in;
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3) {
        double total = out->channel[ch] + in->channel[ch];
        if (total > 1.0)
          total = 1.0;
        /* store the clamped sum in the work color, as combine_add() does */
        c.channel[ch] = total;
      }
    }
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_subtract(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    i_color c = *in;
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3) {
        int total = out->channel[ch] - in->channel[ch];
        if (total < 0)
          total = 0;
        c.channel[ch] = total;
      }
    }
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_subtractf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    i_fcolor c = *in;
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3) {
        double total = out->channel[ch] - in->channel[ch];
        if (total < 0)
          total = 0;
        c.channel[ch] = total;
      }
    }
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_diff(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    i_color c = *in;
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3)
        c.channel[ch] = abs(out->channel[ch] - in->channel[ch]);
    }
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_difff(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    i_fcolor c = *in;
    for (ch = 0; ch < (channels); ++ch) {
      if (ch != 3)
        c.channel[ch] = fabs(out->channel[ch] - in->channel[ch]);
    }
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_darken(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3 && out->channel[ch] < in->channel[ch])
        in->channel[ch] = out->channel[ch];
    }
    COMBINE(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_darkenf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3 && out->channel[ch] < in->channel[ch])
        in->channel[ch] = out->channel[ch];
    }
    COMBINEF(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_lighten(i_color *out, i_color *in, int channels, int count) {
  int ch;

  while (count--) {
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3 && out->channel[ch] > in->channel[ch])
        in->channel[ch] = out->channel[ch];
    }
    COMBINE(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_lightenf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  int ch;

  while (count--) {
    for (ch = 0; ch < channels; ++ch) {
      if (ch != 3 && out->channel[ch] > in->channel[ch])
        in->channel[ch] = out->channel[ch];
    }
    COMBINEF(*out, *in, channels);
    ++out;
    ++in;
  }
}

static void combine_hue(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    i_color c = *out;
    i_rgb_to_hsv(&c);
    i_rgb_to_hsv(in);
    c.channel[0] = in->channel[0];
    i_hsv_to_rgb(&c);
    c.channel[3] = in->channel[3];
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_huef(i_fcolor *out, i_fcolor *in, int channels, int count) {
  while (count--) {
    i_fcolor c = *out;
    i_rgb_to_hsvf(&c);
    i_rgb_to_hsvf(in);
    c.channel[0] = in->channel[0];
    i_hsv_to_rgbf(&c);
    c.channel[3] = in->channel[3];
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_sat(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    i_color c = *out;
    i_rgb_to_hsv(&c);
    i_rgb_to_hsv(in);
    c.channel[1] = in->channel[1];
    i_hsv_to_rgb(&c);
    c.channel[3] = in->channel[3];
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_satf(i_fcolor *out, i_fcolor *in, int channels, int count) {
  while (count--) {
    i_fcolor c = *out;
    i_rgb_to_hsvf(&c);
    i_rgb_to_hsvf(in);
    c.channel[1] = in->channel[1];
    i_hsv_to_rgbf(&c);
    c.channel[3] = in->channel[3];
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_value(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    i_color c = *out;
    i_rgb_to_hsv(&c);
    i_rgb_to_hsv(in);
    c.channel[2] = in->channel[2];
    i_hsv_to_rgb(&c);
    c.channel[3] = in->channel[3];
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_valuef(i_fcolor *out, i_fcolor *in, int channels,
                           int count) {
  while (count--) {
    i_fcolor c = *out;
    i_rgb_to_hsvf(&c);
    i_rgb_to_hsvf(in);
    c.channel[2] = in->channel[2];
    i_hsv_to_rgbf(&c);
    c.channel[3] = in->channel[3];
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_color(i_color *out, i_color *in, int channels, int count) {
  while (count--) {
    i_color c = *out;
    i_rgb_to_hsv(&c);
    i_rgb_to_hsv(in);
    c.channel[0] = in->channel[0];
    c.channel[1] = in->channel[1];
    i_hsv_to_rgb(&c);
    c.channel[3] = in->channel[3];
    COMBINE(*out, c, channels);
    ++out;
    ++in;
  }
}

static void combine_colorf(i_fcolor *out, i_fcolor *in, int channels,
                           int count) {
  while (count--) {
    i_fcolor c = *out;
    i_rgb_to_hsvf(&c);
    i_rgb_to_hsvf(in);
    c.channel[0] = in->channel[0];
    c.channel[1] = in->channel[1];
    i_hsv_to_rgbf(&c);
    c.channel[3] = in->channel[3];
    COMBINEF(*out, c, channels);
    ++out;
    ++in;
  }
}


/*
=back

=head1 AUTHOR

Tony Cook <[email protected]>

=head1 SEE ALSO

Imager(3)

=cut
*/
Import/Export JSON Specification
Table of Contents
1. Import/Export JSON Specification
2. File naming
3. Content of the json files
1. Performer
2. Studio
3. Scene
4. Image
5. Gallery
4. Files
1. Folder
2. Video file
3. Image file
4. Other files
5. In JSON format
1. performer.json
2. studio.json
3. scene.json
Import/Export JSON Specification
The metadata given to Stash can be exported into JSON format. This structure can be modified, or replicated by other means, and the resulting data can then be imported again, which opens the door to automated scraping of all kinds. The export is a folder structure containing the following folders:
• files
• galleries
• images
• performers
• scenes
• studios
• movies
File naming
When exported, files are named with different formats depending on the object type:
Type Format
Files/Folders <path depth in hex, two character width>.<basename>.<hash>.json
Galleries <first zip filename>.<path hash>.json or <folder basename>.<path hash>.json or <title>.json
Images <title or first file basename>.<hash>.json
Performers <name>.json
Scenes <title or first file basename>.<hash>.json
Studios <name>.json
Movies <name>.json
Note that the file naming is not significant when importing. All json files will be read from the subdirectories.
Content of the json files
The following sections show the values of the corresponding JSON files. If a value is a number, it is written with decimal places (like 29.98 or 50.0), but still as a string. The meaning of most values should be obvious from the previous explanation or from the possible values Stash offers when editing; otherwise a short comment is added.
Unless stated otherwise, JSON values are strings. Each line below stands for one value in the JSON. If a value is a list of objects, the values of that object are shown indented.
If a value is empty in any file, it can be left out of the file entirely. Many files have a created_at and an updated_at, both kept in the following format:
YYYY-MM-DDThh:mm:ssTZD
Example:
"created_at": "2019-05-03T21:36:58+01:00"
Performer
name
url
twitter
instagram
birthdate
death_date
ethnicity
country
hair_color
eye_color
height
weight
measurements
fake_tits
career_length
tattoos
piercings
image (base64 encoding of the image file)
created_at
updated_at
rating (integer)
details
Studio
name
url
image (base64 encoding of the image file)
created_at
updated_at
rating (integer)
details
Scene
title
studio
url
date
rating (integer)
details
performers (list of strings, performers name)
tags (list of strings)
markers
    title
    seconds
    primary_tag
    tags (list of strings)
    created_at
    updated_at
file (not a list, but a single object)
    size (in bytes, an integer with no decimal places)
    duration (in seconds)
    video_codec (example value: h264)
    audio_codec (example value: aac)
    width (integer, in pixels)
    height (integer, in pixels)
    framerate
    bitrate (integer, in bits)
    created_at
    updated_at
Image
title
studio
rating (integer)
performers (list of strings, performers name)
tags (list of strings)
files (list of path strings)
galleries
    zip_files (list of path strings)
    folder_path
    title (for user-created gallery)
created_at
updated_at
Gallery
title
studio
url
date
rating (integer)
details
performers (list of strings, performers name)
tags (list of strings)
zip_files (list of path strings)
folder_path
created_at
updated_at
Files
Folder
zip_file (path to containing zip file)
mod_time
type (= folder)
path
created_at
updated_at
Video file
zip_file (path to containing zip file)
mod_time
type (= video)
path
fingerprints
    type
    fingerprint
size
format
width
height
duration
video_codec
audio_codec
frame_rate
bitrate
interactive (bool)
interactive_speed (integer)
created_at
updated_at
Image file
zip_file (path to containing zip file)
mod_time
type (= image)
path
fingerprints
    type
    fingerprint
size
format
width
height
created_at
updated_at
Other files
zip_file (path to containing zip file)
mod_time
type (= file)
path
fingerprints
    type
    fingerprint
size
created_at
updated_at
In JSON format
For those preferring the json-format, defined here, the following format may be more interesting:
performer.json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://docs.stashapp.cc/in-app-manual/tasks/jsonspec/#performerjson",
"title": "performer",
"description": "A json file representing a performer. The file is named by a MD5 Code.",
"type": "object",
"properties": {
"name": {
"description": "Name of the performer",
"type": "string"
},
"url": {
"description": "URL to website of the performer",
"type": "string"
},
"twitter": {
"description": "Twitter name of the performer",
"type": "string"
},
"instagram": {
"description": "Instagram name of the performer",
"type": "string"
},
"birthdate": {
"description": "Birthdate of the performer. Format is YYYY-MM-DD",
"type": "string"
},
"death_date": {
"description": "Death date of the performer. Format is YYYY-MM-DD",
"type": "string"
},
"ethnicity": {
"description": "Ethnicity of the Performer. Possible values are black, white, asian or hispanic",
"type": "string"
},
"country": {
"description": "Country of the performer",
"type": "string"
},
"hair_color": {
"description": "Hair color of the performer",
"type": "string"
},
"eye_color": {
"description": "Eye color of the performer",
"type": "string"
},
"height": {
"description": "Height of the performer in centimeters",
"type": "string"
},
"weight": {
"description": "Weight of the performer in kilograms",
"type": "string"
},
"measurements": {
"description": "Measurements of the performer",
"type": "string"
},
"fake_tits": {
"description": "Whether performer has fake tits. Possible are Yes or No",
"type": "string"
},
"career_length": {
"description": "The time the performer has been in business. In the format YYYY-YYYY",
"type": "string"
},
"tattoos": {
"description": "Giving a description of Tattoos of the performer if any",
"type": "string"
},
"piercings": {
"description": "Giving a description of Piercings of the performer if any",
"type": "string"
},
"image": {
"description": "Image of the performer, parsed into base64",
"type": "string"
},
"created_at": {
"description": "The time this performers data was added to the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"updated_at": {
"description": "The time this performers data was last changed in the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"details": {
"description": "Description of the performer",
"type": "string"
}
},
"required": ["name", "ethnicity", "image", "created_at", "updated_at"]
}
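For illustration, a performer file matching the schema above might look like this (all values, including the truncated base64 image, are placeholders):
{
  "name": "Jane Example",
  "ethnicity": "white",
  "country": "USA",
  "birthdate": "1990-01-31",
  "image": "iVBORw0KGgoAAAANSUhEUg...",
  "created_at": "2019-05-03T21:36:58+01:00",
  "updated_at": "2019-05-03T21:36:58+01:00"
}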
studio.json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://docs.stashapp.cc/in-app-manual/tasks/jsonspec/#studiojson",
"title": "studio",
"description": "A json file representing a studio. The file is named by a MD5 Code.",
"type": "object",
"properties": {
"name": {
"description": "Name of the studio",
"type": "string"
},
"url": {
"description": "URL to the studios websites",
"type": "string"
},
"image": {
"description": "Logo of the studio, parsed into base64",
"type": "string"
},
"created_at": {
"description": "The time this studios data was added to the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"updated_at": {
"description": "The time this studios data was last changed in the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"details": {
"description": "Description of the studio",
"type": "string"
}
},
"required": ["name", "image", "created_at", "updated_at"]
}
scene.json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "https://docs.stashapp.cc/in-app-manual/tasks/jsonspec/#scenejson",
"title": "scene",
"description": "A json file representing a scene. The file is named by the MD5 Code of the file its data is referring to.",
"type": "object",
"properties": {
"title": {
"description": "Title of the scene",
"type": "string"
},
"studio": {
"description": "The name of the studio that produced that scene",
"type": "string"
},
"url": {
"description": "The url to the scenes original source",
"type": "string"
},
"date": {
"description": "The release date of the scene. Its given in the format YYYY-MM-DD",
"type": "string"
},
"rating": {
"description": "The scenes Rating. Its given in stars, from 1 to 5",
"type": "integer"
},
"details": {
"description": "A description of the scene, containing things like the story arc",
"type": "string"
},
"performers": {
"description": "A list of names of the performers in this gallery",
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
},
"tags": {
"description": "A list of the tags associated with this scene",
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
},
"markers": {
"description": "Markers mark certain events in the scene, most often the change of the position. They are attributed with their own tags.",
"type": "array",
"items": {
"type": "object",
"properties": {
"title": {
"description": "Searchable name of the marker",
"type": "string"
},
"seconds": {
"description": "At what second the marker is set. It is given with after comma values, such as 10.0 or 17.5",
"type": "string"
},
"primary_tag": {
"description": "A tag identifying this marker. Multiple markers from the same scene with the same primary tag are concatenated, showing them as similar in nature",
"type": "string"
},
"tags": {
"description": "A list of the tags associated with this marker",
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
},
"created_at": {
"description": "The time this marker was added to the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"updated_at": {
"description": "The time this marker was updated the last time. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
}
},
"required": ["seconds", "primary_tag", "created_at", "updated_at"]
},
"minItems": 1,
"uniqueItems": true
},
"files": {
"description": "A list of paths of the files for this scene",
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
},
"created_at": {
"description": "The time this studios data was added to the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
},
"updated_at": {
"description": "The time this studios data was last changed in the database. Format is YYYY-MM-DDThh:mm:ssTZD",
"type": "string"
}
},
"required": ["files", "created_at", "updated_at"]
}
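Similarly, a minimal scene file satisfying the required fields might look like this (placeholder values):
{
  "title": "Example Scene",
  "date": "2020-06-01",
  "rating": 4,
  "performers": ["Jane Example"],
  "tags": ["example"],
  "files": ["/videos/example.mp4"],
  "created_at": "2020-06-01T12:00:00+01:00",
  "updated_at": "2020-06-01T12:00:00+01:00"
}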
[Date Prev][Date Next] [Chronological] [Thread] [Top]
Re: slapadd: which database to open
In part, I favor Michael's argument. In fact, the only reason for preferring a picky slapadd is to avoid trouble. An experienced user will not likely get in trouble, such as using the wrong ldif file instead of the right one. The inexperienced or incautious user would probably erroneously feel safe when the wrong ldif happens to load into a valid, yet unintended database.
There isn't much hazard involved here; typically slapadd will be done to empty databases. If they have multiple suffixes they likely have multiple ldif files. Automatically loading into "the other backend" several times would still produce the desired end result, as they are more or less independent. If they have one single ldif for multiple suffixes, then traditional slapadd can offer them no help at all.
In the case where they have multiple backends and some are populated and some aren't, and they pick the wrong ldif file, likely either it is a backend they needed to load soon anyway or the objects are already present and nothing will happen as a result.
My favorite approach would be to have slapadd return more sophisticated and useful messages, like
Better messages are always a good idea.
the first entry "cn=foo,dc=z" seems to belong to database #X, whose suffix is "dc=z"; did you mean to use -n X (or -b "dc=z")?
It may be a good idea to emit informational warnings about backend selection.
With respect to the "smart" behavior Matthew suggests (loading multiple databases within one execution of slapadd), it looks definitely intriguing, but since it quite departs from the current behavior, I'd protect it behind an explicit switch (e.g. -b "", or -n "detect").
Purely from a UI perspective, -b "" sounds like loading LDIF into the rootDSE; a bit confusing perhaps. Additionally, empty arguments tend to cause great confusion between users and their shell. -n detect sounds okay.
It might be a reasonable default as it is only changing behavior for what was previously an error condition. While users might rely on all sorts of odd features, it seems unlikely that anyone relies on slapadd to print an error and do nothing in this case.
Perhaps someone used to populating suffixes with ldifs might run slapadd in a for-loop, checking the exit code to find out when the right database had been found. Seems far-fetched though.
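Something like this purely hypothetical sketch, with numbered backends:

  for n in 0 1 2 3; do
      slapadd -n "$n" -l data.ldif && break
  done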
Still, having it available as an option is better than nothing.
Matthew Backes
Symas Corporation
[email protected]
Safeguarding your digital footprint: How AI impacts data privacy in Hong Kong
2024-06-28
Introduction
When one speaks about artificial intelligence (“AI”), one thinks of the risks associated as much as one does about its range of applications and the abundance of business opportunities it opens up to. Inevitably, the new risks arising from the innovative applications of AI present regulatory challenges in the area of personal data protection. Taking the initiative to provide guidance for Hong Kong enterprises, the Office of the Privacy Commissioner for Personal Data published the Artificial Intelligence: Model Personal Data Protection Framework (the “Model Framework”) on 11 June 2024.
Recommended measures
The use of AI shall embrace data stewardship values and ethical principles. For example, the use of AI shall be respectful, beneficial and fair, bearing in mind accountability, human oversight, data privacy etc. To achieve the same, the Model Framework recommends appropriate policies, practices and procedures for organisations to adopt when they procure, implement and use AI solutions. The Model Framework focuses on 4 main areas:
1. AI strategy and governance;
2. Risk assessment and human oversight;
3. Customisation of AI models and implementation and management of AI systems; and
4. Communication and engagement with stakeholders.
This newsletter aims to give an overview for each of the main area stated above.
AI strategy and governance
Organisations should have an internal AI governance strategy, which generally comprises an AI strategy, governance considerations for procuring AI solutions, and an AI governance committee (or similar body) to steer the process.
The AI strategy shall provide directions on the purposes for which AI solutions may be procured and how AI systems can be implemented. On one hand, such strategy provides guidance and AI-related training internally to staff members and personnel within the organisations, such that they are familiar with the “do’s and don’ts” and are equipped with the skills to work in an environment using AI systems. On the other hand, as the procurement of AI solutions often engages third parties who customises AI systems, the Model Framework also proposes procurement practices that embodies governance considerations in relation to dealing with external AI procurement parties, say, whether the potential AI suppliers have followed international technical and governance standards.
At the same time, an AI governance committee which should report to the board shall be established to oversee the procurement, implementation and use of the AI system, and cultivate effective internal reporting mechanisms for reporting system failure or raising any data protection or ethical concerns to facilitate proper monitoring by the AI governance committee.
Conduct risk assessment and human oversight
A risk-based approach should be adopted in the procurement, use and management of AI systems. Comprehensive risk assessments shall systematically identify, analyse and evaluate the risks that are involved in the process. Factors that should be considered in a risk assessment include requirements of the Personal Data (Privacy) Ordinance, Cap 486 (“PDPO”), such as the volume, sensitivity and quality of data, security of data, the probability of privacy risks and the potential severity of the harm that might result.
The rationale behind such risk management measures is proportionality, meaning that the types and extent of risk mitigation measures should correspond with and be proportionate to the levels of the identified risks. For example, an AI system might be used for decision making or assist in decision making process. If there might be algorithmic bias and discrimination in the AI system and the decision to be made is very important or has a critical impact on the company, then a higher level of human oversight would be needed than an AI system with a lower risk profile. In such circumstances, human shall retain control in the decision-making process to prevent and mitigate errors by AI, otherwise known as the human-in-the-loop strategies.
AI models customisations and
implementation and management of AI systems
The customisation and management process comprises three main steps: first, data preparation and management; second, customisation and implementation; and last, management and continuous monitoring. The primary goal of customisation of AI models is to use the data to improve the AI solution's performance by providing more domain / context-specific information. Continuous review and user support are required after the adoption of an AI model to ensure that the AI systems remain effective, relevant and reliable. Good data governance in the customisation and operation of AI not only protects individuals' personal data privacy but also ensures data quality, which is critical to the robustness and fairness of AI systems. In formulating the same, measures must be adopted to ensure compliance with the requirements under the PDPO.
Communication and engagement with stakeholders
Organisations should communicate and engage effectively and regularly with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, to enhance transparency and build trust. Very often, organisations are required to provide explanations for decisions made by, and output generated from, AI, disclose the use of the AI system and its risks, and consider allowing individuals to opt out. Communication with stakeholders, particularly consumers, should be in plain language that is clear and understandable to lay persons, and such communication should be drawn to the attention of stakeholders.
Conclusion
The Model Framework carries far more weight than a simple guide on data privacy. Instead, it provides practical recommendations and best practices to assist organisations to procure, implement and use AI in compliance with the relevant requirements of the PDPO, so that organisations can harness the benefits of AI while safeguarding personal data privacy. If you or your company is actively adopting AI in your daily business operations, you are strongly advised to consult the full Model Framework, and if in doubt, consult your legal representatives.
For enquiries, please feel free to contact us at:
E: [email protected] T: (852) 2810 1212
W: www.onc.hk F: (852) 2804 6311
19th Floor, Three Exchange Square, 8 Connaught Place, Central, Hong Kong
Important: The law and procedure on this subject are very specialised and complicated. This article is just a very general outline for reference and cannot be relied upon as legal advice in any individual case. If any advice or assistance is needed, please contact our solicitors.
Published by ONC Lawyers © 2024
Querying with the Criteria API
When the Filterable API is not enough and you need to have more control, you can make queries directly with the JPA Criteria API. You may also need to customize sorting or joins, or otherwise modify the query in some way. To do so, you need to implement a QueryModifierDelegate that the JPAContainer entity provider calls when making a query. The easiest way to do this is to extend DefaultQueryModifierDelegate, which has empty implementations of all the methods, so that you can override only the ones you need.
The entity provider calls specific QueryModifierDelegate methods at different stages while making a query. The stages are:
1. Start building a query
2. Add " ORDER BY" expression
3. Add " WHERE" expression (filter)
4. Finish building a query
Methods where you can modify the query are called before and after each stage as listed in the following table:
Table 1. QueryModifierDelegate Methods
Stage                       Called before             Called after
Building the query          queryWillBeBuilt()        queryHasBeenBuilt()
Adding "ORDER BY"           orderByWillBeAdded()      orderByWasAdded()
Adding "WHERE" (filters)    filtersWillBeAdded()      filtersWereAdded()
All the methods get two parameters. The CriteriaBuilder is a builder that you can use to build queries. The CriteriaQuery is the query being built.
You can use the getRoots().iterator().next() in CriteriaQuery to get the "root" that is queried, for example, the PERSON table, etc.
Filtering the Query
Let us consider a case where we modify the query for a Person container so that it includes only people over 116. This trivial example is identical to the one given earlier using the Filterable interface.
persons.getEntityProvider().setQueryModifierDelegate(
        new DefaultQueryModifierDelegate() {
@Override
public void filtersWillBeAdded(
CriteriaBuilder criteriaBuilder,
CriteriaQuery<?> query,
List<Predicate> predicates) {
Root<?> fromPerson = query.getRoots().iterator().next();
// Add a "WHERE age > 116" expression
Path<Integer> age = fromPerson.<Integer>get("age");
predicates.add(criteriaBuilder.gt(age, 116));
}
});
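Sorting can be customized the same way. The following is only a sketch: it assumes orderByWillBeAdded() receives the list of Order instances about to be applied, and that the entity has a lastName property.

persons.getEntityProvider().setQueryModifierDelegate(
        new DefaultQueryModifierDelegate() {
    @Override
    public void orderByWillBeAdded(
            CriteriaBuilder criteriaBuilder,
            CriteriaQuery<?> query,
            List<Order> orderBy) {
        Root<?> fromPerson = query.getRoots().iterator().next();

        // Replace any ordering requested through the container
        // with a descending sort by last name
        orderBy.clear();
        orderBy.add(criteriaBuilder.desc(fromPerson.get("lastName")));
    }
});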
Compatibility
When building queries, you should consider the capabilities of the different JPA implementations. Regarding Hibernate, see "Joins in Hibernate vs EclipseLink".
|
__label__pos
| 0.938405 |
PerlMonks
Re^5: perl inheritance
by tobyink (Abbot)
on Mar 23, 2012 at 23:06 UTC
in reply to Re^4: perl inheritance
in thread perl inheritance
No, I wouldn't put it in an import module. A BEGIN block would be better. If you put it in import, you have no guarantee that the import method will ever actually be called. It's quite easy to load a module without ever calling the import method. Conversely, it's very easy for the import method to get called multiple times.
I actually do something pretty similar to the technique being discussed in XML::LibXML::Augment.
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Re^6: perl inheritance
by JavaFan (Canon) on Mar 23, 2012 at 23:26 UTC
What's the advantage of using a BEGIN block for that? Do you really think there's something to gain putting those methods in place, before the other methods are compiled?
Very little advantage in this case. It just so happens that in the same loop I'm also playing around with @ISA and past experience has taught me to alter @ISA as early as possible. Modern versions of Perl are pretty smart when it comes to invalidating method resolution caches, but force of habit makes me put any @ISA alteration in a BEGIN block unless there's a good reason not to.
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
Well, I can imagine putting assignments to @ISA in a BEGIN block if you have the habit of putting other things in a BEGIN block.
I typically have my assignment to @ISA near the top of the file, before any method calls are done. And that never is a problem.
Perl has always tried to invalidate method caches on assignment to @ISA -- although in a dim past, there was a bug preventing this from happening. This was fixed in 5.004 or 5.005. Long enough ago not to care about anymore. Not that the method cache has any reason to be relevant here; for that to be relevant, we need to have something like:
• Assign to @ISA.
• Call a method "X".
• Assign to @ISA so X resolves to a *different* sub.
• Call method "X" again.
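In code, that sequence is something like this (package and method names made up):

  package One; sub hello { "One's hello" }
  package Two; sub hello { "Two's hello" }

  package Foo;
  our @ISA = ('One');      # assign to @ISA
  print Foo->hello, "\n";  # method "hello" resolves to One::hello
  @ISA = ('Two');          # assign so the method resolves to a different sub
  print Foo->hello, "\n";  # must now find Two::hello, not a stale cache entry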
Commit 70a6a926 authored by Adeodato Simo's avatar Adeodato Simo Committed by Iustin Pop
Browse files
Querying node groups: LU/Opcode
This adds opcodes.OpQueryGroups and cmdlib.LUQueryGroups.
Signed-off-by: Adeodato Simo <[email protected]>
Signed-off-by: Iustin Pop <[email protected]>
Reviewed-by: Iustin Pop <[email protected]>
parent 4f6014d4
@@ -10224,6 +10224,113 @@ class LURemoveExport(NoHooksLU):
                                 " Domain Name.")

class LUQueryGroups(NoHooksLU):
  """Logical unit for querying node groups.

  """
  # pylint: disable-msg=W0142
  _OP_PARAMS = [
    _POutputFields,
    ("names", ht.EmptyList, ht.TListOf(ht.TNonEmptyString)),
    ]

  REQ_BGL = False

  _FIELDS_DYNAMIC = utils.FieldSet()

  _SIMPLE_FIELDS = ["name", "uuid"]

  _FIELDS_STATIC = utils.FieldSet(
    "node_cnt", "node_list", "pinst_cnt", "pinst_list", *_SIMPLE_FIELDS)

  def CheckArguments(self):
    _CheckOutputFields(static=self._FIELDS_STATIC,
                       dynamic=self._FIELDS_DYNAMIC,
                       selected=self.op.output_fields)

  def ExpandNames(self):
    self.needed_locks = {}

  def Exec(self, feedback_fn):
    """Computes the list of groups and their attributes.

    """
    all_groups = self.cfg.GetAllNodeGroupsInfo()

    if not self.op.names:
      my_groups = utils.NiceSort(all_groups.keys())
    else:
      # Accept names to be either names or UUIDs.
      all_uuid = frozenset(all_groups.keys())
      name_to_uuid = dict((g.name, g.uuid) for g in all_groups.values())
      my_groups = []
      missing = []

      for name in self.op.names:
        if name in all_uuid:
          my_groups.append(name)
        elif name in name_to_uuid:
          my_groups.append(name_to_uuid[name])
        else:
          missing.append(name)

      if missing:
        raise errors.OpPrereqError("Some groups do not exist: %s" % missing,
                                   errors.ECODE_NOENT)

    do_nodes = bool(frozenset(["node_cnt", "node_list"]).
                    intersection(self.op.output_fields))
    do_instances = bool(frozenset(["pinst_cnt", "pinst_list"]).
                        intersection(self.op.output_fields))

    # We need to map group->[nodes], and group->[instances]. The former is
    # directly attainable, but the latter we have to do through instance->node,
    # hence we need to process nodes even if we only need instance information.
    if do_nodes or do_instances:
      all_nodes = self.cfg.GetAllNodesInfo()
      group_to_nodes = dict((all_groups[name].uuid, []) for name in my_groups)
      node_to_group = {}

      for node in all_nodes.values():
        if node.group in group_to_nodes:
          group_to_nodes[node.group].append(node.name)
          node_to_group[node.name] = node.group

      if do_instances:
        all_instances = self.cfg.GetAllInstancesInfo()
        group_to_instances = dict((all_groups[name].uuid, [])
                                  for name in my_groups)

        for instance in all_instances.values():
          node = instance.primary_node
          if node in node_to_group:
            group_to_instances[node_to_group[node]].append(instance.name)

    output = []

    for name in my_groups:
      group = all_groups[name]
      group_output = []

      for field in self.op.output_fields:
        if field in self._SIMPLE_FIELDS:
          val = getattr(group, field)
        elif field == "node_list":
          val = utils.NiceSort(group_to_nodes[group.uuid])
        elif field == "node_cnt":
          val = len(group_to_nodes[group.uuid])
        elif field == "pinst_list":
          val = utils.NiceSort(group_to_instances[group.uuid])
        elif field == "pinst_cnt":
          val = len(group_to_instances[group.uuid])
        else:
          raise errors.ParameterError(field)

        group_output.append(val)

      output.append(group_output)

    return output


class TagsLU(NoHooksLU): # pylint: disable-msg=W0223
  """Generic tags LU.
@@ -188,6 +188,8 @@ class Processor(object):
    opcodes.OpQueryInstanceData: cmdlib.LUQueryInstanceData,
    opcodes.OpSetInstanceParams: cmdlib.LUSetInstanceParams,
    opcodes.OpGrowDisk: cmdlib.LUGrowDisk,
    # node group lu
    opcodes.OpQueryGroups: cmdlib.LUQueryGroups,
    # os lu
    opcodes.OpDiagnoseOS: cmdlib.LUDiagnoseOS,
    # exports lu
......@@ -700,6 +700,14 @@ class OpGrowDisk(OpCode):
    ]


# Node group opcodes

class OpQueryGroups(OpCode):
  """Compute the list of node groups."""
  OP_ID = "OP_GROUP_QUERY"
  __slots__ = ["output_fields", "names"]


# OS opcodes

class OpDiagnoseOS(OpCode):
  """Compute the list of guest operating systems."""
Create an n x n square matrix, where all the sub-matrix have the sum of opposite corner elements as even
Given an integer N, the task is to generate a square (n x n) matrix with elements ranging from 1 to n^2, subject to the following conditions:
• The elements of the matrix should be distinct, i.e. each used only once
• The numbers range from 1 to n^2
• Every sub-matrix you choose should have an even sum of opposite corner elements, i.e. the sum of the top-left and bottom-right elements should be even, and the sum of the top-right and bottom-left elements should be even
This property should apply to all the submatrices of the matrix. You need to generate such an even sub-matrix.
Examples:
Input: 2
Output: 1 2
4 3
Explanation: Here sum of 1+3=4 is even and 2+4=6 is even
Input: 3
Output: 1 2 3
4 5 6
7 8 9
Explanation: The sub matrix [1 2 4 5], [2 3 5 6], [4 5 7 8], [5 6 8 9], [1 2 3 4 5 6 7 8 9] satisfies the condition of opposite corner
elements having even sum
Approach:
As we know, for the sum of any two elements to be even, it must be a sum of ODD and ODD or a sum of EVEN and EVEN. So, for the corner-element sums to be even in every case, we need to ensure that the diagonally arranged elements of every sub-matrix are either all odd or all even. We therefore build the 2d array so that each diagonal holds only odd or only even values. The approach below achieves this:
• When n is odd, the diagonals already consist of all-odd or all-even elements, so we need not modify anything and can generate a simple 2d array
• When n is even, the simple matrix does not satisfy the property of having an even sum of opposite corner elements, so we reverse the elements of every alternate row so that the diagonals of every submatrix are either all odd or all even.
Below is the implementation.
Python3
# Even sub-matrix
import itertools

def sub_mat_even(n):
    temp = itertools.count(1)

    # Create a 2d array with values
    # ranging from 1 to n^2
    l = [[next(temp) for i in range(n)] for i in range(n)]

    # If n is even, reverse the elements of every
    # alternate row so that the diagonals of every
    # submatrix are all odd or all even
    if n % 2 == 0:
        for i in range(0, len(l)):
            if i % 2 == 1:
                l[i][:] = l[i][::-1]

    # Print the matrix formed
    for i in range(n):
        for j in range(n):
            print(l[i][j], end=" ")
        print()

n = 4
sub_mat_even(n)
Output:
1 2 3 4
8 7 6 5
9 10 11 12
16 15 14 13
ABCs of UEBA: Y is for Yield
What can you expect to get from deploying a User and Entity Behavior Analytics (UEBA) product? Let’s talk about the results. Let’s focus on the Yield. Y is for Yield because Yield is an important part of just about any business conversation today, on just about any topic. We can discuss yield in terms of return on investment (ROI), whether it be in money, time, or effort. Individuals and organizations are constantly looking for the best possible yield for a given investment.
Yield also has significance in enterprise security systems. Enterprises invest all three in attempting to identify and mitigate attacks on their network and systems. Here are some methods often used to optimize threat detection and response.
First, There Are Firewalls
Enterprises try hard to protect their networks against random attacks from the outside. Firewalls examine incoming traffic to block obvious attempts at penetration from attackers. They typically flag signals from unknown sources that attempt to gain access to data or systems, and note the type of external signal and – if possible – the source of the signal.
Then There Is Anti-Malware Software
Enterprises also use anti-malware software, especially for incoming email to individuals. Anti-malware software searches incoming emails and attached files for executable files that can gain control of an individual computer or entire network. There are several possible ways of doing this, including adopting the user identity and using those privileges to gain further control, or to take over the system and become part of a trust network. In either case, if the malware isn’t caught and the user opens the file, it is possible for an attacker to gain control of both the system and the larger network.
Anti-malware also typically sits on web browsers to ensure that users don’t visit sites that automatically download malware upon visiting one or more specific pages. This type of malware can be executed by the page itself based on a simple visit.
But Firewalls and Anti-Malware Software Have Limitations
The problem with anti-malware software, and of most security software in general, is that it searches downloads for specific signatures of known attacks. That is, it looks inside any executable software (including Excel and Word macros) to match up code with algorithms that are already known to be associated with attacks.
Of course, if an attack algorithm is not yet known, or has been changed in some manner, the anti-malware software is unlikely to recognize it. Anti-malware software vendors periodically update their malware signatures, but it may take days or even weeks before a newly-discovered malware algorithm makes it onto enterprise computer systems. That gives attackers plenty of time to steal company IP and data using these undetected algorithms.
There are still more limitations to both of these preventative approaches to security. Dependence on recognizing known malware algorithms is an important one, and a given threat also has to be one that the anti-malware software vendor deems important enough to expend the resources to develop a solution for. Firewalls, on the other hand, are fairly effective against direct attacks, but don't protect against attacks on individual systems or users. Overall, enterprises need more than these traditional security approaches. The yield from both approaches, even together, is simply not enough.
Monitoring Supplements Traditional Techniques
Enterprises need an early warning system that enables them to identify attack attempts in real-time, or as close to real-time as possible, so that they can be investigated. The tool for doing this is User and Entity Behavior Analytics (UEBA), which captures interactions across the network. The interactions might be between users and servers, between two or more systems, or between applications, among others. It analyzes the traffic and makes determinations about each individual transaction.
UEBA seeks to identify traffic or transactions that are out of the ordinary. While that concept seems simple enough, identifying those anomalous transactions is the tricky part. “Out of the ordinary” can mean many different things within the context of normal traffic. This can result in the flagging of many possible transactions for further review.
In the case of many UEBA tools, you may still have too many false positive transactions to investigate in order to ensure that you aren’t being attacked. Your yield on your investment of money and effort isn’t nearly as high as you would like it.
Increase Yield With Machine Learning
Fortunately, you can increase your yield by using a UEBA solution that incorporates machine learning (ML) to model normal behaviors. Once we have a good ML predictor of what is normal for all aspects of network transactions, it becomes easier and more accurate to identify out of the ordinary behaviors. Your yield goes up, because your false positives go down, saving you time and effort in hunting down real attacks.
UEBA with ML is all about increasing your efficiency in identifying attacks. The ML models enable you to analyze the data with intelligent approaches that give you a better fit, making it possible to limit false positives to a manageable level. The ML models enable a UEBA tool to focus on actual attack attempts rather than waste time with anything that doesn’t reflect a potential real world problem.
In short, UEBA using ML models yields high quality results. It significantly reduces false positive alerts. By focusing on true positive behavior-based threats, ML model-based UEBA provides the best yield of effort to result. This is why we say Y is for Yield!
Try Gurucul’s UEBA
Don’t believe what we say, believe what you experience. Try Gurucul’s UEBA in your environment on your data to see why Y is for Yield. In 5 days we will show you just how much yield you can get with our 2000+ machine learning models. Contact Us today to get started!
Application of averages
Examples
Lessons
1. A school collected gifts from students to give to needy children in the community over the Christmas holidays. The following numbers of gifts were collected.
Grade   Total number of gifts collected
1       65
2       100
3       70
4       54
5       45
6       54
7       43
1. What are the median and mean? Round your answer to the nearest whole number.
2. Which value is an outlier?
3. Which measure of central tendency better describes the data? Explain why.
2. The following table represents the sizes of winter boots that were sold last week.
Size         5   6   7   8   9   10
Number Sold  3   2   4   4   0   1
1. What are the mean and mode sizes of shoes sold? Round your mean to the nearest whole shoe size.
2. If you were in charge of ordering in more boots, which measure of central tendency is more meaningful? Why?
3. In a running race, the mean time was 2 minutes; the mode time was 2 minutes and 10 seconds; and the median time was 1 minute and 55 seconds. Jack had a time of 2 minutes. Which measure of central tendency (mode, median or mean) would you use to make Jack feel like he could do better?
Topic Notes
Similar to previous sections about median and mode, and mean, in this section we practice calculating the median, mode, and mean of given data sets in word problems. The mean, median, and mode are measures of central tendency. A measure of central tendency is a value that represents the centre of a set of data. Also, in this section, we are given data sets in word problems and asked to figure out which measure of central tendency best describes the data. The mode is the best measure of central tendency for data that represents frequency of choice. In contrast, the median is the best measure if a data set contains unusually large or small numbers in relation to the rest of the data. Finally, either the median or mean can be used as a measure of central tendency if all of the numbers in a set of data are close together.
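For instance, here is a quick worked check using the gift data from the first example above. The totals in order are 43, 45, 54, 54, 65, 70, 100, so the median is the middle (4th) value, 54. The mean is (65 + 100 + 70 + 54 + 45 + 54 + 43) ÷ 7 = 431 ÷ 7 ≈ 62. Because the value 100 is an unusually large outlier that pulls the mean upward, the median describes the centre of this particular data set better.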
package org.hl7.fhir.r5.model;

/*
  Copyright (c) 2011+, HL7, Inc.
  All rights reserved.

  Redistribution and use in source and binary forms, with or without modification,
  are permitted provided that the following conditions are met:

   * Redistributions of source code must retain the above copyright notice, this
     list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above copyright notice,
     this list of conditions and the following disclaimer in the documentation
     and/or other materials provided with the distribution.
   * Neither the name of HL7 nor the names of its contributors may be used to
     endorse or promote products derived from this software without specific
     prior written permission.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
  ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
  WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
  IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
  INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
  NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
  WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  POSSIBILITY OF SUCH DAMAGE.

 */

import ca.uhn.fhir.model.api.IElement;
import ca.uhn.fhir.model.api.annotation.DatatypeDef;
import ca.uhn.fhir.parser.DataFormatException;
import org.apache.commons.codec.binary.Base64;
import org.hl7.fhir.instance.model.api.IBaseHasExtensions;
import org.hl7.fhir.instance.model.api.IPrimitiveType;

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

/**
 * Primitive type "base64Binary" in FHIR: a sequence of bytes represented in base64
 */
@DatatypeDef(name = "base64Binary")
public class Base64BinaryType extends PrimitiveType<byte[]> implements IPrimitiveType<byte[]>, IBaseHasExtensions, IElement, Externalizable {

  private static final long serialVersionUID = 3L;
  private byte[] myValue;

  /**
   * Constructor
   */
  public Base64BinaryType() {
    super();
  }

  public Base64BinaryType(byte[] theBytes) {
    super();
    setValue(theBytes);
  }

  public Base64BinaryType(String theValue) {
    super();
    // Null values still result in non-null instance being created
    setValueAsString(theValue);
  }

  protected byte[] parse(String theValue) {
    if (theValue != null) {
      return Base64.decodeBase64(theValue.getBytes(ca.uhn.fhir.rest.api.Constants.CHARSET_UTF8));
    } else {
      return null;
    }
  }

  protected String encode(byte[] theValue) {
    if (theValue == null) {
      return null;
    }
    return new String(Base64.encodeBase64(theValue), ca.uhn.fhir.rest.api.Constants.CHARSET_UTF8);
  }

  @Override
  public Base64BinaryType copy() {
    return new Base64BinaryType(getValue());
  }

  @Override
  protected DataType typedCopy() {
    return copy();
  }

  public String fhirType() {
    return "base64Binary";
  }

  @Override
  public void writeExternal(ObjectOutput out) throws IOException {
    out.writeObject(getValue());
  }

  @Override
  public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
    setValue((byte[]) in.readObject());
  }

  @Override
  public String getValueAsString() {
    return encode(myValue);
  }

  @Override
  public void setValueAsString(String theValue) throws IllegalArgumentException {
    if (theValue != null) checkValidBase64(theValue);
    fromStringValue(theValue);
    setValue(parse(theValue));
  }

  @Override
  public byte[] getValue() {
    return myValue;
  }

  @Override
  public Base64BinaryType setValue(byte[] theValue) throws IllegalArgumentException {
    myValue = theValue;
    return (Base64BinaryType) super.setValue(theValue);
  }

  @Override
  public boolean hasValue() {
    return myValue != null && myValue.length > 0;
  }

  @Override
  public boolean isEmpty() {
    // Custom isEmpty() in order to avoid generating the text representation unnecessarily
    return ca.uhn.fhir.util.ElementUtil.isEmpty(id, extension) && !hasValue();
  }

  @Override
  public String primitiveValue() {
    return encode(myValue);
  }

  /**
   * Checks if the passed in String is a valid {@link Base64} encoded String. Will throw a {@link DataFormatException} if not
   * formatted correctly.
   *
   * @param toCheck {@link String} to check if valid {@link Base64}
   * @throws DataFormatException
   */
  public void checkValidBase64(String toCheck) throws DataFormatException {
    if (!org.hl7.fhir.utilities.Base64.isBase64(toCheck.getBytes())) {
      throw new DataFormatException("");
    }
  }
}
Confirmation
From Bitcoin Wiki
After a transaction is broadcast to the Bitcoin network, it may be included in a block that is published to the network. When that happens it is said that the transaction has been mined at a depth of 1 block. With each subsequent block that is found, the number of blocks deep is increased by one. To be secure against double spending, a transaction should not be considered as confirmed until it is a certain number of blocks deep.
Number of Confirmations
The classic bitcoin client will show a transaction as "n/unconfirmed" until the transaction is 6 blocks deep. Merchants and exchanges who accept bitcoins as payment can and should set their own threshold as to how many blocks are required until funds are considered confirmed. When the potential loss due to double spending is nominal, as with very inexpensive or non-fungible items, people may choose not to wait for a transaction to be confirmed, and complete the exchange as soon as it is seen on the network. Most exchanges and other merchants who bear the risk from double spending require 6 or more blocks.
There is nothing special about the default, often-cited figure of 6 blocks. It was chosen based on the assumption that an attacker is unlikely to amass more than 10% of the hashrate, and that a negligible risk of less than 0.1% is acceptable. Both these figures are arbitrary, however; 6 blocks are overkill for casual attackers, and at the same time powerless against more dedicated attackers with much more than 10% hashrate.[1]
Freshly-mined coins cannot be spent for 100 blocks. It is advisable to wait some additional time for a better chance that the transaction will be propagated by all nodes. Some older bitcoin clients won't show generated coins as confirmed until they are 120 blocks deep.
How Many Confirmations Is Enough
The website (https://people.xiph.org/~greg/attack_success.html) can be used to calculate the probability of a successful doublespend given a hashrate proportion and number of confirmations. Note that in the reality of bitcoin mining today, many more than 6 confirmations are required. (60 confirmations to have <1% odds of succeeding against an entity with 40% hash power)
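Such figures are usually computed with the "attacker catching up from z blocks behind" calculation given in the bitcoin whitepaper; a direct C transcription of the whitepaper's own code is:

#include <math.h>

/* Probability that an attacker controlling fraction q of the hashrate
 * ever catches up from z blocks behind (Satoshi's calculation). */
double AttackerSuccessProbability(double q, int z)
{
    double p = 1.0 - q;
    double lambda = z * (q / p);
    double sum = 1.0;
    int i, k;
    for (k = 0; k <= z; k++)
    {
        double poisson = exp(-lambda);   /* Poisson density of k attacker blocks */
        for (i = 1; i <= k; i++)
            poisson *= lambda / i;
        sum -= poisson * (1 - pow(q / p, z - k));
    }
    return sum;
}

For example, AttackerSuccessProbability(0.10, 6) is about 0.0002, which is where the often-quoted 6-block rule against a 10% attacker comes from.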
Confirmation Times
Each additional confirmation is a new block being found and added to the end of the blockchain.
Miners create blocks by solving the proof of work for their proposed block. The block interval has an average of 10 minutes, but not every block interval is exactly 10 minutes. It follows a statistical process known as a Poisson process, where random events happen with the same probability in each time interval. Another way of expressing this is that the mining process has no memory; at every second a block has the same chance of being found. Poisson processes are well understood but can be unintuitive.
There are lots of block intervals with a time less than 10 minutes but then a few block intervals much longer which bump up the average to 10 minutes. So the bitcoin network can get unlucky and a block won't be found for a whole hour.
In a 10 minute interval, the probability of a block being found is about 63% (or 1 - e^(-1)). So approximately two-thirds of the time a block will be found in 10 minutes or less. In 30 minutes a block has a 95% chance of being found, which rises to 99.7% if the time interval is 60 minutes.
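All three of these figures follow from the exponential waiting time of a Poisson process: with blocks found at an average rate of one per 10 minutes, the probability that at least one block is found within t minutes is 1 − e^(−t/10). Plugging in t = 10, 30 and 60 gives 1 − e^(−1) ≈ 63%, 1 − e^(−3) ≈ 95% and 1 − e^(−6) ≈ 99.7%.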
References
1. Analysis of hashrate-based double-spending
Suppose that the methods of this problem are used to forecast a value of Y for a combination of Xs very different from the X values in the data to which the model was fit. For example, calculate the estimated variance of the forecast error for an occupation with an average income of $50,000, an average education of 0 years, and 100% women. Is the estimated variance of the forecast error large or small? Does the variance of the forecast error adequately capture the uncertainty in using the regression equation to predict Y in this circumstance?
Best Answer
The estimated variance of the forecast error is likely to be high when you're forecasting Y for a set of Xs that are notably different from the training data. Regression models are typically built to produce predictions within the range of the observed data, which is why extrapolating beyond it is risky. The model's accuracy declines when projecting into uncharted territory, and the variance of the forecast error might not fully reflect the degree of uncertainty. Assumptions, omitted variables, and problems with data quality can exacerbate that uncertainty. Therefore, even though the variance of the forecast error gives a measure of prediction uncertainty, it's crucial to use caution and take other sources of information into account when making forecasts in such situations.
Explanation:
The estimated variance of the forecast error may be significant when using a regression model to predict a value of Y for a set of Xs that are significantly different from the X values in the data to which the model was fitted. This is so that the model can make predictions for a portion of the feature space that lies outside the coverage of the training data by extrapolating from the observed data.
The numbers in your example are noticeably different from the values in your training data if you are attempting to estimate a value of Y for an occupation with an average income of $50,000, an average education of 0 years, and 100% women. The model might not have enough data to reliably predict Y for this set of Xs, thus the projected variance of the forecast error is probably going to be high. More uncertainty is introduced because it is effectively generating predictions in a data space that it was not exposed to during training.
When using the regression equation to predict Y, the variance of the prediction error captures the uncertainty, but it might not fully capture the extent of the uncertainty in this situation.
This is due to a number of factors:
1. Extrapolation: When extrapolating outside of the training data range, the model's predictions lose accuracy since it is effectively speculating about how the relationship between X and Y persists outside the observed data range.
2. Assumptions : Regression models frequently make the assumption that, within the observed range of X values, the connection between X and Y is constant. Those presumptions might not be true if you leave this area entirely.
3. Potential missing variables : Potentially important variables that are missing from the regression model could increase the predictability of results since the regression model might not take all relevant variables that affect Y into consideration.
4. Data quality: Noisy training data or data that is not representative of the general population may make extrapolation forecasts less accurate.
It's important to use caution when making predictions in situations involving extreme extrapolation. You can get a sense of the uncertainty from the estimated variance of the forecast error, but it could not accurately reflect the entire scope of the hazards associated with making forecasts in unexplored territory. When making forecasts in these circumstances, it is advisable to proceed with caution and take into account additional facts or professional judgement.
How to prove that the Sorgenfrey line is hereditarily separable?
2 Answers
Hint: Let $S$ be a subset of the Sorgenfrey line. From each interval of the form $[q,r)$ in the original line, where $q,r \in \mathbb{Q}$, pick one point from $S$ if possible. Then characterize the points in $S$ that are not limits of the points you just chose.
This has a proof that for any ordered space separable implies hereditarily separable. And the Sorgenfrey line is a subspace of a separable ordered space (e.g. the double arrow).
FreeBSD/Linux Kernel Cross Reference
sys/netgraph/ng_iface.c
1 /*
2 * ng_iface.c
3 */
4
5 /*-
6 * Copyright (c) 1996-1999 Whistle Communications, Inc.
7 * All rights reserved.
8 *
9 * Subject to the following obligations and disclaimer of warranty, use and
10 * redistribution of this software, in source or object code forms, with or
11 * without modifications are expressly permitted by Whistle Communications;
12 * provided, however, that:
13 * 1. Any and all reproductions of the source or object code must include the
14 * copyright notice above and the following disclaimer of warranties; and
15 * 2. No rights are granted, in any manner or form, to use Whistle
16 * Communications, Inc. trademarks, including the mark "WHISTLE
17 * COMMUNICATIONS" on advertising, endorsements, or otherwise except as
18 * such appears in the above copyright notice or in the software.
19 *
20 * THIS SOFTWARE IS BEING PROVIDED BY WHISTLE COMMUNICATIONS "AS IS", AND
21 * TO THE MAXIMUM EXTENT PERMITTED BY LAW, WHISTLE COMMUNICATIONS MAKES NO
22 * REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, REGARDING THIS SOFTWARE,
23 * INCLUDING WITHOUT LIMITATION, ANY AND ALL IMPLIED WARRANTIES OF
24 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
25 * WHISTLE COMMUNICATIONS DOES NOT WARRANT, GUARANTEE, OR MAKE ANY
26 * REPRESENTATIONS REGARDING THE USE OF, OR THE RESULTS OF THE USE OF THIS
27 * SOFTWARE IN TERMS OF ITS CORRECTNESS, ACCURACY, RELIABILITY OR OTHERWISE.
28 * IN NO EVENT SHALL WHISTLE COMMUNICATIONS BE LIABLE FOR ANY DAMAGES
29 * RESULTING FROM OR ARISING OUT OF ANY USE OF THIS SOFTWARE, INCLUDING
30 * WITHOUT LIMITATION, ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
31 * PUNITIVE, OR CONSEQUENTIAL DAMAGES, PROCUREMENT OF SUBSTITUTE GOODS OR
32 * SERVICES, LOSS OF USE, DATA OR PROFITS, HOWEVER CAUSED AND UNDER ANY
33 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
34 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
35 * THIS SOFTWARE, EVEN IF WHISTLE COMMUNICATIONS IS ADVISED OF THE POSSIBILITY
36 * OF SUCH DAMAGE.
37 *
38 * Author: Archie Cobbs <[email protected]>
39 *
40 * $FreeBSD: stable/12/sys/netgraph/ng_iface.c 344139 2019-02-15 00:29:44Z glebius $
41 * $Whistle: ng_iface.c,v 1.33 1999/11/01 09:24:51 julian Exp $
42 */
43
44 /*
45 * This node is also a system networking interface. It has
46 * a hook for each protocol (IP, AppleTalk, etc). Packets
47 * are simply relayed between the interface and the hooks.
48 *
49 * Interfaces are named ng0, ng1, etc. New nodes take the
50 * first available interface name.
51 *
52 * This node also includes Berkeley packet filter support.
53 */
54
55 #include "opt_inet.h"
56 #include "opt_inet6.h"
57
58 #include <sys/param.h>
59 #include <sys/systm.h>
60 #include <sys/errno.h>
61 #include <sys/kernel.h>
62 #include <sys/lock.h>
63 #include <sys/malloc.h>
64 #include <sys/mbuf.h>
65 #include <sys/errno.h>
66 #include <sys/proc.h>
67 #include <sys/random.h>
68 #include <sys/rmlock.h>
69 #include <sys/sockio.h>
70 #include <sys/socket.h>
71 #include <sys/sysctl.h>
72 #include <sys/syslog.h>
73 #include <sys/libkern.h>
74
75 #include <net/if.h>
76 #include <net/if_var.h>
77 #include <net/if_types.h>
78 #include <net/bpf.h>
79 #include <net/netisr.h>
80 #include <net/route.h>
81 #include <net/vnet.h>
82
83 #include <netinet/in.h>
84
85 #include <netgraph/ng_message.h>
86 #include <netgraph/netgraph.h>
87 #include <netgraph/ng_parse.h>
88 #include <netgraph/ng_iface.h>
89
90 #ifdef NG_SEPARATE_MALLOC
91 static MALLOC_DEFINE(M_NETGRAPH_IFACE, "netgraph_iface", "netgraph iface node");
92 #else
93 #define M_NETGRAPH_IFACE M_NETGRAPH
94 #endif
95
96 static SYSCTL_NODE(_net_graph, OID_AUTO, iface, CTLFLAG_RW, 0,
97 "Point to point netgraph interface");
98 VNET_DEFINE_STATIC(int, ng_iface_max_nest) = 2;
99 #define V_ng_iface_max_nest VNET(ng_iface_max_nest)
100 SYSCTL_INT(_net_graph_iface, OID_AUTO, max_nesting, CTLFLAG_VNET | CTLFLAG_RW,
101 &VNET_NAME(ng_iface_max_nest), 0, "Max nested tunnels");
102
103 /* This struct describes one address family */
104 struct iffam {
105 sa_family_t family; /* Address family */
106 const char *hookname; /* Name for hook */
107 };
108 typedef const struct iffam *iffam_p;
109
110 /* List of address families supported by our interface */
111 const static struct iffam gFamilies[] = {
112 { AF_INET, NG_IFACE_HOOK_INET },
113 { AF_INET6, NG_IFACE_HOOK_INET6 },
114 { AF_ATM, NG_IFACE_HOOK_ATM },
115 { AF_NATM, NG_IFACE_HOOK_NATM },
116 };
117 #define NUM_FAMILIES nitems(gFamilies)
118
119 /* Node private data */
120 struct ng_iface_private {
121 struct ifnet *ifp; /* Our interface */
122 int unit; /* Interface unit number */
123 node_p node; /* Our netgraph node */
124 hook_p hooks[NUM_FAMILIES]; /* Hook for each address family */
125 struct rmlock lock; /* Protect private data changes */
126 };
127 typedef struct ng_iface_private *priv_p;
128
129 #define PRIV_RLOCK(priv, t) rm_rlock(&priv->lock, t)
130 #define PRIV_RUNLOCK(priv, t) rm_runlock(&priv->lock, t)
131 #define PRIV_WLOCK(priv) rm_wlock(&priv->lock)
132 #define PRIV_WUNLOCK(priv) rm_wunlock(&priv->lock)
133
134 /* Interface methods */
135 static void ng_iface_start(struct ifnet *ifp);
136 static int ng_iface_ioctl(struct ifnet *ifp, u_long cmd, caddr_t data);
137 static int ng_iface_output(struct ifnet *ifp, struct mbuf *m0,
138 const struct sockaddr *dst, struct route *ro);
139 static void ng_iface_bpftap(struct ifnet *ifp,
140 struct mbuf *m, sa_family_t family);
141 static int ng_iface_send(struct ifnet *ifp, struct mbuf *m,
142 sa_family_t sa);
143 #ifdef DEBUG
144 static void ng_iface_print_ioctl(struct ifnet *ifp, int cmd, caddr_t data);
145 #endif
146
147 /* Netgraph methods */
148 static int ng_iface_mod_event(module_t, int, void *);
149 static ng_constructor_t ng_iface_constructor;
150 static ng_rcvmsg_t ng_iface_rcvmsg;
151 static ng_shutdown_t ng_iface_shutdown;
152 static ng_newhook_t ng_iface_newhook;
153 static ng_rcvdata_t ng_iface_rcvdata;
154 static ng_disconnect_t ng_iface_disconnect;
155
156 /* Helper stuff */
157 static iffam_p get_iffam_from_af(sa_family_t family);
158 static iffam_p get_iffam_from_hook(priv_p priv, hook_p hook);
159 static iffam_p get_iffam_from_name(const char *name);
160 static hook_p *get_hook_from_iffam(priv_p priv, iffam_p iffam);
161
162 /* List of commands and how to convert arguments to/from ASCII */
163 static const struct ng_cmdlist ng_iface_cmds[] = {
164 {
165 NGM_IFACE_COOKIE,
166 NGM_IFACE_GET_IFNAME,
167 "getifname",
168 NULL,
169 &ng_parse_string_type
170 },
171 {
172 NGM_IFACE_COOKIE,
173 NGM_IFACE_POINT2POINT,
174 "point2point",
175 NULL,
176 NULL
177 },
178 {
179 NGM_IFACE_COOKIE,
180 NGM_IFACE_BROADCAST,
181 "broadcast",
182 NULL,
183 NULL
184 },
185 {
186 NGM_IFACE_COOKIE,
187 NGM_IFACE_GET_IFINDEX,
188 "getifindex",
189 NULL,
190 &ng_parse_uint32_type
191 },
192 { 0 }
193 };
194
195 /* Node type descriptor */
196 static struct ng_type typestruct = {
197 .version = NG_ABI_VERSION,
198 .name = NG_IFACE_NODE_TYPE,
199 .mod_event = ng_iface_mod_event,
200 .constructor = ng_iface_constructor,
201 .rcvmsg = ng_iface_rcvmsg,
202 .shutdown = ng_iface_shutdown,
203 .newhook = ng_iface_newhook,
204 .rcvdata = ng_iface_rcvdata,
205 .disconnect = ng_iface_disconnect,
206 .cmdlist = ng_iface_cmds,
207 };
208 NETGRAPH_INIT(iface, &typestruct);
209
210 VNET_DEFINE_STATIC(struct unrhdr *, ng_iface_unit);
211 #define V_ng_iface_unit VNET(ng_iface_unit)
212
213 /************************************************************************
214 HELPER STUFF
215 ************************************************************************/
216
217 /*
218 * Get the family descriptor from the family ID
219 */
220 static __inline iffam_p
221 get_iffam_from_af(sa_family_t family)
222 {
223 iffam_p iffam;
224 int k;
225
226 for (k = 0; k < NUM_FAMILIES; k++) {
227 iffam = &gFamilies[k];
228 if (iffam->family == family)
229 return (iffam);
230 }
231 return (NULL);
232 }
233
234 /*
235 * Get the family descriptor from the hook
236 */
237 static __inline iffam_p
238 get_iffam_from_hook(priv_p priv, hook_p hook)
239 {
240 int k;
241
242 for (k = 0; k < NUM_FAMILIES; k++)
243 if (priv->hooks[k] == hook)
244 return (&gFamilies[k]);
245 return (NULL);
246 }
247
248 /*
249 * Get the hook from the iffam descriptor
250 */
251
252 static __inline hook_p *
253 get_hook_from_iffam(priv_p priv, iffam_p iffam)
254 {
255 return (&priv->hooks[iffam - gFamilies]);
256 }
257
258 /*
259 * Get the iffam descriptor from the name
260 */
261 static __inline iffam_p
262 get_iffam_from_name(const char *name)
263 {
264 iffam_p iffam;
265 int k;
266
267 for (k = 0; k < NUM_FAMILIES; k++) {
268 iffam = &gFamilies[k];
269 if (!strcmp(iffam->hookname, name))
270 return (iffam);
271 }
272 return (NULL);
273 }
274
275 /************************************************************************
276 INTERFACE STUFF
277 ************************************************************************/
278
279 /*
280 * Process an ioctl for the virtual interface
281 */
282 static int
283 ng_iface_ioctl(struct ifnet *ifp, u_long command, caddr_t data)
284 {
285 struct ifreq *const ifr = (struct ifreq *) data;
286 int error = 0;
287
288 #ifdef DEBUG
289 ng_iface_print_ioctl(ifp, command, data);
290 #endif
291 switch (command) {
292
293 /* These two are mostly handled at a higher layer */
294 case SIOCSIFADDR:
295 ifp->if_flags |= IFF_UP;
296 ifp->if_drv_flags |= IFF_DRV_RUNNING;
297 ifp->if_drv_flags &= ~(IFF_DRV_OACTIVE);
298 break;
299 case SIOCGIFADDR:
300 break;
301
302 /* Set flags */
303 case SIOCSIFFLAGS:
304 /*
305 * If the interface is marked up and stopped, then start it.
306 * If it is marked down and running, then stop it.
307 */
308 if (ifr->ifr_flags & IFF_UP) {
309 if (!(ifp->if_drv_flags & IFF_DRV_RUNNING)) {
310 ifp->if_drv_flags &= ~(IFF_DRV_OACTIVE);
311 ifp->if_drv_flags |= IFF_DRV_RUNNING;
312 }
313 } else {
314 if (ifp->if_drv_flags & IFF_DRV_RUNNING)
315 ifp->if_drv_flags &= ~(IFF_DRV_RUNNING |
316 IFF_DRV_OACTIVE);
317 }
318 break;
319
320 /* Set the interface MTU */
321 case SIOCSIFMTU:
322 if (ifr->ifr_mtu > NG_IFACE_MTU_MAX
323 || ifr->ifr_mtu < NG_IFACE_MTU_MIN)
324 error = EINVAL;
325 else
326 ifp->if_mtu = ifr->ifr_mtu;
327 break;
328
329 /* Stuff that's not supported */
330 case SIOCADDMULTI:
331 case SIOCDELMULTI:
332 error = 0;
333 break;
334 case SIOCSIFPHYS:
335 error = EOPNOTSUPP;
336 break;
337
338 default:
339 error = EINVAL;
340 break;
341 }
342 return (error);
343 }
344
345 /*
346 * This routine is called to deliver a packet out the interface.
347 * We simply look at the address family and relay the packet to
348 * the corresponding hook, if it exists and is connected.
349 */
350
351 static int
352 ng_iface_output(struct ifnet *ifp, struct mbuf *m,
353 const struct sockaddr *dst, struct route *ro)
354 {
355 uint32_t af;
356 int error;
357
358 /* Check interface flags */
359 if (!((ifp->if_flags & IFF_UP) &&
360 (ifp->if_drv_flags & IFF_DRV_RUNNING))) {
361 m_freem(m);
362 return (ENETDOWN);
363 }
364
365 /* Protect from deadly infinite recursion. */
366 error = if_tunnel_check_nesting(ifp, m, NGM_IFACE_COOKIE,
367 V_ng_iface_max_nest);
368 if (error) {
369 m_freem(m);
370 return (error);
371 }
372
373 /* BPF writes need to be handled specially. */
374 if (dst->sa_family == AF_UNSPEC)
375 bcopy(dst->sa_data, &af, sizeof(af));
376 else
377 af = dst->sa_family;
378
379 /* Berkeley packet filter */
380 ng_iface_bpftap(ifp, m, af);
381
382 if (ALTQ_IS_ENABLED(&ifp->if_snd)) {
383 M_PREPEND(m, sizeof(sa_family_t), M_NOWAIT);
384 if (m == NULL) {
385 if_inc_counter(ifp, IFCOUNTER_OQDROPS, 1);
386 return (ENOBUFS);
387 }
388 *(sa_family_t *)m->m_data = af;
389 error = (ifp->if_transmit)(ifp, m);
390 } else
391 error = ng_iface_send(ifp, m, af);
392
393 return (error);
394 }
395
396 /*
397 * Start method is used only when ALTQ is enabled.
398 */
399 static void
400 ng_iface_start(struct ifnet *ifp)
401 {
402 struct mbuf *m;
403 sa_family_t sa;
404
405 KASSERT(ALTQ_IS_ENABLED(&ifp->if_snd), ("%s without ALTQ", __func__));
406
407 for(;;) {
408 IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
409 if (m == NULL)
410 break;
411 sa = *mtod(m, sa_family_t *);
412 m_adj(m, sizeof(sa_family_t));
413 ng_iface_send(ifp, m, sa);
414 }
415 }
416
417 /*
418 * Flash a packet by the BPF (requires prepending 4 byte AF header)
419 * Note the phoney mbuf; this is OK because BPF treats it read-only.
420 */
421 static void
422 ng_iface_bpftap(struct ifnet *ifp, struct mbuf *m, sa_family_t family)
423 {
424 KASSERT(family != AF_UNSPEC, ("%s: family=AF_UNSPEC", __func__));
425 if (bpf_peers_present(ifp->if_bpf)) {
426 int32_t family4 = (int32_t)family;
427 bpf_mtap2(ifp->if_bpf, &family4, sizeof(family4), m);
428 }
429 }
430
431 /*
432 * This routine does actual delivery of the packet into the
433 * netgraph(4). It is called from ng_iface_start() and
434 * ng_iface_output().
435 */
436 static int
437 ng_iface_send(struct ifnet *ifp, struct mbuf *m, sa_family_t sa)
438 {
439 struct rm_priotracker priv_tracker;
440 const priv_p priv = (priv_p) ifp->if_softc;
441 const iffam_p iffam = get_iffam_from_af(sa);
442 hook_p hook;
443 int error;
444 int len;
445
446 /* Check address family to determine hook (if known) */
447 if (iffam == NULL) {
448 m_freem(m);
449 log(LOG_WARNING, "%s: can't handle af%d\n", ifp->if_xname, sa);
450 return (EAFNOSUPPORT);
451 }
452
453 /* Copy length before the mbuf gets invalidated. */
454 len = m->m_pkthdr.len;
455
456 PRIV_RLOCK(priv, &priv_tracker);
457 hook = *get_hook_from_iffam(priv, iffam);
458 if (hook == NULL) {
459 NG_FREE_M(m);
460 PRIV_RUNLOCK(priv, &priv_tracker);
461 return ENETDOWN;
462 }
463 NG_HOOK_REF(hook);
464 PRIV_RUNLOCK(priv, &priv_tracker);
465
466 NG_OUTBOUND_THREAD_REF();
467 NG_SEND_DATA_ONLY(error, hook, m);
468 NG_OUTBOUND_THREAD_UNREF();
469 NG_HOOK_UNREF(hook);
470
471 /* Update stats. */
472 if (error == 0) {
473 if_inc_counter(ifp, IFCOUNTER_OBYTES, len);
474 if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1);
475 }
476
477 return (error);
478 }
479
480 #ifdef DEBUG
481 /*
482 * Display an ioctl to the virtual interface
483 */
484
485 static void
486 ng_iface_print_ioctl(struct ifnet *ifp, int command, caddr_t data)
487 {
488 char *str;
489
490 switch (command & IOC_DIRMASK) {
491 case IOC_VOID:
492 str = "IO";
493 break;
494 case IOC_OUT:
495 str = "IOR";
496 break;
497 case IOC_IN:
498 str = "IOW";
499 break;
500 case IOC_INOUT:
501 str = "IORW";
502 break;
503 default:
504 str = "IO??";
505 }
506 log(LOG_DEBUG, "%s: %s('%c', %d, char[%d])\n",
507 ifp->if_xname,
508 str,
509 IOCGROUP(command),
510 command & 0xff,
511 IOCPARM_LEN(command));
512 }
513 #endif /* DEBUG */
514
515 /************************************************************************
516 NETGRAPH NODE STUFF
517 ************************************************************************/
518
519 /*
520 * Constructor for a node
521 */
522 static int
523 ng_iface_constructor(node_p node)
524 {
525 struct ifnet *ifp;
526 priv_p priv;
527
528 /* Allocate node and interface private structures */
529 priv = malloc(sizeof(*priv), M_NETGRAPH_IFACE, M_WAITOK | M_ZERO);
530 ifp = if_alloc(IFT_PROPVIRTUAL);
531 if (ifp == NULL) {
532 free(priv, M_NETGRAPH_IFACE);
533 return (ENOMEM);
534 }
535
536 rm_init(&priv->lock, "ng_iface private rmlock");
537
538 /* Link them together */
539 ifp->if_softc = priv;
540 priv->ifp = ifp;
541
542 /* Get an interface unit number */
543 priv->unit = alloc_unr(V_ng_iface_unit);
544
545 /* Link together node and private info */
546 NG_NODE_SET_PRIVATE(node, priv);
547 priv->node = node;
548
549 /* Initialize interface structure */
550 if_initname(ifp, NG_IFACE_IFACE_NAME, priv->unit);
551 ifp->if_output = ng_iface_output;
552 ifp->if_start = ng_iface_start;
553 ifp->if_ioctl = ng_iface_ioctl;
554 ifp->if_mtu = NG_IFACE_MTU_DEFAULT;
555 ifp->if_flags = (IFF_SIMPLEX|IFF_POINTOPOINT|IFF_NOARP|IFF_MULTICAST);
556 ifp->if_type = IFT_PROPVIRTUAL; /* XXX */
557 ifp->if_addrlen = 0; /* XXX */
558 ifp->if_hdrlen = 0; /* XXX */
559 ifp->if_baudrate = 64000; /* XXX */
560 IFQ_SET_MAXLEN(&ifp->if_snd, ifqmaxlen);
561 ifp->if_snd.ifq_drv_maxlen = ifqmaxlen;
562 IFQ_SET_READY(&ifp->if_snd);
563
564 /* Give this node the same name as the interface (if possible) */
565 if (ng_name_node(node, ifp->if_xname) != 0)
566 log(LOG_WARNING, "%s: can't acquire netgraph name\n",
567 ifp->if_xname);
568
569 /* Attach the interface */
570 if_attach(ifp);
571 bpfattach(ifp, DLT_NULL, sizeof(u_int32_t));
572
573 /* Done */
574 return (0);
575 }
576
577 /*
578 * Give our ok for a hook to be added
579 */
580 static int
581 ng_iface_newhook(node_p node, hook_p hook, const char *name)
582 {
583 const iffam_p iffam = get_iffam_from_name(name);
584 const priv_p priv = NG_NODE_PRIVATE(node);
585 hook_p *hookptr;
586
587 if (iffam == NULL)
588 return (EPFNOSUPPORT);
589 PRIV_WLOCK(priv);
590 hookptr = get_hook_from_iffam(priv, iffam);
591 if (*hookptr != NULL) {
592 PRIV_WUNLOCK(priv);
593 return (EISCONN);
594 }
595 *hookptr = hook;
596 NG_HOOK_HI_STACK(hook);
597 NG_HOOK_SET_TO_INBOUND(hook);
598 PRIV_WUNLOCK(priv);
599 return (0);
600 }
601
602 /*
603 * Receive a control message
604 */
605 static int
606 ng_iface_rcvmsg(node_p node, item_p item, hook_p lasthook)
607 {
608 const priv_p priv = NG_NODE_PRIVATE(node);
609 struct ifnet *const ifp = priv->ifp;
610 struct ng_mesg *resp = NULL;
611 int error = 0;
612 struct ng_mesg *msg;
613
614 NGI_GET_MSG(item, msg);
615 switch (msg->header.typecookie) {
616 case NGM_IFACE_COOKIE:
617 switch (msg->header.cmd) {
618 case NGM_IFACE_GET_IFNAME:
619 NG_MKRESPONSE(resp, msg, IFNAMSIZ, M_NOWAIT);
620 if (resp == NULL) {
621 error = ENOMEM;
622 break;
623 }
624 strlcpy(resp->data, ifp->if_xname, IFNAMSIZ);
625 break;
626
627 case NGM_IFACE_POINT2POINT:
628 case NGM_IFACE_BROADCAST:
629 {
630
631 /* Deny request if interface is UP */
632 if ((ifp->if_flags & IFF_UP) != 0)
633 return (EBUSY);
634
635 /* Change flags */
636 switch (msg->header.cmd) {
637 case NGM_IFACE_POINT2POINT:
638 ifp->if_flags |= IFF_POINTOPOINT;
639 ifp->if_flags &= ~IFF_BROADCAST;
640 break;
641 case NGM_IFACE_BROADCAST:
642 ifp->if_flags &= ~IFF_POINTOPOINT;
643 ifp->if_flags |= IFF_BROADCAST;
644 break;
645 }
646 break;
647 }
648
649 case NGM_IFACE_GET_IFINDEX:
650 NG_MKRESPONSE(resp, msg, sizeof(uint32_t), M_NOWAIT);
651 if (resp == NULL) {
652 error = ENOMEM;
653 break;
654 }
655 *((uint32_t *)resp->data) = priv->ifp->if_index;
656 break;
657
658 default:
659 error = EINVAL;
660 break;
661 }
662 break;
663 case NGM_FLOW_COOKIE:
664 switch (msg->header.cmd) {
665 case NGM_LINK_IS_UP:
666 if_link_state_change(ifp, LINK_STATE_UP);
667 break;
668 case NGM_LINK_IS_DOWN:
669 if_link_state_change(ifp, LINK_STATE_DOWN);
670 break;
671 default:
672 break;
673 }
674 break;
675 default:
676 error = EINVAL;
677 break;
678 }
679 NG_RESPOND_MSG(error, node, item, resp);
680 NG_FREE_MSG(msg);
681 return (error);
682 }
683
684 /*
 685 * Receive data from a hook. Pass the packet to the correct input routine.
686 */
687 static int
688 ng_iface_rcvdata(hook_p hook, item_p item)
689 {
690 const priv_p priv = NG_NODE_PRIVATE(NG_HOOK_NODE(hook));
691 const iffam_p iffam = get_iffam_from_hook(priv, hook);
692 struct ifnet *const ifp = priv->ifp;
693 struct mbuf *m;
694 int isr;
695
696 NGI_GET_M(item, m);
697 NG_FREE_ITEM(item);
698 /* Sanity checks */
699 KASSERT(iffam != NULL, ("%s: iffam", __func__));
700 M_ASSERTPKTHDR(m);
701 if ((ifp->if_flags & IFF_UP) == 0) {
702 NG_FREE_M(m);
703 return (ENETDOWN);
704 }
705
706 /* Update interface stats */
707 if_inc_counter(ifp, IFCOUNTER_IPACKETS, 1);
708 if_inc_counter(ifp, IFCOUNTER_IBYTES, m->m_pkthdr.len);
709
710 /* Note receiving interface */
711 m->m_pkthdr.rcvif = ifp;
712
713 /* Berkeley packet filter */
714 ng_iface_bpftap(ifp, m, iffam->family);
715
716 /* Send packet */
717 switch (iffam->family) {
718 #ifdef INET
719 case AF_INET:
720 isr = NETISR_IP;
721 break;
722 #endif
723 #ifdef INET6
724 case AF_INET6:
725 isr = NETISR_IPV6;
726 break;
727 #endif
728 default:
729 m_freem(m);
730 return (EAFNOSUPPORT);
731 }
732 random_harvest_queue(m, sizeof(*m), RANDOM_NET_NG);
733 M_SETFIB(m, ifp->if_fib);
734 netisr_dispatch(isr, m);
735 return (0);
736 }
737
738 /*
739 * Shutdown and remove the node and its associated interface.
740 */
741 static int
742 ng_iface_shutdown(node_p node)
743 {
744 const priv_p priv = NG_NODE_PRIVATE(node);
745
746 /*
747 * The ifnet may be in a different vnet than the netgraph node,
748 * hence we have to change the current vnet context here.
749 */
750 CURVNET_SET_QUIET(priv->ifp->if_vnet);
751 bpfdetach(priv->ifp);
752 if_detach(priv->ifp);
753 if_free(priv->ifp);
754 CURVNET_RESTORE();
755 priv->ifp = NULL;
756 free_unr(V_ng_iface_unit, priv->unit);
757 rm_destroy(&priv->lock);
758 free(priv, M_NETGRAPH_IFACE);
759 NG_NODE_SET_PRIVATE(node, NULL);
760 NG_NODE_UNREF(node);
761 return (0);
762 }
763
764 /*
765 * Hook disconnection. Note that we do *not* shutdown when all
766 * hooks have been disconnected.
767 */
768 static int
769 ng_iface_disconnect(hook_p hook)
770 {
771 const priv_p priv = NG_NODE_PRIVATE(NG_HOOK_NODE(hook));
772 const iffam_p iffam = get_iffam_from_hook(priv, hook);
773
774 if (iffam == NULL)
775 panic("%s", __func__);
776 PRIV_WLOCK(priv);
777 *get_hook_from_iffam(priv, iffam) = NULL;
778 PRIV_WUNLOCK(priv);
779 return (0);
780 }
781
782 /*
783 * Handle loading and unloading for this node type.
784 */
785 static int
786 ng_iface_mod_event(module_t mod, int event, void *data)
787 {
788 int error = 0;
789
790 switch (event) {
791 case MOD_LOAD:
792 case MOD_UNLOAD:
793 break;
794 default:
795 error = EOPNOTSUPP;
796 break;
797 }
798 return (error);
799 }
800
801 static void
802 vnet_ng_iface_init(const void *unused)
803 {
804
805 V_ng_iface_unit = new_unrhdr(0, 0xffff, NULL);
806 }
807 VNET_SYSINIT(vnet_ng_iface_init, SI_SUB_PSEUDO, SI_ORDER_ANY,
808 vnet_ng_iface_init, NULL);
809
810 static void
811 vnet_ng_iface_uninit(const void *unused)
812 {
813
814 delete_unrhdr(V_ng_iface_unit);
815 }
816 VNET_SYSUNINIT(vnet_ng_iface_uninit, SI_SUB_INIT_IF, SI_ORDER_ANY,
817 vnet_ng_iface_uninit, NULL);
Guidelines for creating passwords
A password can either be provided by the application or randomly generated by the system.
You can configure the password policy for:
• Stipulating minimum and maximum lengths for a password.
• Password History—Store recently used passwords (including invalid passwords) to prevent a user from entering the same password again. The number of recently used passwords can be configured.
• Password Strength—Ability to enforce usage of special characters for strong password generation (a minimal validation sketch follows below).
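A minimal sketch of how such a policy might be enforced in code; the limits, the function name, and the "at least one special character" rule below are illustrative assumptions, not part of any product API:

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative limits -- in practice these come from the configured policy. */
#define PW_MIN_LEN 8
#define PW_MAX_LEN 64

/* Check the length bounds and require at least one special character. */
bool password_meets_policy(const char *pw)
{
    size_t len = strlen(pw);
    bool has_special = false;

    if (len < PW_MIN_LEN || len > PW_MAX_LEN)
        return false;
    for (size_t i = 0; i < len; i++)
        if (!isalnum((unsigned char)pw[i]))
            has_special = true;   /* found a special character */
    return has_special;
}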
The Importance of Two Factor Authentication
February 10, 2022 • Devin Partida
Cybercrime isn’t just something that happens to large companies. Individuals are at risk, too, and with so much of our lives online, security should be at the top of everyone’s minds. Two-factor authentication (2FA) is one of the best steps to ensure safety.
If you haven’t been the victim of hacking, you probably know someone who has. A friend’s social media account might’ve sent out suspicious messages, or you could’ve found purchases you didn’t make on your account. 2FA could’ve prevented these scenarios.
What Is Two-Factor Authentication?
Two-factor authentication is a type of multifactor authentication (MFA) that makes the login process more involved. Instead of just entering a password, you’ll have to perform another step afterward to get into your account.
This second step can take several forms. One of the most common is sending a one-time login code to your phone that you have to enter. You’ve probably used this feature on some accounts before, as 79% of people in 2021 have, with many sites using it to verify logins on new devices.
There are three main types of verification for this second step:
• Something you know
• Something you have
• Something you are
Things you know involve steps like answering security questions or using a PIN you set up beforehand. “Something you have” steps are the most common kind, covering things like one-time codes you get through text or email. The most secure type is “something you are,” which refers to biometrics, like facial recognition or fingerprint scans.
Why Is 2FA Necessary?
2FA becomes more important as cybercrime grows and online accounts store more information. Passwords alone typically aren’t enough to stop hackers, and strong password management can be challenging.
Studies show that 52% of people reuse passwords across multiple sites. That’s understandable, considering the average person has to remember more than 90 passwords, but it leaves you vulnerable. Hackers that breach just one account could access plenty of others.
Weak passwords are common, making it easy for hackers to brute force their way into an account. Cybercriminals can also use social engineering to figure out your password, so even unique ones aren’t always safe.
Two-factor authentication is a fairly easy way to protect against these threats. Hackers can’t get into your account even if they have your password when you enable 2FA. It’s a lot harder to breach something with that one extra step. Experts say MFA can stop 99.9% of attacks on your account.
How to Use 2FA
Thankfully, besides offering excellent protection, two-factor authentication is easy to enable. Go into your settings on any online account, and you should see an option for it. It may be referred to as two-step verification or multifactor authentication.
The app or website will walk you through the steps to set it up when clicking this option. Most often, that looks like entering a phone number or email that they’ll send a verification code to. Some sites will give you a choice to use a different method, like biometrics or security questions.
Generally speaking, one-time codes are more secure than security questions, and biometrics are the most protective. However, 2FA should also be convenient, so look for an easy and safe option.
Remember that this extra step isn’t a replacement for good password management but a complement to it. You should still use strong, unique passwords to give yourself the utmost security.
Two-Factor Authentication Is a Crucial Security Step
Cybercrime is always evolving, so nothing is 100% secure. However, using two-factor authentication can stop many attacks on your account.
Anyone can be prey for hackers today. If you don’t have 2FA enabled on your online accounts, you may be a more tempting target than most. Know what risks you face, and take steps to become as secure as possible.
GETENV(3) AerieBSD 1.0 Reference Manual GETENV(3)
NAME
getenv, putenv, setenv, unsetenv - environment variable functions
SYNOPSIS
#include <stdlib.h>
char *getenv(const char *name);
int setenv(const char *name, const char *value, int overwrite);
int putenv(char *string);
int unsetenv(const char *name);
DESCRIPTION
These functions set, unset, and fetch environment variables from the host environment list. For compatibility with differing environment conventions, the given arguments name and value may be appended and prepended, respectively, with an equal sign "=".
The getenv() function obtains the current value of the environment variable name. If the variable name is not in the current environment, a null pointer is returned.
The setenv() function inserts or resets the environment variable name in the current environment list. If the variable name does not exist in the list, it is inserted with the given value. If the variable does exist, the argument overwrite is tested; if overwrite is zero, the variable is not reset, otherwise it is reset to the given value.
The putenv() function takes an argument of the form name=value. The memory pointed to by string becomes part of the environment and must not be deallocated by the caller. If the variable already exists, it will be overwritten. A common source of bugs is to pass a string argument that is a locally scoped string buffer. This will result in corruption of the environment after leaving the scope in which the variable is defined. For this reason, the setenv() function is preferred over putenv().
The unsetenv() function deletes all instances of the variable name pointed to by name from the list.
RETURN VALUES
These functions return zero if successful; otherwise the global variable errno is set to indicate the error and -1 is returned.
If getenv() is successful, the string returned should be considered read-only.
ERRORS
[EINVAL]
The setenv() or unsetenv() function was passed a name containing an ‘=’ character.
The putenv() function was passed a string that did not contain an ‘=’ character.
[ENOMEM]
The setenv() or putenv() function failed because it was unable to allocate memory for the environment.
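EXAMPLES
A minimal sketch of the preferred usage pattern; error handling is abbreviated and the variable names and values are illustrative only.
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        char *shell;

        /* Set a variable, overwriting any previous value. */
        if (setenv("EDITOR", "vi", 1) == -1)
                return 1;
        /* Fetch a variable; treat the result as read-only. */
        if ((shell = getenv("SHELL")) != NULL)
                printf("SHELL is %s\n", shell);
        /* Remove the variable again. */
        unsetenv("EDITOR");
        return 0;
}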
SEE ALSO
csh(1), sh(1), execve(2), environ(7)
STANDARDS
The getenv() function conforms to ANSI X3.159-1989 ("ANSI C89").
HISTORY
The function getenv() appeared in Version 7 AT&T UNIX and 3BSD. The functions setenv() and unsetenv() appeared in 4.3BSD-Tahoe. The putenv() function appeared in 4.3BSD-Reno.
AerieBSD 1.0 Reference Manual May 14 2010 GETENV(3)
|
__label__pos
| 0.637982 |
What are Venn diagrams?
A Venn diagram is a way of grouping different items. These groups are known as sets.
We have a set of golf clubs or a set of dishes – these are just groups of those items.
We write a set using a special type of brackets. You could have a set of friends, eg {tom, leanne, alison, nia, anna, suzanne, lucy, marie}. Notice you don’t use capitals within the brackets.
A Venn diagram begins with a box called our universal set, which is denoted by the symbol ε (epsilon).
The universal set contains everything we are interested in at that particular time. There’ll be circles inside the box which we use to group the items within the universal set. Items in the circles form different subsets.
A Venn diagram with two sets labelled Set A and Set B. Set A contains 8 numbers and Set B contains 9 numbers. The two sets overlap in the middle to form a subset containing 4 numbers
Subsets
Set A is the numbers in the circle labelled Set A.
Set A = {1, 5, 6, 7, 8, 9, 10, 12}
Set B is the numbers in the circle labelled Set B.
Set B = {2, 3, 4, 6, 7, 9, 11, 12, 13}
These are subsets of the universal set.
Intersection
The intersection is where we have items from both Set A and Set B; these can be found in the section that overlaps.
We write it as A ∩ B. In the example above, A ∩ B = {6, 7, 9, 12}.
Union
The union of a Venn diagram is the numbers that are in either Set A or Set B.
The union of the above example is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 as it’s the numbers that appear in either of the circles.
We write it as A ∪ B = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}
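If you like to see these set operations worked out as code, here is a small illustrative sketch in C using the two example sets above (it prints the members unsorted; nothing here is part of the original lesson):
#include <stdio.h>
/* returns 1 if x occurs in arr[0..n-1] */
static int contains(const int *arr, int n, int x)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == x)
            return 1;
    return 0;
}
int main(void)
{
    int a[] = {1, 5, 6, 7, 8, 9, 10, 12};      /* Set A */
    int b[] = {2, 3, 4, 6, 7, 9, 11, 12, 13};  /* Set B */
    int na = 8, nb = 9;
    printf("A intersect B:");
    for (int i = 0; i < na; i++)               /* members of both sets */
        if (contains(b, nb, a[i]))
            printf(" %d", a[i]);
    printf("\nA union B:");
    for (int i = 0; i < na; i++)               /* everything in A ... */
        printf(" %d", a[i]);
    for (int i = 0; i < nb; i++)               /* ... plus B's extras */
        if (!contains(a, na, b[i]))
            printf(" %d", b[i]);
    printf("\n");
    return 0;
}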
|
__label__pos
| 0.999753 |
How to send an SMS verification code from C++
0
PHP .NET C/C++ Go 1124 views
Chuanglan 253 SMS platform example --- calling the interface from C
[C++] A C/C++ SMS demo based on the Chuanglan 253 cloud communication PaaS platform
#include <arpa/inet.h>
#include <assert.h>
#include <errno.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netdb.h>
#include <unistd.h>
#define SA struct sockaddr
#define MAXLINE 4096
#define MAXSUB 2000
#define MAXPARAM 2048
#define LISTENQ 1024
// macro switch between the online and offline endpoints
#define ONLINE
extern int h_errno;
int sockfd;
char *hostname = "123.59.105.84";
char *send_sms_uri = "/msg/send";
char *query_balance_uri = "/msg/balance";
/**
 * Send an HTTP POST request
 */
ssize_t http_post(char *page, char *poststr)
{
char sendline[MAXLINE + 1], recvline[MAXLINE + 1];
ssize_t n;
snprintf(sendline, MAXSUB,
"POST %s HTTP/1.0\r\n"
"Host: sms.253.com\r\n"
"Content-type: application/x-www-form-urlencoded\r\n"
"Content-length: %zu\r\n\r\n"
"%s", page, strlen(poststr), poststr);
write(sockfd, sendline, strlen(sendline));
printf("\n%s", sendline);
printf("\n--------------------------\n");
while ((n = read(sockfd, recvline, MAXLINE)) > 0) {
recvline[n] = '\0';
printf("%s\n", recvline);
}
return n;
}
/**
 * Query the account balance
 */
ssize_t get_balance(char *un, char *pw)
{
char params[MAXPARAM + 1];
char *cp = params;
sprintf(cp,"un=%s&pw=%s", un, pw);
return http_post(query_balance_uri, cp);
}
/**
 * Send an SMS
 */
ssize_t send_sms(char *un, char *pw, char *phone, char *msg)
{
char params[MAXPARAM + 1];
char *cp = params;
sprintf(cp,"un=%s&pw=%s&phone=%s&msg=%s&rd=1", un, pw, phone, msg);
return http_post(send_sms_uri, cp);
}
int main(void)
{
struct sockaddr_in servaddr;
// establish the socket connection; hostname holds a dotted-quad IP address
sockfd = socket(AF_INET, SOCK_STREAM, 0);
bzero(&servaddr, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons(80);
servaddr.sin_addr.s_addr = inet_addr(hostname);
connect(sockfd, (SA *) & servaddr, sizeof(servaddr));
char *un = "account";
char *pw = "password";
char *phone = "phone number";
// the message must carry the platform signature; here 【253云通讯】 is the signature,
// followed by the body "your verification code is 123400"
char *msg = "【253云通讯】您的验证码是123400";
//get_balance(un, pw);
send_sms(un, pw, phone, msg);
close(sockfd);
exit(0);
}
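One caveat about the demo above: it hardcodes the gateway's IP address and never checks whether socket() or connect() succeeded. A more defensive connection helper might look like the following sketch (an illustration only; the helper name connect_to is made up and not part of the 253 API):
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
/* Resolve host:port with getaddrinfo() and return a connected TCP socket,
 * or -1 if every candidate address fails. */
int connect_to(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    int fd = -1, err;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */
    err = getaddrinfo(host, port, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return -1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}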
Please try to make your answer helpful to others
1 answer
0
I'm also having a hard time answering this question while I'm on arcade driving school.
|
__label__pos
| 0.980857 |
9
Hypothesis: Let $\Gamma$ be a vertex-primitive graph with two vertices $u$ and $v$ such that $$|N(u) \cap N(v)|=|N(v)|-1$$
Question: Is it true that $\Gamma$ must either be a complete graph or have prime order?
Terminology and notation:
• By $N(v)$, I mean the set of neighbours of $v$ in $\Gamma$.
• By vertex-primitive, I mean that the automorphism group acts primitively on the vertices. In other words, the automorphism group does not preserve any partition of the vertex-set apart from the trivial ones (into singletons or with just one part).
Comments:
• It is easy to see that a vertex-primitive graph with two distinct vertices having the same neighbourhood must be edgeless. From this perspective, the question is thus about the first non-trivial case.
• Complete graphs clearly satisfy the hypothesis.
• There are indeed non-complete graphs (of prime order) satisfying the hypothesis. For example, cycles of prime order. More generally, let $p\geq 5$ be a prime, let $i\in\{2,3,\ldots,\frac{p-1}{2}\}$ and let $S=\{\pm i, \pm (i+1),\ldots, \pm(\frac{p-1}{2})\}$. Then the Cayley graph $\mathrm{Cay}(\mathbb{Z}_p,S)$ is easily seen to satisfy the hypothesis (with $u=0$ and $v=1$, for example).
• In fact, computer calculations show that there are no counterexamples up to order $100$, say.
12
• Any cycle of order $n\geq 5$ satisfies the requirement that there are $u,v \in V(C)$ such that $|N(u)\cap N(v)| = |N(v)|-1$: Let $\Gamma = (\{1,\ldots,n\}, E)$ where $E = \{\{k, k+1\} : 1 \leq k < n\} \cup \{\{1,n\}\}$. Then pick $u=1$, $v=3$. So we don't need the order $n$ to be prime. - Or did I misunderstand your notion of ''order''? Commented Nov 10, 2014 at 13:55
• Right, any cycle satisfies the second part of the hypothesis, but the only cycles that are vertex-primitive are the ones of prime order.
– verret
Commented Nov 10, 2014 at 14:00
• 2
+1 - great question. Except that when I saw the title, I thought "oh, if anyone can answer that it will be verret..." :-)
– Nick Gill
Commented Nov 11, 2014 at 15:33
• 2
Nick, graphs with two-transitive groups are not too interesting, as for your second question, if $u$ is adjacent to $v$, then the transposition $(uv)$ is in the group of the graph and so it is the symmetric group and we fall back into your first case. Commented Nov 12, 2014 at 1:19
• 1
@Nick : In that case $u$ and $v$ must have the same neighbours apart from each other. Commented Nov 12, 2014 at 23:22
2 Answers 2
3
Pablo Spiga found a proof a few weeks ago. Together, we then proved a slightly more general result which is now on the arxiv: http://arxiv.org/abs/1501.05046
It is more general in two ways: it deals with digraphs rather than graphs, and it gives some information in general when two vertices have neighbourhoods differing by say $k$, although we only get a complete classification of the graphs when $k=1$, which was the original question. (This is Corollary 4.2.)
2
Here is a partial solution. If the vertices $u$ and $v$ in the statement of the problem are neighbors, I claim that the graph must be complete. Here is my argument:
Let $D = N(u) \cap N(v)$ so by hypothesis, there is exactly one point in $N(v)$ that is not in $D$. Since I am assuming that $u \in N(v)$ and we know that $u \not\in D$, it must be that $N(v) = D \cup \{u\}$. Now suppose the graph is not complete so there exists a point $x$ not in $N(u)$ and different from $u$. Then $x \not\in D$ and thus $x$ is not in $N(v)$, and certainly, $x$ is different from $v$ since it is not a neighbor of $u$. Also, by the symmetry of the assumption, nothing is changed if we swap $u$ and $v$.
Now let us say that an edge is "good" if it is an image of the edge joining $u$ and $v$ under the automorphism group of the graph. By the primitivity assumption, the graph is connected by good edges. We showed that a vertex $x$ different from $u$ and not connected to $u$ is also different from $v$ and not connected to $v$, and thus $x$ has the same property with respect to any vertex that can be reached from $u$ by a chain of good edges. But this is a contradiction since $x$ can reached. This proves my claim
1
• 1
Sorry, but Gordon Royle already finished that case in a comment. Commented Nov 20, 2014 at 21:35
|
__label__pos
| 0.607718 |
Understanding Big Data
"Big data" is a term used quite frequently, whether in business or industry. But whether or not you belong to the tech industry, big data is part of the future of every business. So here's our beginner's guide to understanding what exactly big data stands for, how it is used across organizations, what good it does, and how it can be the future of your company as well.
What is Big Data?
Big data refers to any kind of information or data sets whose size or type is beyond the ability of a traditional relational database to capture, manage and process with low latency. That is, traditional computers and regular systems and tools fail to process or store such data sets.
Big data is also about the technology that helps collect, process and organise information, which may be structured, semi-structured or unstructured data, from different sources, and in different sizes from terabytes to zettabytes. The characteristics of big data usually include high volume, high velocity and high variety. A high volume of data may be gathered from many sources; data is often processed with great velocity, even in real time, to provide the most valuable analysis; and the data can come from a wide range of sources, even in a variety of formats. Recently, two more Vs have been added: veracity and value. Is the data consistent and complete, or can its veracity be trusted? And just because it's classified as big data, does its analysis bring value to an organization?
Where does this data come from?
The data sets are usually derived from the Internet of things (IoT), mobile devices, sensors, video/audio, networks, log files, social media, transactional applications which are generated in real time on a very large scale.
Think of the data an individual creates on a day-to-day basis. Data is generated every time any person opens the browser to search something, every time a customer buys something online or even browses a shopping platform. You generate data every time you go online and do a search on your computer; every time you use your GPS on your smartphone; every time you interact with a social media platform; the list goes on. Every digital activity leaves a digital footprint behind, and the organizations with which you interact collect and analyze that data.
Here are some other examples where data is collected in massive amounts:
• Huge retail chains like Walmart and Costco store customer transactions.
• Social media sites like Facebook and Twitter store and access user data.
• Amazon analyzes customer data to provide product recommendations.
• Spam filters go through millions of emails every day.
• Mobile phones generate information through calls, texts and browsing, as well as GPS data.
• Satellite technology stores and collects thousands of images every day.
So what massive data is collected and information is processed?
The analysis of the data is where the real gain lies in collecting and gathering all this data. The analysis helps businesses predict outcomes and form guidelines for decision making to avoid errors.
Here are a few examples of how data is used:
With the help of AI, farmers are succeeding at yielding healthier crops, controlling pests, monitoring soil and growing conditions, organizing data, improving a wide range of agriculture-related tasks, and creating optimized plans to follow for best results.
AI is not only helping farmers automate tasks but is assisting them in cultivating higher yields with fewer resources. With AI and technological advancements, in the future this sector will be well equipped to deal with food-production issues for the growing population.
Cities can gather data from the traffic sensors and better organise the traffic flow to avoid jams and delays. In a similar fashion, law enforcement can use data analysis to determine areas that need increased policing, for instance, or to prepare for major events.
But what are the challenges faced?
Here are some, to begin with
Security: When collecting data, some of it personal, a lot of security concerns arise. Data encryption becomes a must for all organisations, and access to such information should remain in the hands of the few who need this data for decision making. Your security needs can, however, be fulfilled by Mego, one such data-analytical decision twin that comes with restrictions and authentications in place to create a safe data environment.
Storage: As the size of the data expands, so do the storage requirements. To ensure proper information (enter the feature of Mego that enables storage)
Competence: Many AI-based decision-support strategies require a lot of run time, hands-on training and large computers to drive compatibility and bring in efficiency. But Mego is an AI-driven combination of optimisers and re-optimisers that can quickly narrow down to the next best optimal scenario on a basic computer, without hours of running time.
As we learn more about big data through use and experimentation, the possibilities for leveraging the power of AI will increase despite the challenges. So if your organization truly wants to explore, get yourself the AI that works for you: AI that fulfills all your strategic IT KRAs, in ways that work wonders, with one platform that enables, empowers and adds value to your organization for decades to come. Mego is a Decision Support System built to assist your thinking and empower you by de-leveraging tech-support scaffoldings, so you can reimagine thinking and unleash the true potential of your Enterprise Data Repository. To know more: www.m76analytics.com
|
__label__pos
| 0.937408 |
Michael C.
Assistant Manager at C2
Geometry
TutorMe
Question:
What is the Pythagorean Theorem and which shape does it only apply to?
Michael C.
Answer:
The Pythagorean theorem is a^2+b^2=c^2. It applies only to right triangles, that is, three-sided figures containing a 90-degree angle.
Chemistry
TutorMe
Question:
Why is Oxygen in its natural element presented as O2 and not just O?
Michael C.
Answer:
Oxygen in its natural element is a diatomic molecule, which means that it naturally occurs as two oxygen atoms bonded together.
Algebra
TutorMe
Question:
Find the correct value of x in 4+x=10
Michael C.
Answer:
The correct Value of x in 4+x=10 is x=6. Because of 4+6=10.
__label__pos
| 0.972073 |
Monday, December 9, 2013
Phishing Infographic: Don't Get Hooked!
By now, most folks are aware of phishing emails—or at the very least, that social engineers use email to steal average people's sensitive information. Yet, we are continually surprised that the how and why of phishing still eludes many average folks. What do phishing emails look like? How would someone get information from me through an email? What could a social engineer do with that information?
Some folks just respond better to pictures and diagrams. So...voila! Our first foray into the world of infographics, and what is hopefully the first of many.
Share this Image On Your Site
Employee Training: Out of the Box
As a training company, this is just one more way for Sight Training to encourage folks to do their homework—and by homework, we mean doing a little extra checking before you hand over sensitive information through a phishing email. Your credit card number, SSN, and bank information are yours and no one else's. Guard them at all cost.
And remember: emails are just digital versions of the in-the-flesh thieves who are behind them. They can dress up and look impressive. They can be cool, casual, and persuasive. And they can pull off an official posture with approved logos and embedded links that mimic real websites. Here are a few more tips:
1. Remember: stranger danger! Don't know who sent it? Don't open it.
2. Be wary of attachments.
3. Ignore commands and requests for action—no matter how urgent they may seem.
4. Use the phone. Try contacting the sender by telephone. If the email is from your “bank,” then you should be able to get the truth pretty quickly. And if you cannot get in touch with the sender, then delete the email and forget about it.
Slow down, take a deep breath, and think about what you are doing before you offer it up to a social engineer on a silver platter.
|
__label__pos
| 0.665145 |
I want to make a TextBox control that only accepts numerical values.
How can I do this in VB6?
5 Answers
up vote 3 down vote accepted
In the text box's Change event, check whether the entered value is numeric. If it's not a number, then set the old value back again.
Dim textval As String
Dim numval As String
Private Sub TextBox1_Change()
textval = TextBox1.Text
If IsNumeric(textval) Then
numval = textval
Else
TextBox1.Text = CStr(numval)
End If
End Sub
You rather use Validate event. – pinichi Dec 22 '10 at 4:31
It depends. You should use the Validate event if you want the verification that only numbers have been entered to be triggered when the user leaves the textbox control and tries to set focus to something else. If you go that route, then and only then will an error or notification be presented. On the other hand, if you use the Change event, the user will be notified immediately that a non-numeric value they typed in is invalid. – Cody Gray Dec 22 '10 at 4:41
Right click on control box > component > Control -> Microsoft Masked Edit Control 6.0.
Or with normal textbox:
Private Sub Text1_Validate(Cancel As Boolean)
Cancel = Not IsNumeric(Text1.Text)
End Sub
Check this out:
http://www.vbforums.com/showthread.php?t=350067
You need to check each keypress, or you can do one validation at the end.
I let the API do it for me. I add this function to a .bas module and call it for any edit control that I need to set to numeric only.
Option Explicit
Private Const ES_NUMBER = &H2000&
Private Const GWL_STYLE = (-16)
' GetWindowLong is called by ForceNumeric below, so it must be declared as well
Private Declare Function GetWindowLong Lib "user32" Alias "GetWindowLongA" (ByVal hwnd As Long, ByVal nIndex As Long) As Long
Private Declare Function SetWindowLong Lib "user32" Alias "SetWindowLongA" (ByVal hwnd As Long, ByVal nIndex As Long, ByVal dwNewLong As Long) As Long
'set an editbox to numeric only - return the previous
'style on success or zero on error
Public Function ForceNumeric(ByVal EditControlhWnd As Long) As Long
Dim lngCurStyle As Long
Dim lngReturn As Long
lngCurStyle = GetWindowLong(EditControlhWnd, GWL_STYLE)
If lngCurStyle <> 0 Then
lngReturn = SetWindowLong(EditControlhWnd, GWL_STYLE, lngCurStyle Or ES_NUMBER)
End If
ForceNumeric = lngReturn
End Function
To use it call the function with the handle of the TextBox.
Private Sub Form_Load()
Dim lngResult As Long
lngResult = ForceNumeric(Text1.hwnd)
End Sub
i usually use this code:
Private Sub text1_KeyPress(KeyAscii As Integer)
If Not IsNumeric(text1.Text & Chr(KeyAscii)) And Not KeyAscii = 8 Then KeyAscii = 0
End Sub
|
__label__pos
| 0.626005 |
Let z1, z2 be the roots of the equation
Question:
Let $z_{1}, z_{2}$ be the roots of the equation $z^{2}+a z+12=0$, where $z_{1}, z_{2}$ form an equilateral triangle with the origin. Then the value of $|a|$ is
Solution:
If $0, z_{1}, z_{2}$ are the vertices of an equilateral triangle, then
$0^{2}+z_{1}^{2}+z_{2}^{2}=0 \cdot\left(z_{1}+z_{2}\right)+z_{1} z_{2}$
$\Rightarrow\left(z_{1}+z_{2}\right)^{2}=3 z_{1} z_{2}$
By Vieta's formulas, $z_{1}+z_{2}=-a$ and $z_{1} z_{2}=12$, so
$\Rightarrow a^{2}=3 \times 12=36$
$\Rightarrow|a|=6$
Leave a comment
|
__label__pos
| 0.998916 |
Breaking RSA Encryption – an Update on the State-of-the-Art
You’ve heard me rambling about Quantum Computers and the impact they will have on cryptography. Probably the biggest and most well-known impact is that they will be able to use Shor’s quantum algorithm to crack all RSA/ECC cryptography. Fortunately, Quantum Computers powerful enough to do this are not yet in sight, although that does not mean that we can relax…
Classical computers can do this too – it takes just a looooooooong time
Actually, you don’t need a quantum computer at all to crack RSA/ECC, if you have a lot of time that is. You can use a “normal” (read classical) computer as well. It is just unbelievably hard for this normal computer to solve this. It would take a classical computer around 300 trillion years to break a RSA-2048 bit encryption key. That’s why we all feel that we are “safe” from these attacks. But it does illustrate that the foundation of all of our cryptography is not guaranteed to be secure, it is only known to be really, really hard to solve (like trillion of years hard). That’s what we call “computational security”.
A perfect Quantum Computer could do this in 10 seconds
Now the trick with Shor’s algorithm is that he found a way to massively reduce the complexity of breaking RSA/ECC using a quantum computer. The problem that otherwise has exponential complexity (meaning if N is the number of bits, the N is in the exponent e.g. 5^N) gets reduced to polynomial complexity (meaning the N is in the base, e.g. only N^5). And that makes a massive difference. A quantum computer with 4099 perfectly stable qubits could break the RSA-2048 encryption in 10 seconds (instead of 300 trillion years – wow).
The problem is that such a quantum computer doesn’t exist (yet).
We have neither the number of qubits needed (4099), nor the quality of qubits (perfectly stable). The biggest quantum computer currently has 72 qubits (Google Bristlecone); however, it has an error rate of 0.6%. The hardest problem though is coherence time. The coherence time is a measure of how fast the qubits lose their special quantum properties, so any calculation needs to finish within the coherence time. The coherence time at the moment is typically between 50-90 microseconds, so you can forget about any calculation that takes a while!
All of these issues obviously preclude anyone from running Shor’s algorithm on a Quantum Computer, and it is unclear when a powerful enough quantum computer will be available (maybe 5 years, maybe 10 years, maybe 20 years), so we can relax right?
Well, the problem is that innovation always comes in waves and sometimes breakthroughs are exactly that: they break through the established prediction. With the massive amount of research going into this field, it is hard to keep track of all the efforts.
So where are we really?
The most complete effort to highlight what’s possible has been published by Craig Gidney from Google and Martin Ekera from the Royal Institute of Technology in Stockholm. Just last month, they published a paper called “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits“.
The most interesting part of this paper is that they derived a complete system taking the noisy/imperfect qubits into account with a gate error rate of 0.1%. IBM’s Q System One has a one-qubit gate error rate of 0.04% and an average two-qubit gate error rate of 1.6%, so we are not far off even with the current “noisy” quantum computers.
Now 20 million qubits is still a lot of qubits, but for the very first time we have an algorithm that is not just theoretical in nature (“If I assume a perfect quantum computer exists, then I can solve this“), but very practical (i.e. “we worked around the current limitations and with modest improvements on current architectures we can solve this“). This is a massive shift and I’m sure the 20 million qubits can be reduced quite a bit as well if the gate error rate is reduced and through other optimization.
The same result was determined to be achievable back in 2012 with 1 billion qubits (Fowler et al), then with 230m qubits in 2017 (O'Gorman et al), and 170m qubits in 2019 (Gheorghiu et al), taking us to the 20m qubits in 2019 with the analysis described above (Gidney, Ekera). So we went from 1 billion qubits to 20m qubits in the space of 7 years. That's what I mean when I talk about breakthroughs and the massive research going into this field.
Now that’s all well and good, but even this research is “theoretical” since the authors obviously couldn’t run their algorithm on real quantum computing hardware (as this doesn’t exist yet).
So what can currently available hardware do?
Let’s first look at what’s possible on current classical computers. In 2010, researchers successfully factored a 768-bit integer (basically breaking RSA-768). That’s a number with 232 digits! They had to use many hundreds of machines over a timeframe of 2 years (!) It’ll be tough to compare this to a single quantum computer, but here we go.
So, what is the biggest number that has been factored by a quantum computer available today?
The biggest number to be factored is 35 [1], achieved on IBM's Quantum Computer (https://arxiv.org/abs/1903.00768). 35 is a 6-bit number, so we are far away from 2048-bit RSA keys (which have 617 decimal digits – compared to these 2 digits!!!). In fact I'm sure most of you burst out laughing at this tiny number…
Now, what’s next?
Quantum Annealing
Well, while the world is pre-occupied about estimating when we will have universal quantum computers big enough and stable enough to run Shor’s algorithm, a new approach is emerging which could potentially be a much bigger risk to RSA and cryptography in general.
Quantum Annealing is emerging as a powerful force to be reckoned with. Quantum Annealing Devices (e.g. D-Wave’s Quantum Computer) are not universal quantum computers. They can’t calculate everything. They cannot run Shor’s algorithm. They can only solve special optimization problems, but because of these limitations, they have been around for a longer time, are much more mature and have many more qubits than universal quantum computers.
As it turns out, the problem of factoring integers can also be formulated as an optimization problem. A good introduction to this topic can be found in this paper by Raouf Dridi and Hedayat Alghassi (https://arxiv.org/abs/1604.05796). They were able to factor the number 200,099 in 2016 with 897 qubits.
Their algorithm ran successfully on D-Wave’s 2X Processor which has 1,100 qubits. That is already an 18-bit number.
But the biggest news came out of China when earlier this year, Chinese researchers from the Shanghai University broke this record by factoring the number 1,005,973 with only 89 qubits on D-Wave’s hardware. That is now already a 20-bit number.
Two things were very interesting about their approach.
• They were able to run this on currently available hardware, meaning the current quality of qubits is good enough to achieve these results. To factor a RSA-768 number (current factorization record on classical computers), their algorithm would “only” need 147,454 qubits. D-Wave have announced a quantum computer with 5,640 qubits already, so the more qubits there are, the more vulnerable RSA will become.
• Their algorithm uses a combination of quantum and classical computation to maximise the results. (interestingly that’s the same for Shor’s algorithm and a common approach. Use classical computers for what they are good at and quantum computers for what they are good at)
If we assume a doubling of the quantum annealing qubits every 2 years (which was the case in the past), we’ll be there in 10 years. To me it seems much more likely to achieve this goal versus the alternative route of building 1,539 logical qubits on a perfect error-free universal quantum computer to allow it to run Shor’s algorithm in 10 years.
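To make that estimate concrete, here is the back-of-the-envelope arithmetic, using only the figures quoted above: starting from the announced 5,640 qubits and targeting the ~147,454 qubits needed for RSA-768, we need 5640 * 2^k >= 147454, i.e. k >= log2(147454 / 5640) ≈ log2(26.1) ≈ 4.7. That is about five doublings, and at one doubling every two years, roughly ten years.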
These time estimates assume that no fundamental breakthrough from an algorithmic side will be made and the same algorithm will be run on a D-Wave device just with more qubits. This is obviously a massive simplification as there is so much research happening at the moment, which will inevitably lead to breakthroughs. In addition, it’s not just the number of qubits, but also better inter-connectivity which will further reduce the required qubits.
Funnily, Mahatma Gandhi’s quote fits this perfectly: “First they ignore you (Quantum Computers will never exist), then they laugh at you (really, you can factor the number 35? Cool…), then they fight you, then you win”.
So, time to strap in to enjoy the next years of innovation in this area 🙂
[1] There are shortcuts for special cases (e.g. when the two prime factors only differ by a little bit) and then 966,887 is the highest number that was factored, but these special cases don’t help breaking RSA, so we do not count these cases.
|
__label__pos
| 0.820639 |
Properties
Base field \(\Q(\sqrt{-2}) \)
Label 2.0.8.1-288.2-a
Conductor 288.2
Rank \( 1 \)
Related objects
Base field \(\Q(\sqrt{-2}) \)
Generator \(a\), with minimal polynomial \( x^{2} + 2 \); class number \(1\).
Elliptic curves in class 288.2-a over \(\Q(\sqrt{-2}) \)
Isogeny class 288.2-a contains 6 curves linked by isogenies of degrees dividing 8.
Curve label Weierstrass Coefficients
288.2-a1 \( \bigl[a\) , \( 1\) , \( 0\) , \( -5 a + 22\) , \( 26 a + 23\bigr] \)
288.2-a2 \( \bigl[a\) , \( 1\) , \( 0\) , \( 5 a + 22\) , \( -26 a + 23\bigr] \)
288.2-a3 \( \bigl[a\) , \( 1\) , \( 0\) , \( 2\) , \( 1\bigr] \)
288.2-a4 \( \bigl[0\) , \( -1\) , \( 0\) , \( -2\) , \( 0\bigr] \)
288.2-a5 \( \bigl[0\) , \( -1\) , \( 0\) , \( -4\) , \( -2\bigr] \)
288.2-a6 \( \bigl[a\) , \( 1\) , \( a\) , \( -7\) , \( 8\bigr] \)
Rank
Rank: \( 1 \)
Isogeny matrix
\(\left(\begin{array}{rrrrrr} 1 & 4 & 2 & 4 & 8 & 8 \\ 4 & 1 & 2 & 4 & 8 & 8 \\ 2 & 2 & 1 & 2 & 4 & 4 \\ 4 & 4 & 2 & 1 & 2 & 2 \\ 8 & 8 & 4 & 2 & 1 & 4 \\ 8 & 8 & 4 & 2 & 4 & 1 \end{array}\right)\)
Isogeny graph
|
__label__pos
| 0.860896 |
copy record to base based on condition
SondraWithAnO
4 - Data Explorer
Hi! I'm trying to figure out a way to copy a record from one base to another when a certain condition is met. I have people submitting proposals for classes in one form. And once accepted, I want that same information to populate the current classes base. Can I do this? And have the info appear in different cells on the new base (like in a different order, etc)?
I'm not a coding person at all, so I'm trying to figure out the most basic way to do this.
Thank you!!
Sondra.
2 Replies 2
Yeah, you could use a script but I'd recommend using Zapier or something for this instead.
Any chance you could use Sync Views for this instead? Make a view that will only display records that meet the conditions you want, set up a shared view, sync that view into your "Current Classes" base?
Thank you! Zapier totally does the job. I've never really gotten into Zapier, but I see now how incredible it can be. Thanks for the tip!!
|
__label__pos
| 0.803326 |
XMPP Service Operators - 2024-03-03
1. down
can someone use the pgp option ?
2. down
keep getting a error saying openkeychain failed to run
3. down
i keep getting a error saying openkeychain failed to run
4. down
or pgp key not public tell the contact to configure the key
5. rewtkid
nuegia.net: not that experienced, but i read you can adjust AdvDefaultPreference to low, medium, and high in /etc/radvd.conf. there is also a AdvRoutePreference option. not exactly sure if this is what you are looking for, but is what i came up with after some quick research. sorry if doesnt help
6. rewtkid
also, is nuegia.net coming back?
7. rewtkid
nuegia.net: after some further research, i see there is a ra_preference in /etc/config/dhcp, which is set to medium by default. also might be what you are looking for? not exactly sure, just reading off what i have found in the documentation.
8. moparisthebest
> They are public, ~32million lavrentiy: quick math, 32 million 20 byte hashes would be 640 megabytes and a binary search would be nearly instant, I bet someone has already done this for http uploading which is most file sharing these days
9. lavrentiy
moparisthebest, but that'd require keeping it entirely in ram
10. lavrentiy
and it'll be larger in ram
11. rewtkid
ra_preference medium 'Route preference medium, high or low.'
12. nuegia.net
I'm not using dhcp at all in my network, only SLAAC
13. moparisthebest
No you can do it from disk too, and also no it'd take exactly the same in ram
14. rewtkid
oh i see, i am not sure then. apologies.
15. nuegia.net
Not sure if OpenWRT exposes AdvDefaultPreference in radvd.conf
16. nuegia.net
does OpenWRT use radvd?
17. rewtkid
doesnt seem like it does, not sure what the other post was talking about.
18. nuegia.net
where did you see ra_preference?
19. rewtkid
https://openwrt.org/docs/techref/odhcpd
20. nuegia.net
thanks
21. nuegia.net
weird. ok so I think in OpenWRT, odhcpd is performing the job of radvd
22. rewtkid
"odhcpd provides server services for DHCP, RA, stateless SLAAC and stateful DHCPv6"
23. rewtkid
yes, seems so.
24. nuegia.net
despite the name
25. nuegia.net
thanks
26. moparisthebest
lavrentiy: similarish approach to https://github.com/moparisthebest/phonehash , write the hashes to disk, sorted, in 20 byte chunks without delimiters, seek into it directly with offsets
27. rewtkid
no problem.
28. moparisthebest
This is way easier because everything fits in memory for sorting
29. lavrentiy
moparisthebest, interesting!
30. rewtkid
nuegia.net: sorry for being a distraction, but i see your site is back up. does this mean spyware will be making a return too? also hope things are going better now.
31. nuegia.net
that only seems to control all router advertisements for an interface, not the specific delegated blocks
32. nuegia.net
not fully yet
33. rewtkid
again apologies if im not helping much, not too experienced, but is it a possibility to create multiple interfaces for each delegated block, and specify the ra_preference option for each block accordingly? i might be (and probably am) way off.
34. rewtkid
anyways, maybe ask in virtual tinkerspace chat. someone there might also be able to help.
35. rewtkid
nuegia.net: friend who also uses openwrt says it sounds like it should work. still not entirely sure. just going off the basic documentation i have read here.
36. rewtkid
"interfaces" being virtual interfaces, of course.
37. Polarian
> No you can do it from disk too, and also no it'd take exactly the same in ram surely it depends how you store it... if you are using a list it would also contain pointers
38. Polarian
if its an array in contiguous memory then yes...
39. nuegia.net
sorry, brownout. can't afford to replace sulfated batteries yet.
|
__label__pos
| 0.671536 |
PerlMonks
Re^2: Example from perlobj failing?
by HelenCr (Monk)
on Mar 15, 2013 at 07:49 UTC (#1023637)
in reply to Re: Example from perlobj failing?
in thread Example from perlobj failing?
Chromatic, NetWallah, tobyink: thank you for your explanations. Makes sense.
For the benefit of PerlMonks users: here is a corrected example, with added printouts, which are cool.
Many thanks - Helen
# Based on: http://perldoc.perl.org/perlobj.html
use strict;
use warnings;
use parent;
use 5.014;

package A;
sub new {
    return bless {}, shift;
}
sub speak {
    say 'A::speak: entered';
    my $self = shift;
    # $self->SUPER::speak();
    say 'A';
    say 'A::speak: leaving';
}

package B;
use parent -norequire, 'A';
sub speak {
    say 'B::speak: entered';
    my $self = shift;
    $self->SUPER::speak();
    say 'B';
    say 'B::speak: leaving';
}

package C;
use parent -norequire, 'B';
sub speak {
    say 'C::speak: entered';
    my $self = shift;
    $self->SUPER::speak();
    say 'C';
    say 'C::speak: leaving';
}

my $c = C->new();
say 'calling C::speak';
$c->speak();
say 'returned from C::speak';
|
__label__pos
| 0.957625 |
Posts Tagged ‘right click not responding windows 10’
Fix: Right-Click Context Menu Not Showing / Responding in Windows
November 16th, 2016 by Admin
Mouse right-click not working on your desktop or Windows Explorer? Whenever you try to right-click anything on the desktop or in Windows Explorer / Start Menu, you might see no response at all and the context menu won’t open. In this tutorial we’ll show you several methods to fix the problem of right-click context menu not showing / responding in Windows 10, 8 and 7.
Method 1: Enable Windows Explorer’s Context Menu Using Group Policy
There is a chance that your Windows Explorer’s context menu is disabled by group policy setting. Here’s how to tweak it:
1. Press the Windows key + R to open the Run box. Type gpedit.msc and press Enter.
gpedit
2. In the Local Group Policy Editor window, navigate to: User Configuration -> Administrative Templates -> Windows Components, and then click on File Explorer (or Windows Explorer).
3. On the right side of the window, scroll down until you see the setting “Remove Windows Explorer’s Default Context Menu“. Double-click on it to modify.
explorer-context-menu-policy
4. Select either Not Configured or Disabled, and click OK. Reboot your computer and see if the right-click context menu now works.
enable-explorer-context-menu
If you have no access to Local Group Policy Editor, please use this registry hack instead to enable Windows Explorer’s context menu:
1. Press the Windows key + R to open the Run box. Type regedit and press Enter.
regedit-via-run
2. In the left pane of Registry Editor, browse down to the following key:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
3. Double-click the 32-bit DWORD value NoViewContextMenu on the right hand side, and set it to 0. (it will disable Windows Explorer’s context menu if you set NoViewContextMenu to 1)
remove-explorer-context-menu
Method 2: Remove Third-Party Shell Extensions from Context Menu
The right-click menu not showing issue might be caused by Shell Extensions. To fix it, try to disable all third-party shell extensions from the right-click context menu. This can be done using the software CCleaner.
ccleaner
Head over to the Piriform website and download the free version of CCleaner. After running CCleaner, click the Tools section in the left hand side. On the right hand side, click Startup and then click Context Menu. From there you can disable or delete any third-party shell extensions.
Method 3: System Restore
If you still couldn’t get the right-click context menu to work, restoring your system back to a previous working condition will be your good choice. To learn how to perform a system restore, please check out this article: Recover Unbootable Windows 10 or 8 with Restore Point.
|
__label__pos
| 0.610435 |
Combinations Generator
Enter a range of numbers (like 1-49) or a list of numbers to randomize (like 10 20 30 40 50). You can also mix ranges and list (like 1-10, 90-100). You can also add alphanumeric lists or words (like a,b,c or apple, orange, banana). If you have a range with negative numbers, you can enter it using a ':' (like -1000:-100). To generate a non-repeating sequence, generate same amount of numbers as present in the range. (e.g. 10 numbers from 1-10 will produce a shuffled sequence from 1-10)
Select Uniqueness and Order
For lottery numbers like Powerball: Uncheck "Order Matters" and check "Unique". For lottery numbers like Pick3: Check "Order Matters" and uncheck "Unique". For pin codes, passwords, etc: Check "Order Matters" and uncheck "Unique". For no repeats: Check "Unique". For numbers with replacement: Uncheck "Unique". If numbers to be generated per line are more than the numbers available in the range, the random number generator will automatically switch to allow numbers with replacement (i.e. duplicates)
Select Odd / Even
Select Format
This advanced random number generator lets you specify various options to tune the random numbers to your liking. The numbers generated are cryptographically strong (see "Cryptographically secure numbers"), generated using the JavaScript window.crypto method. The numbers are generated locally in the browser; they do not travel across any network and are not sourced from any single hardware device. On top of using the entropy provided by the window.crypto method (which is sufficient to produce cryptographically strong random numbers), it utilizes another algorithm to reduce per-machine bias even further.
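For the curious, the underlying technique of drawing raw bytes from a cryptographically secure source and mapping them into a range without modulo bias can be sketched in C (this illustrates the general method only, not this site's actual JavaScript implementation; /dev/urandom stands in for window.crypto):
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
/* Draw a uniform integer in [lo, hi] from /dev/urandom using rejection
 * sampling, so no value in the range is favoured by the modulo. */
long uniform_range(long lo, long hi)
{
    uint64_t span = (uint64_t)(hi - lo) + 1;
    uint64_t limit = UINT64_MAX - (UINT64_MAX % span); /* reject at or above this */
    uint64_t r;
    FILE *f = fopen("/dev/urandom", "rb");  /* reopened per call for simplicity */
    if (f == NULL) {
        perror("/dev/urandom");
        exit(1);
    }
    do {
        if (fread(&r, sizeof r, 1, f) != 1) {
            perror("fread");
            exit(1);
        }
    } while (r >= limit);
    fclose(f);
    return lo + (long)(r % span);
}
int main(void)
{
    for (int i = 0; i < 6; i++)  /* e.g. six lottery-style picks from 1-49 */
        printf("%ld ", uniform_range(1, 49));
    printf("\n");
    return 0;
}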
|
__label__pos
| 0.999225 |
This is related to this other question. I'm fetching a URL through a proxy using this simple code:
package main
import (
"fmt"
"io/ioutil"
"net/http"
"net/url"
)
func main() {
proxyUrl, err := url.Parse("87.236.233.92:8080")
httpClient := &http.Client { Transport: &http.Transport { Proxy: http.ProxyURL(proxyUrl) } }
response, err := httpClient.Get("http://stackoverflow.com")
if err != nil {
fmt.Println(err.Error())
} else {
body, _ := ioutil.ReadAll(response.Body)
fmt.Println("OK: ", len(body))
}
}
If I run this code, I am getting this error:
Get http://stackoverflow.com: http: error connecting to proxy 87.236.233.92:8080: GetServByName: The requested name is valid, but no data of the requested type was found.
I know that the proxy address is valid, and if I fetch the URL through the proxy by other means it works. Any idea why I'm getting this error?
up vote 7 down vote accepted
Specify your proxy with http:// in and it should work, eg
proxyUrl, err := url.Parse("http://87.236.233.92:8080")
if err != nil {
fmt.Println("Bad proxy URL", err)
return
}
Brilliant, thanks a lot, I was starting to think it was some bug in the go runtime. – this.lau_ Feb 3 '13 at 13:23
|
__label__pos
| 0.654309 |
Maple 15 Questions and Posts Maple 15 Questions and Posts Feed
These are Posts and Questions associated with the product, Maple 15
I have two polynomials f(x,y,z) and g(x,y,z) and ask MAPLE to find conditions on the coefficients of f and g such that the Jacobian determinant in x and y is purely a polynomial in z. MAPLE finds 4 solutions, one of which is g=0, but does not find the solution f=0. I attach the relevant MAPLE worksheet.
A mathematical animation of the material transport mechanism of the M 1022 class sewing machine. BELORUS.mw
Hi, I'm currently using Maple 15.
The code runs, but its output does not match the expected answer.
Here I attach the code along with the expected output.
Code:
derivation := proc (A, n)
local i, j, k, t, s1, s2, m, D, sols, eqns, Andre;
eqns := {};
D := matrix(n, n);
Andre := matrix(n, n);
for i to n-1 do
for j from i+1 to n do
for m to n do
s1 := sum(A[i, j, k]*D[m, k], k = 1 .. n);
s2 := sum(A[k, j, m]*D[k, i]+A[i, k, m]*D[k, j], k = 1 .. n);
eqns := `union`(eqns, {s1 = s2})
end do end do end do;
sols := [solve(eqns)];
t := nops(sols);
for i to t do
for j to n do
for k to n do
Andre[k, j] := subs(sols[i], D[k, j])
end do end do;
print(Andre)
end do end proc
the maple result showing:
> AS1 := array(sparse, 1 .. 2, 1 .. 2, 1 .. 2, [(1, 1, 2) = 1]);
> derivation(AS1, 2);
[D11 0]
[D21 D22]
> AS2 := array(sparse, 1 .. 2, 1 .. 2, 1 .. 2, [(1, 1, 1) = 1, (1, 2, 2) = 1]);
> derivation(AS2, 2);
[0 D12]
[D21 D22]
Maple should show:
> derivation(AS1, 2);
[D11 0]
[D21 2D11]
> AS2 := array(sparse, 1 .. 2, 1 .. 2, 1 .. 2, [(1, 1, 1) = 1, (1, 2, 2) = 1]);
> derivation(AS2, 2);
[0 0]
[D21 D22]
Please help. Thank you.
how to field plot this system?
restart;
with(Physics[Vectors]);
with(DEtools);
with(VectorCalculus);
eq2 := ...;
eq3 := ...;
eq4 := ...;
with(DynamicSystems);
sys := DiffEquation([eq2 = t, eq3 = t], inputvariable = [b(t)], outputvariable = [a(t), c(t)]);
ts := .1;
in_t := t;
sol := Simulate(sys, [in_t]);
with(DEtools):
dfieldplot([...],[a(t),b(t),c(t)],t=-2..2,a=-1..2,b=-1..2,c=-1..2,arrows=SLIM,color=black,dirfield=[10,10]);
Continuation.
One way to get a rolling-without-slipping animation in 3D. The trajectory and the circle are divided into segments of equal length. On the next segment of the trajectory we construct the circle, taking into account the fact that it has turned by one segment. A rolling sphere or cylinder can be simulated if we take plottools templates of the same radius and put them in place of our circle.
ROLLING_WITHOUT_3d.mw
The ordering of monomials is not the same every time after determinant, map sign positive, and op in Maple 15.
Sometimes I need to use Reverse or Rotate List to adjust.
Why is the ordering different in the list of monomials?
Is it caused by a virus?
A spiral (equidistant) around a curve; in this case, a spiral around a spiral.
Just for its own sake, with no particular application.
spiral_around_curve.mw
If we re-save the animation with the program Easy GIF Animator, its size is reduced by about 10 times, and sometimes much more.
polygon_2_color.mw
Simulating the coloring of both sides of a polygon in 3D. We build a new polygon parallel to our polygon at a very short distance t. (We need any three points in the polygon's plane that do not lie on a straight line.) This place in the program is highlighted in blue.
The polygons are painted in different colors.
In a post of April 15, 2013 by Kitonum, the procedure named Picture accepts a list of polygon segments, creates a plot of these as a 2D polygon's boundaries and fills the polygon with a color.
The code below attempts to modify Picture to produce a 3D filled polygon in a plane parallel to the xy plane.
When the procedure is invoked by the code below it, the fill color conforms to the straight-line boundaries but overflows the curved, parabolic boundary. How can this be corrected?
Picture:=proc(L, C, N::posint:=100, Boundary::list:=[linestyle=1])
local i, var, var1, var2, e, e1, e2, e3, P, h;
global Q, Border;
for i to nops(L) do
# set P[i] = list of points for each segment.
# for a segment defined as a list of points, P[i] = the segment's definition
# for a curve definition, approximate it with a list of [x,y] points of its function evaluated at N even intervals in its range
if type(L[i],listlist(algebraic)) then P[i]:=op(L[i]); else
# for a curve definition, set var = the variable and h = (variable range)/N
var:=lhs(L[i,2]); var1:=lhs(rhs(L[i,2])); var2:= rhs(rhs(L[i,2])); h:=(var2-var1)/(N);
# for a function definition, set e = the function
if type(L[i,1], algebraic) then e:=L[i,1];
# for a polar function r=f(t), create N values of [r*cos(t), r*sin(t)], i.e. the equivalent [x,y] values, for r valued at N even divisions of its range
if nops(L[i])=3 then P[i]:=seq(subs(var=var1+h*i,[e*cos(var), e*sin(var)]), i=0..N); else
# for a non-polar function y=f(x), create N values of [x,y] for x values at N even divisions of its range
P[i]:=seq([var1+h*i, subs(var=var1+h*i,e)], i=0..N) fi; else
# for a parametric function [f(t),g(t)], create N values of [f(t),g(t)] for t values at N even divisions of its range.
e1:=L[i,1,1]; e2:=L[i,1,2];
# P[i]:=seq(subs(var=var1+i*h,[e1, e2]), i=0..N):
P[i]:=seq([subs(var=var1+i*h,e1), subs(var=var1+i*h,e2),0], i=0..N) fi; fi; od; # MODIFIED FOR 3D: [f(t), g(t), 0]
Q:=[seq(P[i], i=1..nops(L))];
Border:=plottools[curve]([op(Q), Q[1]], op(Boundary));
# the shaded figure is a polygon whose vertices are Q, whose interior color is C
# return a list of the polygon and its border
[plottools[polygon](Q, C), Border];
end proc:
L := [[[0, 0, 0], [0, 1, 0]], [[x, x^2+1, 0], x = 0 .. 2], [[2, 5, 0], [2, 2, 0]], [[x, x, 0], x = 2 .. 0]]:
plots[display](Picture(L, color = yellow), axes = normal, scaling = constrained)
I separate the variables into real and imaginary parts, as follows:
restart:
Digits:=20:
------------------------- Defining the nature of the variables used ----------------------
assume(t,real):
x(0):=-1:y(0):=1:z(0):=conjugate(y(0)):N:=10:Delta:=5:omega:=10^(6):N1:=1+2*N:M:=sqrt(N*(N+1)):
t0:=0.0:tN:=30.0: M1:=5000;:th:=evalf((tN-t0)/M1):
5000
ini1:=u(0)=Re(y(0)), v(0)=Im(z(0)),w(0)=x(0);
u(0) = 1, v(0) = 0, w(0) = -1
var:={u(t),v(t),w(t)}:
dsys1 :=diff(w(t),t)=-(N1+M*cos(2*omega*t))*w(t)-1+2*u(t)*cos(2*omega*t)+2*v(t)*sin(2*omega*t), diff(u(t),t)=-N1*u(t)+Delta*v(t)-2*M+(2*M*u(t)-N1-w(t))*cos(2*omega*t)-2*M*v(t)*sin(2*omega*t), diff(v(t),t)=-N1*v(t)-Delta*u(t)-2*M+(2*M*u(t)-N1-w(t))*sin(2*omega*t)+2*M*v(t)*cos(2*omega*t):
dsol1 :=dsolve({dsys1,ini1},var,numeric, output=listprocedure, abserr=1e-9, relerr=1e-8,range=0..1,maxfun=5000):
Warning, cannot evaluate the solution further right of .46544244e-3, maxfun limit exceeded (see ?dsolve,maxfun for details)
dsolu:=subs(dsol1,u(t)):dsolv:=subs(dsol1,v(t)):dsolw:=subs(dsol1,w(t)):
t1:=array(0..M1,[]): u1:=array(0..M1,[]): v1:=array(0..M1,[]): w1:=array(0..M1,[]): pt1:=array(0..M1,[]):pt2:=array(0..M1,[]):pt3:=array(0..M1,[]):
for i from 0 to M1 do t1[i]:=evalf(th*i):u1[i]:=evalf(dsolu(t1[i]));v1[i]:=evalf(dsolv(t1[i])):w1[i]:=evalf(dsolw(t1[i])):pt1[i]:=[t1[i],u1[i]]:pt2[i]:=[t1[i],v1[i]]:pt3[i]:=[t1[i],w1[i]]:od:
Error, (in dsolu) cannot evaluate the solution further right of 0.46544244e-3, maxfun limit exceeded (see ?dsolve,maxfun for details)
with(plots):
unassign('i'):mytab1:=[seq(pt1[i],i=0..M1)]:mytab2:=[seq(pt2[i],i=0..M1)]:mytab3:=[seq(pt3[i],i=0..M1)]:
plot(mytab3,t=0..5,tickmarks=[6, 6],axes=boxed);
but I got an error
I have downloaded the zip file for CalcP7, unzipped it, and can access its commands in a worksheet after issuing the command with(CalcP7), but "No Matches Found" displays when entering the command ?CalcP7. The download included a file named "aplication" (one "p") of type HDB, but Maple15 can't seem to access its contents.
Are CalcP7's help pages displayable? If so, what is necessary to access them?
I have had no trouble downloading the user package DirectSearch and accessing both its commands and its help pages.
I tried to load my document containing some notes, but then I got the message "There were problems during the loading process, Your worksheet may become incomplete", and as the message said my worksheet were incomplete. Is there a way to restore the document? I have tried following this and added the line it suggested:
http://www.maplesoft.com/support/help/Maple/view.aspx?path=worksheetmaybeincomplete
But it didn't work.
I have attached the file.
Noter.mw
plots[implicitplot3d](max(-x+y+z, x-y+z, x+y-z) = 1.0, x = 0 .. 1, y = 0 .. 1, z = 0 .. 1);
The help page for max does not explain or show an example of max(sequence of expressions)= a constant.
PrimesQuestion.mw
Please let me know if this link correctly accesses my worksheet. If not, I will copy its contents into this question.
Which ODE in the worksheet, if any, provides the correct answer?
restart
f := proc (x) local t; if not type(evalf(x), 'numeric') then ('procname')(x) else evalf(Int(exp(-(1/10)*t^2), t = 0 .. x)) end if end proc
solA := dsolve({diff(y(x), x) = y(x)+f(x), y(0) = 0}, numeric, known = f)
solA(1)
[x = 1., y(x) = HFloat(0.7081492947996167)]
(1)
f2 := evalf(Int(exp(-(1/10)*t^2), t = 0 .. 1)); f(1)
.9676433126
.9676433126
(2)
solB := dsolve({diff(y(x), x) = y(x)+f2, y(0) = 0}, numeric, output = listprocedure)
solB(1)
[x(1) = 1., (y(x))(1) = HFloat(1.6626837619970016)]
(3)
YinSolB := subs(solB, y(x))
YinSolBeval := solve(YinSolB(a) = .7081, a); solB(YinSolBeval)
.5491485953
[x(.5491485953) = .5491485953, (y(x))(.5491485953) = HFloat(0.7081000000284681)]
(4)
NULL
|
__label__pos
| 0.98666 |
Excel Formula: Color Row Green If Remarks Show Paid
Formula: Write an Excel formula that colors a row green if the remarks show paid.
Formula Generator | 10 months ago
In Excel, you can use conditional formatting to highlight rows based on specific conditions. This guide will show you how to write an Excel formula that colors a row green if the remarks show paid. The formula uses the IF function and can be applied as a conditional formatting rule to easily identify rows that meet the specified condition. Let's dive into the details!
Excel Formula
=IF($D1="paid", TRUE, FALSE)
Formula Explanation
This formula uses the IF function to check if the value in column D of the current row is "paid". If it is, the formula returns TRUE, so the conditional formatting rule fires and the green fill you pick in the rule's Format settings is applied to the row. If the value in column D is not "paid", the formula returns FALSE and the row keeps its normal formatting. (A conditional formatting formula must evaluate to TRUE or FALSE; the color itself is chosen in the rule, not returned by the formula, so the shorter rule =$D1="paid" works too.)
Step-by-step explanation
1. The formula starts with the IF function, which has three arguments: the logical test, the value if true, and the value if false.
2. The logical test is $D1="paid", which checks if the value in column D of the current row is equal to "paid". The dollar sign before the column letter locks the column reference, so when the formula is applied to other rows, the column reference remains the same.
3. If the logical test is true, the formula returns TRUE. This is the value if true argument of the IF function.
4. If the logical test is false, the formula returns FALSE. This is the value if false argument of the IF function.
5. Apply the formula as a conditional formatting rule ("Use a formula to determine which cells to format"), choose a green fill in the rule's Format settings, and the rule will color the row green when the value in column D is "paid".
Example
Suppose we have the following data in columns A to D:
| A | B | C | D |
|-------|-------|-------|---------|
| 1 | Item1 | Desc1 | paid |
| 2 | Item2 | Desc2 | unpaid |
| 3 | Item3 | Desc3 | paid |
| 4 | Item4 | Desc4 | paid |
| 5 | Item5 | Desc5 | unpaid |
If we apply the conditional formatting rule =IF($D1="paid", TRUE, FALSE) (or simply =$D1="paid") with a green fill to the entire rows, the rows where the value in column D is "paid" will be colored green. In this example, rows 1, 3, and 4 will be colored green, indicating that the remarks show "paid".
This article was generated with AI. AI can make mistakes, consider checking important information.
|
__label__pos
| 0.999319 |
14 lines C++ solution
• 1
Z
class Solution {
public:
TreeNode *sortedArrayToBST(vector<int> &num) {
return build(num, 0, num.size()-1);
}
TreeNode *build(vector<int> &num, int begin, int end){
if(begin>end) return NULL;
int mid = (begin + end)/2;
TreeNode *n = new TreeNode(num[mid]);
n->left = build(num, begin, mid-1);
n->right = build(num, mid+1, end);
return n;
}
};
• 0
B
why is num.size()-1 instead of num.size() in line 4? I didn't see anything bad, but it gives me an index error on arrays.
Because end is used as an inclusive index: build reads num[mid] for mid between begin and end, so starting with num.size() would eventually access one element past the end of the array.
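To make the index convention explicit, here is the same recursion sketched in Python; with an inclusive end, the initial call must pass the last valid index, which is exactly why the C++ code uses num.size()-1:

class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def sorted_array_to_bst(nums):
    def build(begin, end):           # begin and end are both inclusive indices
        if begin > end:
            return None
        mid = (begin + end) // 2
        node = TreeNode(nums[mid])
        node.left = build(begin, mid - 1)
        node.right = build(mid + 1, end)
        return node
    return build(0, len(nums) - 1)   # inclusive end -> last valid index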
CppComet clustering
The comet server supports clustering, where every server in the cluster can accept requests and forward them to the cluster servers that need to be notified of an event. (You can draw an analogy with master-master replication in databases.)
Data insertion operations (insert and set) are performed asynchronously, which means you do not wait until the request has been distributed to all servers in the cluster.
Data selection operations (select and show) work synchronously, since they have to obtain the data and return it in the response.
To enable the clustering mechanism, add the parameters for connecting to the cluster servers to the configuration file, in the [ws] section and in the [cometqlproxy] section.
The clustering mechanism was developed recently. If you run into problems with setup or operation, or gaps in the documentation, don't hesitate to ask questions and file bug reports.
Configuring the [ws] section
The [ws] section has a cluster parameter, which is a list of connection parameters for the other cluster nodes. Each new connection is given on a new line. The square brackets at the start of a value indicate that this line does not override the previous value of the cluster parameter but appends one more element to the list of values.
; The parameters of the cluster (without spaces or anything else between the parameters, are case sensitive)
cluster = []Server=127.0.0.1,Database=CometQL_v1,Uid=root,Pwd=0000000000000000000000000000000000000000000000000000000000000000,Port=3311
cluster = []Server=127.0.0.1,Database=CometQL_v1,Uid=root,Pwd=0000000000000000000000000000000000000000000000000000000000000000,Port=3321
In the example above, the cluster parameter is assigned two connection strings to two other cluster nodes. A connection string consists of several parameters:
• Server - the comet server host name
• Database - the API version, always CometQL_v1 (until there is a version 2 or any other)
• Uid - the user name, usually root
• Pwd - the connection password
If our cluster consists of three nodes, the configuration must list the two neighboring nodes. This way, when data arrives through the JavaScript API, the other cluster servers are notified.
Configuring the [cometqlproxy] section
The cometqlproxy section is an interface for accessing the cluster through the CometQL API. In structure it is similar to the cometql section.
While in non-clustered mode all CometQL requests went to the cometql module on the port given in the [cometql] section, in cluster mode you can activate the cometqlproxy module on one or several cluster nodes and direct requests to it (the requests themselves do not have to change). On receiving a CometQL request, the cometqlproxy module parses it and, depending on the request, forwards it to one or several cluster nodes. If the request inserts data (insert and set), it is forwarded to one of the live cluster nodes. If the request reads data, the data is retrieved either from a single node or, in the worst case, all live nodes are queried, after which a combined answer is assembled from the responses of all nodes.
The cometqlproxy section also has a cluster parameter similar to its counterpart in the ws section, except that it must list all cluster nodes, not only the neighbors. That is, if your cluster consists of three nodes, the ws section of each node will contain two connection strings to the other nodes, while the cometqlproxy section will list all three cluster nodes.
This is done because the cometqlproxy module can run on a comet server instance purely in request-proxying mode, forwarding requests to other comet servers, without the cometql and WS modules also being brought up on that node.
[cometqlproxy]
; Clustering Functions
;
ip = 0.0.0.0
thread_num = 3 ; number of threads for receive message from cometql
statistics = 10
port = 3301
; The parameters of the cluster (without spaces or anything else between the parameters, are case sensitive)
cluster = []Server=127.0.0.1,Database=CometQL_v1,Uid=root,Pwd=0000000000000000000000000000000000000000000000000000000000000000,Port=3301
cluster = []Server=127.0.0.1,Database=CometQL_v1,Uid=root,Pwd=0000000000000000000000000000000000000000000000000000000000000000,Port=3311
cluster = []Server=127.0.0.1,Database=CometQL_v1,Uid=root,Pwd=0000000000000000000000000000000000000000000000000000000000000000,Port=3321
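Since CometQL speaks the MySQL wire protocol, any MySQL client can talk to the cometqlproxy port configured above. Below is a minimal Python sketch using pymysql; note that the pipes_messages table and its columns are an assumption used purely for illustration, and the password placeholder must be replaced with the Pwd value from your config:

import pymysql

# Connect to the cometqlproxy port from the [cometqlproxy] section above.
conn = pymysql.connect(host="127.0.0.1", port=3301, user="root",
                       password="<Pwd value from the config>",  # placeholder
                       database="CometQL_v1")
try:
    with conn.cursor() as cur:
        # An insert is forwarded asynchronously to one live cluster node ...
        cur.execute('INSERT INTO pipes_messages (name, event, message) '
                    'VALUES ("test_pipe", "update", "hello")')
        # ... while a select may fan out to all live nodes (names assumed).
        cur.execute('SELECT * FROM pipes_messages WHERE name = "test_pipe"')
        print(cur.fetchall())
finally:
    conn.close()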
Important note: If you’re using Linen, Paperpunch, Photography 1.x, React, Shelf, Titan, Traction, or Vigilance see the instructions for replacing functions in older themes at the bottom of this page.
Replacing theme functions
To replace a function, make sure you first have the child theme installed. Copy the function you wish to replace from the parent theme to the functions.php file in your child theme and make your modifications.
For example, to change the duet_page_title function, your child theme functions.php file would look like this:
<?php
add_action( 'wp_enqueue_scripts', 'duet_child_enqueue_scripts' );
function duet_child_enqueue_scripts() {
if ( ! is_admin() ) {
// Add parent theme stylesheet
wp_enqueue_style(
'duet_parent_style',
get_template_directory_uri() . '/style.css',
false,
null
);
}
}
// Your custom functions go here...
function duet_page_title( $title ) {
// Your customizations here
return "My custom title";
}
This is just the tip of the iceberg. Using this approach you may modify or remove any function from the parent theme from the safety of your child theme. (Note that redefining a parent function like this relies on the parent theme declaring its functions inside function_exists() checks, the usual "pluggable functions" pattern, so that the child theme's definition, which loads first, wins.)
Replacing theme functions in older themes
By default all of the theme specific functions in Linen, Paperpunch, Photography, React, Shelf, Titan, Traction, and Vigilance are loaded through the -extend.php file of your parent theme. These functions power different parts of your theme.
Let’s learn how to override a default function through a child theme. In this example, we will replace the printHeaderItems() function. First, we open up the functions.php file in our child theme and insert this code:
function load_traction_child_extend() {
global $traction;
remove_action( 'after_setup_theme', 'load_traction_pro_theme' );
locate_template( array( 'traction-child-extend.php' ), true );
if ( class_exists( 'MyChildTheme' ) ) {
$traction = new MyChildTheme;
}
}
add_action( 'after_setup_theme', 'load_traction_child_extend', 5 );
The code above first creates a new function to load our theme object from the child theme. Next, it removes the theme object loading function from our parent theme (so it doesn’t load twice). Finally, it calls a new file (we are about to create it) and loads a new class that will hold our custom functions from that file.
Create a brand new file called traction-child-extend.php in your child theme folder and insert the following code into that new file:
<?php
class MyChildTheme extends Traction {
function printHeaderItems() {
// Custom Code Goes Here...
}
}
This new printHeaderItems function in our child theme will replace / override the function from our parent theme.
What is the difference between an interface and an abstract class?
What exactly is the difference between an interface and an abstract class?
Interfaces
An interface is a contract: the person writing the interface says, "hey, I accept things looking that way", and the person using the interface says "OK, the class I write looks that way".
An interface is an empty shell. There are only the signatures of the methods, which implies that the methods do not have a body. The interface can't do anything. It's just a pattern.
For example (pseudo code):

// I say all motor vehicles should look like this:
interface MotorVehicle {
    void run();
    int getFuel();
}

// My team mate complies and writes vehicle looking that way
class Car implements MotorVehicle {
    int fuel;

    void run() {
        print("Wrroooooooom");
    }

    int getFuel() {
        return this.fuel;
    }
}

Implementing an interface consumes very little CPU, because it's not a class, just a bunch of names, and therefore there are no expensive look-ups to do. It's great when it matters, such as in embedded devices.
Abstract classes
Abstract classes, unlike interfaces, are classes. They are more expensive to use, because there is a look-up to do when you inherit from them.
Abstract classes look a lot like interfaces, but they have something more: you can define a behavior for them. It's more about a person saying, "these classes should look like that, and they have that in common, so fill in the blanks!".
For example:

// I say all motor vehicles should look like this:
abstract class MotorVehicle {
    int fuel;

    // They ALL have fuel, so let's implement this for everybody.
    int getFuel() {
        return this.fuel;
    }

    // That can be very different, force them to provide their
    // own implementation.
    abstract void run();
}

// My teammate complies and writes vehicle looking that way
class Car extends MotorVehicle {
    void run() {
        print("Wrroooooooom");
    }
}

Implementation
While abstract classes and interfaces are supposed to be different concepts, the implementations sometimes make that statement untrue. Sometimes, they are not even what you think they are.
In Java, this rule is strongly enforced, while in PHP, interfaces are abstract classes with no method declared.
In Python, abstract classes are more of a programming trick you can get from the ABC module, and they actually use metaclasses, and therefore classes. And interfaces are more related to duck typing in this language; it's a mix between conventions and special methods that call descriptors (the __method__ methods).
As usual with programming, there is theory, practice, and practice in another language :-)
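Since that answer mentions Python's ABC module, here is a minimal sketch of the trick it refers to; the class and method names are carried over from the pseudo code above, and the example is illustrative rather than canonical:

from abc import ABC, abstractmethod

class MotorVehicle(ABC):          # abstract class: a partially filled-in template
    def __init__(self, fuel: int):
        self.fuel = fuel

    def get_fuel(self) -> int:    # shared behavior, implemented once for everybody
        return self.fuel

    @abstractmethod
    def run(self) -> None:        # a blank each subclass must fill in
        ...

class Car(MotorVehicle):
    def run(self) -> None:
        print("Wrroooooooom")

# MotorVehicle(10)  -> TypeError: can't instantiate abstract class MotorVehicle
Car(10).run()        # prints "Wrroooooooom"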
The main technical differences between an abstract class and an interface are:
• Abstract classes can have constants, members, method stubs (methods without a body) and defined methods, whereas interfaces can only have constants and method stubs.
• Methods and members of an abstract class can be defined with any visibility, whereas all methods of an interface must be defined as public (they are defined public by default).
• When inheriting an abstract class, a concrete child class must define the abstract methods, whereas an abstract class can extend another abstract class, and abstract methods from the parent class don't have to be defined.
• Similarly, an interface extending another interface is not responsible for implementing methods from the parent interface. This is because interfaces cannot define any implementation.
• A child class can only extend a single class (abstract or concrete), whereas an interface can extend, or a class can implement, multiple other interfaces.
• A child class can define abstract methods with the same or less restrictive visibility, whereas a class implementing an interface must define the methods with the exact same (public) visibility.
An interface contains only the definition / signature of functionality, and if we have common functionality as well as common signatures, then we need to use an abstract class. By using an abstract class, we can provide behavior and functionality at the same time. Another developer inheriting an abstract class can use this functionality easily, as they only need to fill in the blanks.
Taken from:
http://www.dotnetbull.com/2011/11/difference-between-abstract-class-and.html
http://www.dotnetbull.com/2011/11/what-is-abstract-class-in-c-net.html http://www.dotnetbull.com/2011/11/what-is-interface-in-c-net.html
An explanation can be found here: http://www.developer.com/lang/php/article.php/3604111/PHP-5-OOP-Interfaces-Abstract-Classes-and-the-Adapter-Pattern.htm
An abstract class is a class that is only partially implemented by the programmer. It may contain one or more abstract methods. An abstract method is simply a function definition that serves to tell the programmer that the method must be implemented in a child class.
An interface is similar to an abstract class; indeed interfaces occupy the same namespace as classes and abstract classes. For that reason, you cannot define an interface with the same name as a class. An interface is a fully abstract class; none of its methods is implemented, and instead of a class subclassing from it, it is said to implement that interface.
Anyway, I find this explanation of interfaces somewhat confusing. A more common definition is: an interface defines a contract that implementing classes must fulfill. An interface definition consists of signatures of public members, without any implementing code.
Some important differences:
In tabular form: (the original answer presents them as a comparison-table image, captioned "Difference")
As stated by Joe from javapapers:
1. The main difference is that the methods of a Java interface are implicitly abstract and cannot have implementations. A Java abstract class can have instance methods that implement a default behavior.
2. Variables declared in a Java interface are by default final. An abstract class may contain non-final variables.
3. Members of a Java interface are public by default. A Java abstract class can have the usual flavors of class members like private, protected, etc.
4. A Java interface should be implemented using the keyword "implements"; a Java abstract class should be extended using the keyword "extends".
5. An interface can extend another Java interface only; an abstract class can extend another Java class and implement multiple Java interfaces.
6. A Java class can implement multiple interfaces but can extend only one abstract class.
7. An interface is absolutely abstract and cannot be instantiated; a Java abstract class also cannot be instantiated, but can be invoked if a main() exists.
8. In comparison with Java abstract classes, Java interfaces are slow, as they require extra indirection.
I don't want to highlight the differences, which have already been stated in many answers (regarding public static final modifiers for variables in an interface, and support for protected and private methods in abstract classes).
In simple terms, I would say:
Interface: to implement a contract by multiple unrelated objects
Abstract class: to implement the same or different behavior among multiple related objects
From the Oracle documentation
Consider using abstract classes if:
1. You want to share code among several closely related classes.
2. You expect that classes that extend your abstract class will have many common methods or fields, or will require access modifiers other than public (such as protected and private).
3. You want to declare non-static or non-final fields.
Consider using interfaces if:
1. You expect that unrelated classes will implement your interface. For example, many unrelated objects can implement a Serializable interface.
2. You want to specify the behavior of a particular data type, but are not concerned about who implements its behavior.
3. You want to take advantage of multiple inheritance of type.
An abstract class establishes an "is a" relationship with concrete classes. An interface provides a "has a" capability for classes.
If you are considering Java as the programming language, here are a few more updates:
Java 8 narrowed the gap between interface and abstract classes to some extent by providing the default method feature. The statement that an interface cannot have an implementation for a method is no longer valid now.
Refer to this documentation page for more details.
Take a look at this SE question for code examples to understand it better.
How should I have explained the difference between an interface and an abstract class?
The main point is that:
• Abstract is object-oriented. It offers the basic data an 'object' should have and/or the functions it should be able to do. It is concerned with the object's basic characteristics: what it has and what it can do. Hence objects which inherit from the same abstract class share the basic characteristics (generalization).
• Interface is functionality-oriented. It defines the functionalities an object should have. Regardless of what object it is, as long as it can do these functionalities, which are defined in the interface, it's fine. It ignores everything else. An object/class can contain several (groups of) functionalities; hence it is possible for one class to implement multiple interfaces.
I am constructing a building of 300 floors
The building's blueprint is an interface
• For example, Servlet (I)
Building constructed up to 200 floors – partially completed — abstract
• Partial implementation, for example generic and HTTP servlet
Building construction completed – concrete
• Full implementation, for example, your own servlet
Interface
• We know nothing about the implementation, only the requirements. We can go for an interface.
• Every method is public and abstract by default
• It is a 100% pure abstract class
• If we declare it public we cannot declare it private or protected
• If we declare it abstract we cannot declare it final, static, synchronized, strictfp or native
• Every interface variable is public, static and final
• Serialization and transient are not applicable, because we cannot create an instance of an interface
• Non-volatile, because it is final
• Every variable is static
• When we declare a variable inside an interface we need to initialize it at declaration time
• Instance and static blocks are not allowed
Abstract
• Partial implementation
• It has abstract methods; in addition, it can use concrete ones
• No restriction on abstract class method modifiers
• No restriction on abstract class variable modifiers
• We cannot declare other modifiers together with abstract
• No restriction on initializing variables
Taken from the DurgaJobs website
When you want to provide polymorphic behavior in an inheritance hierarchy, use abstract classes.
When you want polymorphic behavior for classes which are completely unrelated, use an interface.
Let's work on this question again:
The first thing to tell you is that 1/1 and 1*1 give the same result, but that does not mean that multiplication and division are the same. Obviously they have a good relationship, but mind you, the two are different.
I will point out the main differences; the rest has already been explained:
Abstract classes are useful for modeling a class hierarchy. At first glance of any requirement, we are partially clear on what exactly is to be built, but we know what to build. And so your abstract classes are your base classes.
Interfaces are useful for letting other hierarchies or classes know what I am capable of doing. And when you say you are capable of something, you must have that capability. Interfaces mark it as mandatory for a class to implement the same functionalities.
It's actually pretty simple.
You can think of an interface as a class which is only allowed to have abstract methods and nothing else.
So an interface can only "declare" and not define the behavior you want the class to have.
An abstract class allows you to both declare (using abstract methods) and define (using full method implementations) the behavior you want the class to have.
And a regular class only allows you to define, not declare, the behavior/actions you want the class to have.
One last thing,
In Java, you can implement multiple interfaces, but you can only extend one (abstract class or class)...
That means inheritance of defined behavior is restricted to only one per class... i.e., if you wanted a class that encapsulated behavior from classes A, B and C you would have to do the following: class A extends B, class C extends A... it's a bit of a roundabout way to have multiple inheritance...
Interfaces, on the other hand, you could simply do: interface C implements A, B
So, in effect, Java supports multiple inheritance only in "declared behavior", i.e. interfaces, and only single inheritance of defined behavior... unless you go round about in the way I described...
Hopefully that makes sense.
The only difference is that one can take part in multiple inheritance and the other cannot.
The definition of an interface has changed over time. Do you think an interface has only method declarations and is just a contract? What about static final variables and default definitions after Java 8?
Interfaces were introduced in Java because of the diamond problem with multiple inheritance, and that is what they are actually intended for.
Interfaces are constructs created to get around the multiple inheritance problem, and they can have abstract methods, default definitions and static final variables.
See Why does Java allow static final variables in interfaces when they are only intended to be contracts?.
Comparing an interface with an abstract class is wrong. There should be two other comparisons instead: 1) interface vs class and 2) abstract vs final class.
Interface vs Class
An interface is a contract between two objects. E.g., I'm a Postman and you're a Package to deliver. I expect you to know your delivery address. When someone gives me a Package, it has to know its delivery address:

interface Package {
    String address();
}

A class is a group of objects that obey the contract. E.g., I'm a box from the "Box" group and I obey the contract required by the Postman. At the same time I obey other contracts:

class Box implements Package, Property {
    @Override
    String address() {
        return "5th Street, New York, NY";
    }
    @Override
    Human owner() {
        // this method is part of another contract
    }
}

Abstract vs Final
An abstract class is a group of incomplete objects. They can't be used, because some parts are missing. E.g., I'm an abstract GPS-aware box: I know how to check my position on the map:

abstract class GpsBox implements Package {
    @Override
    public abstract String address();
    protected Coordinates whereAmI() {
        // connect to GPS and return my current position
    }
}

This class, if inherited/extended by another class, can be very useful. But on its own it is useless, since it cannot have objects. Abstract classes can be building blocks of final classes.
A final class is a group of complete objects, which can be used but cannot be modified. They know exactly how to work and what to do. E.g., I'm a Box that always goes to the address specified during its construction:

final class DirectBox implements Package {
    private final String to;
    public DirectBox(String addr) {
        this.to = addr;
    }
    @Override
    public String address() {
        return this.to;
    }
}

In most languages, like Java or C++, it is possible to have just a class, neither abstract nor final. Such a class can be inherited and can be instantiated. I don't think this is strictly in line with the object-oriented paradigm, though.
Again, comparing interfaces with abstract classes is not correct.
In short, the differences are the following:
Syntactic differences between interface and abstract class:
1. Methods and members of an abstract class can have any visibility. All methods of an interface must be public. // no longer valid from Java 9
2. A concrete child class of an abstract class must define all the abstract methods. An abstract child class can have abstract methods. An interface extending another interface does not have to provide a default implementation for the methods inherited from the parent interface.
3. A child class can extend only a single class. An interface can extend multiple interfaces. A class can implement multiple interfaces.
4. A child class can define abstract methods with the same or less restrictive visibility, whereas the class implementing an interface must define all the interface methods as public.
5. Abstract classes can have constructors but interfaces cannot.
6. Interfaces in Java 9 have private static methods.
In Interfaces ora:
public static – supportato
public abstract – supportato
public default – supportato
private static – supportato
private abstract – errore di compilazione
private default – errore di compilazione
private – supportato
Interface: turn (turn left, turn right.)
Abstract class: wheel.
Class: steering wheel; derives from wheel, exposes the turn interface.
One is for categorizing behavior that can be offered across a diverse range of things, the other is for modeling an ontology of things.
This is not really the answer to the original question, but once you have the answer to the difference between them, you will enter the when-to-use dilemma: when to use interfaces or abstract classes? When to use both?
I have limited knowledge of OOP, but seeing interfaces as the equivalent of an adjective in grammar has worked for me so far (correct me if this approach is wrong!). For example, interface names are like attributes or capabilities you can give to a class, and a class can have many of them: ISerializable, ICountable, IList, ICacheable, IHappy, ...
If you have common methods that can be used by multiple classes, go for abstract classes. Otherwise, if you want classes to follow a definite blueprint, go for interfaces.
The following examples demonstrate this.
An abstract class in Java:

abstract class animals {
    // They all love to eat. So let's implement this for everybody
    void eat() {
        System.out.println("Eating...");
    }
    // They make different sounds. They will provide their own implementation.
    abstract void sound();
}

class dog extends animals {
    void sound() {
        System.out.println("Woof Woof");
    }
}

class cat extends animals {
    void sound() {
        System.out.println("Meoww");
    }
}

And below is an implementation of an interface in Java:

interface Shape {
    void display();
    double area();
}

class Rectangle implements Shape {
    int length, width;
    Rectangle(int length, int width) {
        this.length = length;
        this.width = width;
    }
    @Override
    public void display() {
        System.out.println("****\n* *\n* *\n****");
    }
    @Override
    public double area() {
        return (double)(length*width);
    }
}

class Circle implements Shape {
    double pi = 3.14;
    int radius;
    Circle(int radius) {
        this.radius = radius;
    }
    @Override
    public void display() {
        System.out.println("O"); // :P
    }
    @Override
    public double area() {
        return (double)((pi*radius*radius)/2);
    }
}
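For contrast with the Java interface above, duck-typed languages can express the same contract structurally. Here is a rough sketch using Python's typing.Protocol; the names mirror the Java example, and this is an illustration rather than part of the original answer:

from typing import Protocol

class Shape(Protocol):            # structural "interface": no inheritance required
    def display(self) -> None: ...
    def area(self) -> float: ...

class Rectangle:                  # conforms to Shape without declaring it
    def __init__(self, length: int, width: int):
        self.length, self.width = length, width
    def display(self) -> None:
        print("****\n*  *\n*  *\n****")
    def area(self) -> float:
        return float(self.length * self.width)

def report(shape: Shape) -> None: # any object with the right methods is accepted
    shape.display()
    print(shape.area())

report(Rectangle(3, 4))           # prints the box and 12.0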
Some important key points in a nutshell:
1. Variables declared in a Java interface are by default final. Abstract classes can have non-final variables.
2. Variables declared in a Java interface are by default static. Abstract classes can have non-static variables.
3. Members of a Java interface are public by default. A Java abstract class can have the usual flavors of class members like private, protected, etc.
Inheritance is used for two purposes:
• To allow an object to regard parent-type data members and method implementations as its own.
• To allow a reference to an object of one type to be used by code which expects a reference to an object of the supertype.
In languages/frameworks which support generalized multiple inheritance, there is often little need to classify a type as either an "interface" or an "abstract class". Popular languages and frameworks, however, will allow a type to regard one other type's data members or method implementations as its own, even though they allow a type to be substitutable for an arbitrary number of other types.
Abstract classes may have data members and method implementations, but can only be inherited by classes which don't inherit from any other class. Interfaces put almost no restrictions on the types which implement them, but cannot include any data members or method implementations.
There are times when it's useful for types to be substitutable for many different things; there are other times when it's useful for objects to regard parent-type data members and method implementations as their own. Making a distinction between interfaces and abstract classes allows each of these abilities to be used in the cases where it is most relevant.
Key points:
• An abstract class can have both properties and data fields, and methods (complete / incomplete).
• If a method or property is defined with the abstract keyword, it must be overridden in the derived class. (It works as tightly coupled functionality.)
• If you define the abstract keyword for a method or property in an abstract class, you cannot define the method body or the get/set value of the property, and it must be overridden in the derived class.
• An abstract class does not support multiple inheritance.
• An abstract class contains constructors.
• An abstract class can contain access modifiers for subs, functions, and properties.
• Only a complete member of an abstract class can be static.
• An interface can inherit only from another interface and cannot inherit from an abstract class, whereas an abstract class can inherit from another abstract class or another interface.
Advantages:
• It is a kind of contract that forces all subclasses to carry on the same hierarchies or standards.
• If various implementations are of the same kind and use common behavior or state, an abstract class is better to use.
• If we add a new method to an abstract class, we have the option of providing a default implementation, and therefore all the existing code may keep working properly.
• It allows faster execution than an interface (an interface requires more time to find the actual method in the corresponding classes).
• It can be used for both tight and loose coupling.
Find the details here... http://pradeepatkari.wordpress.com/2014/11/20/interface-and-abstract-class-in-c-oops/
The shortest way to sum it up is that an interface is:
1. Fully abstract, apart from default and static methods; while it has definitions (signatures + implementations) for default and static methods, it only has declarations (method signatures) for the other methods.
2. Subject to laxer rules than classes (a class can implement multiple interfaces, and an interface can inherit from multiple interfaces). All variables are implicitly constants, whether specified as public static final or not. All members are implicitly public, whether specified as such or not.
3. Generally used as a guarantee that the implementing class will have the specified features and/or be compatible with any other class implementing the same interface.
Meanwhile, an abstract class is:
1. Anywhere from fully abstract to fully implemented, with a tendency to have one or more abstract methods. It can contain both declarations and definitions, with declarations marked as abstract.
2. A full-fledged class, subject to the rules that govern other classes (it can inherit from only one class), with the proviso that it cannot be instantiated (because it is not guaranteed to be fully implemented). It can have non-constant member variables. It can implement member access control, restricting members as protected, private, or package-private (unspecified).
3. Generally used either to provide as much of the implementation as can be shared by multiple subclasses, or to provide as much of the implementation as the programmer is able to supply.
Or, if we want to boil it all down to a single sentence: An interface is what the implementing class has , but an abstract class is what the subclass is .
Many junior developers make the mistake of thinking of interfaces, abstract and concrete classes as slight variations of the same thing, and choose one of them purely on technical grounds: Do I need multiple inheritance? Do I need some place to put common methods? Do I need to bother with something other than just a concrete class? This is wrong, and hidden in these questions is the main problem: "I". When you write code for yourself, by yourself, you rarely think of other present or future developers working on or with your code.
Interfaces and abstract classes, although apparently similar from a technical point of view, have completely different meanings and purposes.
Summary
1. An interface defines a contract that some implementation will fulfill for you .
2. An abstract class provides a default behavior that your implementation can reuse.
Alternative summary
1. An interface is for defining public APIs
2. An abstract class is for internal use, and for defining SPIs
On the importance of hiding implementation details
A concrete class does the actual work, in a very specific way. For example, an ArrayList uses a contiguous area of memory to store a list of objects in a compact manner which offers fast random access, iteration, and in-place changes, but is terrible at insertions, deletions, and occasionally even additions; meanwhile, a LinkedList uses double-linked nodes to store a list of objects, which instead offers fast iteration, in-place changes, and insertion/deletion/addition, but is terrible at random access. These two types of lists are optimized for different use cases, and it matters a lot how you’re going to use them. When you’re trying to squeeze performance out of a list that you’re heavily interacting with, and when picking the type of list is up to you, you should carefully pick which one you’re instantiating.
On the other hand, high level users of a list don’t really care how it is actually implemented, and they should be insulated from these details. Let’s imagine that Java didn’t expose the List interface, but only had a concrete List class that’s actually what LinkedList is right now. All Java developers would have tailored their code to fit the implementation details: avoid random access, add a cache to speed up access, or just reimplement ArrayList on their own, although it would be incompatible with all the other code that actually works with List only. That would be terrible… But now imagine that the Java masters actually realize that a linked list is terrible for most actual use cases, and decided to switch over to an array list for their only List class available. This would affect the performance of every Java program in the world, and people wouldn’t be happy about it. And the main culprit is that implementation details were available, and the developers assumed that those details are a permanent contract that they can rely on. This is why it’s important to hide implementation details, and only define an abstract contract. This is the purpose of an interface: define what kind of input a method accepts, and what kind of output is expected, without exposing all the guts that would tempt programmers to tweak their code to fit the internal details that might change with any future update.
An abstract class is in the middle between interfaces and concrete classes. It is supposed to help implementations share common or boring code. For example, AbstractCollection provides basic implementations for isEmpty based on size is 0, contains as iterate and compare, addAll as repeated add, and so on. This lets implementations focus on the crucial parts that differentiate between them: how to actually store and retrieve data.
APIs versus SPIs
Interfaces are low-cohesion gateways between different parts of code. They allow libraries to exist and evolve without breaking every library user when something changes internally. It’s called Application Programming Interface , not Application Programming Classes. On a smaller scale, they also allow multiple developers to collaborate successfully on large scale projects, by separating different modules through well documented interfaces.
Abstract classes are high-cohesion helpers to be used when implementing an interface, assuming some level of implementation details. Alternatively, abstract classes are used for defining SPIs, Service Provider Interfaces.
The difference between an API and an SPI is subtle, but important: for an API, the focus is on who uses it, and for an SPI the focus is on who implements it.
Adding methods to an API is easy; all existing users of the API will still compile. Adding methods to an SPI is hard, since every service provider (concrete implementation) will have to implement the new methods. If interfaces are used to define an SPI, a provider will have to release a new version whenever the SPI contract changes. If abstract classes are used instead, new methods could either be defined in terms of existing abstract methods, or as empty throw not implemented exception stubs, which will at least allow an older version of a service implementation to still compile and run.
A note on Java 8 and default methods
Although Java 8 introduced default methods for interfaces, which makes the line between interfaces and abstract classes even blurrier, this wasn't so that implementations can reuse code, but to make it easier to change interfaces that serve both as an API and as an SPI (or are wrongly used for defining SPIs instead of abstract classes).
Which one to use?
1. Is the thing supposed to be publicly used by other parts of the code, or by other external code? Add an interface to it to hide the implementation details from the public abstract contract, which is the general behavior of the thing.
2. Is the thing something that’s supposed to have multiple implementations with a lot of code in common? Make both an interface and an abstract, incomplete implementation.
3. Is there ever going to be only one implementation, and nobody else will use it? Just make it a concrete class.
1. "ever" is a long time; you could play it safe and still add an interface on top of it.
A corollary: the other way around is often wrongly done: when using a thing, always try to use the most generic class/interface that you actually need. In other words, don't declare your variables as ArrayList theList = new ArrayList(), unless you actually have a very strong dependency on it being an array list, and no other type of list would cut it for you. Use List theList = new ArrayList() instead, or even Collection theCollection = new ArrayList() if the fact that it's a list, and not any other type of collection, doesn't actually matter.
By definition, interfaces cannot have an implementation for any methods, and member variables cannot be initialized.
However, abstract classes can have methods implemented and member variables initialized.
Use abstract classes when you expect changes in your contract, i.e., say in the future you might need to add a new method.
In this situation, if you decide to use an interface instead, when the interface is changed to include the new method, your application will break as soon as you deploy the new interface DLL.
To read in detail, visit the difference between an abstract class and an interface.
I'd like to add one more difference which makes sense. For example, you have a framework with thousands of lines of code. Now if you want to add a new feature throughout the code using a method enhanceUI(), then it's better to add that method in an abstract class rather than in an interface. Because if you add this method in an interface, you would have to implement it in every implementing class, but that's not the case if you add the method to an abstract class.
Differences between abstract class and interface in terms of real implementation.
Interface: It is a keyword and it is used to define the template or blueprint of an object, and it forces all the subclasses to follow the same prototype; as far as implementation goes, all the subclasses are free to implement the functionality as per their requirements.
Some other use cases where we should use an interface:
Communication between two external objects (third-party integration in our application) is done through an interface; here the interface works as a contract.
Abstract class: Abstract is a keyword, and when we use this keyword before any class it becomes an abstract class. It is mainly used when we need to define the template as well as some default functionality of an object that is followed by all the subclasses; this way it removes redundant code. One more use case: when we want no other class to be able to instantiate an object of the class directly, and only derived classes to use the functionality.
Example of Abstract Class:
public abstract class DesireCar
{
    // It is an abstract method that defines the prototype.
    public abstract void Color();

    // It is a default implementation of a Wheel method, as all the desire cars
    // have the same no. of wheels; hence there is no need to define this in all
    // the subclasses, and this way it avoids code duplication.
    public void Wheel()
    {
        Console.WriteLine("Car has four wheel");
    }
}

// Here are the subclasses:

public class DesireCar1 : DesireCar
{
    public override void Color()
    {
        Console.WriteLine("This is a red color Desire car");
    }
}

public class DesireCar2 : DesireCar
{
    public override void Color()
    {
        Console.WriteLine("This is a white color Desire car");
    }
}
Example Of Interface:
public interface IShape
{
    // Defines the prototype (template)
    void Draw();
}

// All the subclasses follow the same template but implementations can differ.

public class Circle : IShape
{
    public void Draw()
    {
        Console.WriteLine("This is a Circle");
    }
}

public class Rectangle : IShape
{
    public void Draw()
    {
        Console.WriteLine("This is a Rectangle");
    }
}
Below you can find a clear difference between an interface and an abstract class.
Interface
• Interface only contains abstract methods.
• Forces users to implement all methods when implementing the interface.
• Contains only final and static variables.
• Declare using interface keyword.
• All methods of an interface must be defined as public.
• An interface can extend or a class can implement multiple other interfaces.
Abstract class
• Abstract class contains abstract and non-abstract methods.
• Does not force users to implement all methods when inheriting the abstract class.
• Contains all kinds of variables including primitive and non-primitive
• Declare using abstract keyword.
• Methods and members of an abstract class can be defined with any visibility.
• A child class can only extend a single class (abstract or concrete).
Here is a very basic understanding over interface vs abstract class.
An abstract class is a class whose object cannot be created, i.e., a class which cannot be instantiated. An abstract method makes a class abstract. An abstract class needs to be inherited in order to override the methods that are declared in it. There is no restriction on access specifiers. An abstract class can have constructors and other concrete (non-abstract) methods in it, but an interface cannot.
An interface is a blueprint/template of methods. (E.g., a house on paper is given (the house interface), and different architects will use their own ideas to build it (the architects' classes implementing the house interface).) It is a collection of abstract methods, default methods, static methods, final variables and nested classes. All members will be either final or public; protected and private access specifiers are not allowed. No object creation is allowed. A class has to be made in order to use the implementing interface and also to override the abstract methods declared in the interface. An interface is a good example of loose coupling (dynamic polymorphism/dynamic binding). An interface implements polymorphism and abstraction: it tells what to do, but how to do it is defined by the implementing class. E.g., there's a car company and it wants some features to be the same for all the cars it manufactures, so for that the company would make a vehicle interface which will have those features, and different classes of car (like Maruti Suzuki, Maruti 800) will override those features (functions).
Why have interfaces when we already have abstract classes? Java supports only multilevel and hierarchical inheritance, but with the help of interfaces we can implement multiple inheritance.
To give a simple but clear answer, it helps to set the context: you use both when you do not want to provide full implementations.
The main difference, then, is that an interface has no implementation at all (only methods without a body) while abstract classes can have members and methods with a body as well, i.e., they can be partially implemented.
In an interface, all methods must be only declarations; not a single one should be implemented.
But in an abstract class there must be an abstract method with only a declaration, while other methods can also be in the abstract class with implementations...
I read a simple yet effective explanation of Abstract class and Interface on php.net
Which is as follows.
An Interface is like a protocol. It doesn’t designate the behavior of the object; it designates how your code tells that object to act. An interface would be like the English Language: defining an interface defines how your code communicates with any object implementing that interface.
An interface is always an agreement or a promise. When a class says “I implement interface Y”, it is saying “I promise to have the same public methods that any object with interface Y has”.
On the other hand, an Abstract Class is like a partially built class. It is much like a document with blanks to fill in. It might be using English, but that isn’t as important as the fact that some of the document is already written.
An abstract class is the foundation for another object. When a class says “I extend abstract class Y”, it is saying “I use some methods or properties already defined in this other class named Y”.
You would have your class implement a particular interface if you were distributing a class to be used by other people. The interface is an agreement to have a specific set of public methods for your class.
You would have your class extend an abstract class if you (or someone else) wrote a class that already had some methods written that you want to use in your new class.
These concepts, while easy to confuse, are specifically different and distinct. For all intents and purposes, if you're the only user of any of your classes, you don't need to implement interfaces.
Single-threaded processes contain the execution of instructions in a single sequence. In other words, one command is processed at a time.
The opposite of single-threaded processes are multithreaded processes. These processes allow the execution of multiple parts of a program at the same time. Threads are lightweight processes available within the process.
Multithreaded Processes Implementation
Multithreaded processes can be implemented as user-level threads or kernel-level threads. Details about these are provided using the following diagram:
Multithreaded Processes
1. User-level Threads
User-level threads are implemented by users, and the kernel is not aware of the existence of these threads. It handles them as if they were single-threaded processes. User-level threads are small and much faster than kernel-level threads. Also, there is no kernel involvement in synchronization for user-level threads.
2. Kernel-level Threads
Kernel-level threads are handled by the operating system directly, and the thread management is done by the kernel. The context information for the process as well as the process threads is all managed by the kernel. Because of this, kernel-level threads are slower than user-level threads.
Advantages of Multithreaded Processes
Some of the advantages of multithreaded processes are given as follows:
1. All the threads of a process share its resources such as memory, data, files etc. A single application can have different threads within the same address space using resource sharing (see the sketch after this list).
2. It is more economical to use threads as they share the process resources. Comparatively, it is more expensive and time consuming to create processes as they require more memory and resources.
3. Program responsiveness allows a program to run even if part of it is blocked using multithreading. This can also be done if the process is performing a lengthy operation.
4. In a multiprocessor architecture, each thread can run on a different processor in parallel using multithreading. This increases concurrency of the system. This is in direct contrast to a single processor system, where only one process or thread can run on a processor at a time.
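To make points 1 and 4 above concrete, here is a minimal Python sketch of several threads of one process sharing the same data structure; the word-counting workload is invented purely for illustration:

import threading

counts = {}                    # shared by all threads of the process
lock = threading.Lock()        # concurrency must be handled explicitly

def count_words(chunk):
    for word in chunk.split():
        with lock:             # unsynchronized updates would race
            counts[word] = counts.get(word, 0) + 1

chunks = ["a b a", "b c", "a c c"]
threads = [threading.Thread(target=count_words, args=(c,)) for c in chunks]
for t in threads: t.start()
for t in threads: t.join()
print(counts)                  # {'a': 3, 'b': 2, 'c': 3}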
Disadvantages of Multithreaded Processes
Some of the disadvantages of multithreaded processes are given as follows:
1. Multithreaded processes are quite complicated. Coding for these can only be handled by expert programmers.
2. It is difficult to handle concurrency in multithreaded processes. This may lead to complications and future problems.
3. Identification and correction of errors is much more difficult in multithreaded processes as compared to single threaded processes.
Rank correlation
In statistics, a rank correlation is any of several statistics that measure the relationship between rankings of different ordinal variables or different rankings of the same variable, where a "ranking" is the assignment of the labels "first", "second", "third", etc. to different observations of a particular variable. A rank correlation coefficient measures the degree of similarity between two rankings, and can be used to assess the significance of the relation between them. For example, two common nonparametric methods of significance that use rank correlation are the Mann–Whitney U test and the Wilcoxon signed-rank test.
Context
If, for example, one variable is the identity of a college basketball program and another variable is the identity of a college football program, one could test for a relationship between the poll rankings of the two types of program: do colleges with a higher-ranked basketball program tend to have a higher-ranked football program? A rank correlation coefficient can measure that relationship, and the measure of significance of the rank correlation coefficient can show whether the measured relationship is small enough to likely be a coincidence.
If there is only one variable, the identity of a college football program, but it is subject to two different poll rankings (say, one by coaches and one by sportswriters), then the similarity of the two different polls' rankings can be measured with a rank correlation coefficient.
Correlation coefficients
Some of the more popular rank correlation statistics include
1. Spearman's ρ
2. Kendall's τ
3. Goodman and Kruskal's γ
An increasing rank correlation coefficient implies increasing agreement between rankings. The coefficient is inside the interval [−1, 1] and assumes the value:
• 1 if the agreement between the two rankings is perfect; the two rankings are the same.
• 0 if the rankings are completely independent.
• −1 if the disagreement between the two rankings is perfect; one ranking is the reverse of the other.
Following Diaconis (1988), a ranking can be seen as a permutation of a set of objects. Thus we can look at observed rankings as data obtained when the sample space is (identified with) a symmetric group. We can then introduce a metric, making the symmetric group into a metric space. Different metrics will correspond to different rank correlations.
General correlation coefficient
Kendall (1944) showed that his \tau (tau) and Spearman's \rho (rho) are particular cases of a general correlation coefficient.
Suppose we have a set of n objects, which are being considered in relation to two properties, represented by x and y, forming the sets of values \{x_i\}_{i\le n} and \{y_i\}_{i\le n}. To any pair of individuals, say the i-th and the j-th, we assign an x-score, denoted by a_{ij}, and a y-score, denoted by b_{ij}. The only requirement made of these functions is anti-symmetry, so a_{ij}=-a_{ji} and b_{ij}=-b_{ji}. Then the generalised correlation coefficient \Gamma is defined by
\Gamma = \frac{\sum_{i,j = 1}^n a_{ij}b_{ij}}{\sqrt{\sum_{i,j = 1}^n a_{ij}^2 \sum_{i,j = 1}^n b_{ij}^2}}
Kendall's \tau as a particular case
If r_i is the rank of the i-th member according to the x-quality, we can define
a_{ij} = \sgn(r_j-r_i)
and similarly for b. The sum \sum a_{ij}b_{ij} equals twice the number of concordant pairs minus twice the number of discordant pairs (see Kendall tau rank correlation coefficient). The sum \sum a_{ij}^2 is just the number of nonzero terms a_{ij}, equal to n(n-1) when there are no ties, and similarly for \sum b_{ij}^2. It follows that \Gamma is equal to Kendall's \tau coefficient.
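A small Python check that the generalised coefficient with these scores reproduces Kendall's \tau (the two rankings are arbitrary; no ties are assumed):

from itertools import combinations

r = [1, 2, 3, 4, 5]                      # ranks by the x-quality
s = [2, 1, 4, 3, 5]                      # ranks by the y-quality
n = len(r)
sgn = lambda v: (v > 0) - (v < 0)

# Generalised coefficient with a_ij = sgn(r_j - r_i), b_ij = sgn(s_j - s_i)
num = sum(sgn(r[j] - r[i]) * sgn(s[j] - s[i])
          for i in range(n) for j in range(n))
gamma = num / (n * (n - 1))              # sum a_ij^2 = sum b_ij^2 = n(n-1) without ties

# Direct concordant-minus-discordant definition of tau
nc = sum(sgn(r[j] - r[i]) == sgn(s[j] - s[i]) for i, j in combinations(range(n), 2))
nd = n * (n - 1) // 2 - nc
tau = (nc - nd) / (n * (n - 1) / 2)

assert abs(gamma - tau) < 1e-12          # both give 0.6 here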
Spearman's \rho as a particular case
If r_i, s_i are the ranks of the i-th member according to the x- and the y-quality respectively, we can simply define
a_{ij} = r_j-r_i
b_{ij} = s_j-s_i
The sums \sum a_{ij}^2 and \sum b_{ij}^2 are equal, since both r_i and s_i range from 1 to n. Then we have:
\Gamma = \frac{\sum (r_j-r_i)(s_j-s_i)}{\sum(r_j-r_i)^2}
now
\sum_{i,j = 1}^n (r_j-r_i)(s_j-s_i)= \sum_{i=1}^n \sum_{j=1}^n r_is_i + \sum_{i=1}^n \sum_{j=1}^n r_js_j - \sum_{i=1}^n \sum_{j=1}^n (r_is_j+r_js_i)
=2n\sum_{i=1}^n r_is_i - 2 \sum_{i=1}^n r_i \sum_{j=1}^n s_j
=2n\sum_{i=1}^n r_is_i - \frac12 n^2(n+1)^2
since \sum r_i and \sum s_j are both equal to the sum of the first n natural numbers, namely \frac12n(n+1).
We also have
S = \sum_{i=1}^n (r_i-s_i)^2 = 2 \sum r_i^2 - 2\sum r_is_i
and hence
\sum(r_j-r_i)(s_j-s_i) = 2n\sum r_i^2 - \frac12n^2(n+1)^2 - nS
\sum r_i^2 being the sum of squares of the first n naturals equals \frac16n(n+1)(2n+1). Thus, the last equation reduces to
\sum(r_j-r_i)(s_j-s_i) = \frac16n^2(n^2-1) - nS
Further
\sum(r_j-r_i)^2 = 2n\sum r_i^2-2\sum r_ir_j
= 2n\sum r_i^2-2(\sum r_i)^2 = \frac16n^2(n^2-1)
and thus, substituting into the original formula these results we get
\Gamma_R = 1-\frac{6\sum d_i^2}{n^3-n}
where d_i = r_i - s_i is the difference between the ranks of the i-th member.
which is exactly the Spearman's rank correlation coefficient \rho.
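A quick numerical check of the formula in Python (the two rankings are arbitrary):

r = [1, 2, 3, 4, 5]          # ranks by the first quality
s = [2, 1, 4, 3, 5]          # ranks by the second quality
n = len(r)

d2 = sum((ri - si) ** 2 for ri, si in zip(r, s))   # sum of squared rank differences
rho = 1 - 6 * d2 / (n ** 3 - n)
print(rho)                   # 0.8 for these rankings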
Rank-biserial correlation
Gene Glass (1965) noted that the rank-biserial can be derived from Spearman's \rho: "One can derive a coefficient defined on X, the dichotomous variable, and Y, the ranking variable, which estimates Spearman's rho between X and Y in the same way that biserial r estimates Pearson's r between two normal variables" (p. 91). The rank-biserial correlation had been introduced nine years before by Edward Cureton (1956) as a measure of rank correlation when the ranks are in two groups.
Kerby simple difference formula
Dave Kerby (2014) recommended the rank-biserial as the measure to introduce students to rank correlation, because the general logic can be explained at an introductory level. The rank-biserial is the correlation used with the Mann–Whitney U test, a method commonly covered in introductory college courses on statistics. The data for this test consists of two groups; and for each member of the groups, the outcome is ranked for the study as a whole.
Kerby showed that this rank correlation can be expressed in terms of two concepts: the percent of data that support a stated hypothesis, and the percent of data that do not support it. The Kerby simple difference formula states that the rank correlation can be expressed as the difference between the proportion of favorable evidence (f) minus the proportion of unfavorable evidence (u).
r = f - u
Example and interpretation
To illustrate the computation, suppose a coach trains long-distance runners for one month using two methods. Group A has 5 runners, and Group B has 4 runners. The stated hypothesis is that method A produces faster runners. The race to assess the results finds that the runners from Group A do indeed run faster, with the following ranks: 1, 2, 3, 4, and 6. The slower runners from Group B thus have ranks of 5, 7, 8, and 9.
The analysis is conducted on pairs, defined as a member of one group compared to a member of the other group. For example, the fastest runner in the study is a member of four pairs: (1,5), (1,7), (1,8), and (1,9). All four of these pairs support the hypothesis, because in each pair the runner from Group A is faster than the runner from Group B. There are a total of 20 pairs, and 19 pairs support the hypothesis. The only pair that does not support the hypothesis is the one formed by the runners with ranks 5 and 6, because in this pair the runner from Group B had the faster time. By the Kerby simple difference formula, 95% of the data support the hypothesis (19 of 20 pairs), and 5% do not (1 of 20 pairs), so the rank correlation is r = .95 - .05 = .90.
The maximum value for the correlation is r = 1, which means that 100% of the pairs favor the hypothesis. A correlation of r = 0 indicates that half the pairs favor the hypothesis and half do not; in other words, the sample groups do not differ in ranks, so there is no evidence that they come from two different populations. An effect size of r = 0 can be said to describe no relationship between group membership and the members' ranks.
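The pair counting above is mechanical enough to script; here is a minimal Python sketch using the ranks from the runner example:
from itertools import product

group_a = [1, 2, 3, 4, 6]  # ranks of the five Group A runners
group_b = [5, 7, 8, 9]     # ranks of the four Group B runners

pairs = list(product(group_a, group_b))    # 5 * 4 = 20 pairs
favorable = sum(a < b for a, b in pairs)   # pairs where the Group A runner is faster
f = favorable / len(pairs)                 # 19/20 = 0.95
u = 1 - f                                  # 1/20 = 0.05
print(f - u)                               # 0.90, the rank-biserial correlation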
References
• Cureton, E. E. (1956). Rank-biserial correlation. Psychometrika, 21, 287-290. doi:10.1007/BF02289138
• Everitt, B. S. (2002). The Cambridge Dictionary of Statistics. Cambridge: Cambridge University Press. ISBN 0-521-81099-X
• Diaconis, P. (1988). Group Representations in Probability and Statistics. Lecture Notes-Monograph Series. Hayward, CA: Institute of Mathematical Statistics. ISBN 0-940600-14-5
• Glass, G. V. (1965). A ranking variable analogue of biserial correlation: implications for short-cut item analysis. Journal of Educational Measurement, 2(1), 91-95. doi:10.1111/j.1745-3984.1965.tb00396.x
• Kendall, M. G. (1970). Rank Correlation Methods. London: Griffin. ISBN 0-85264-199-0
• Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.CP.3.1
Alois Kraus' blog
Why .NET Tracing is not reliable
This time I want to write about something in the .NET Framework that is not solved optimally. My unsuspecting victim is the well known System.Diagnostics.Trace class. Static classes have the big plus that they are easy to use, but they can be very hard to maintain if you want to extend them (no derivation), and they have a lifetime problem with the objects that are created inside them. The Trace facade is in my opinion a prominent example of how you should not design a reliable static facade class. It has many features that are nice, but it is at the same time a fine example of a design that has lost its vision. It tries to do too many things at once without arriving at the optimal solution for all the goals it might have. Tracing should be fast, right? The .NET Trace is definitely not the fastest solution. Yes, it is flexible and can be configured very well inside your App.config file. But if you have used the Microsoft Logging Application Block of Enterprise Library 2.0 once, then you know how much farther design for flexibility can be taken. I must admit, though, that the Entlib TraceListeners do derive from the .NET ones. Below is a +/- table about the .NET Trace regarding Performance, Reliability and Extensibility.
In the following article I will show you what reliability issues there are and how to solve them.
Performance
+ TraceSwitches allow configuring at run time what should be traced.
- DefaultTraceListener uses OutputDebugString which is by far the slowest method to trace.

Reliability
- I lose traces if I do not call Trace.Close/Flush from time to time.
- The static Trace.Close is a mistake because I cannot really close something that is static.
- A Trace.Write after a Trace.Close causes loss of all following traces in case of file based listeners. (+ Some listeners do reopen themselves after being closed. The file listener can do this if you init it with a file name.)
- It is nearly impossible to write a program that captures all traces even during application shutdown.

Extensibility/Ease of use
+ Dynamic Trace Switch Configuration
+ Dynamic Trace Listener Configuration
- Other (logging) solutions are far more flexible.
The most obvious reliability problem I found is demonstrated with a very simple console application:
class Program
{
static void Main(string[] args)
{
Program pr = new Program(); // Create instance that will be finalized later
TextWriterTraceListener listener = new TextWriterTraceListener(@"C:\trace.txt");
Trace.Listeners.Clear(); // Remove default trace listener
Trace.Listeners.Add(listener);
Trace.WriteLine("First Trace"); // Generate some trace messages
Trace.WriteLine("Perhaps last Trace.");
}
}
This small program generates a 0 byte trace log file on my hard disk. Why? Because the StreamWriter is not flushed during application exit. You can set the Trace.AutoFlush property to true for all trace listeners, but you will lose about 30-50% throughput by doing this. Tracing should not alter application timing, because if your (e.g. multithreading) timing dependent error goes away when you attach a debugger or enable tracing, you are left with the last option of inserting printf statements inside your code and trying again. For this reason we should strive for maximal speed and reliable delivery of our trace messages at the same time, to make this worst case scenario happen less often.
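For reference, enabling auto-flush is a one liner; a small sketch (the App.config attribute in the comment is the declarative equivalent):
Trace.AutoFlush = true; // every Trace.Write/WriteLine now flushes all listeners

// Declarative equivalent in App.config:
// <system.diagnostics>
//   <trace autoflush="true" />
// </system.diagnostics>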
Ok, if I want performance then I will call Trace.Flush from time to time to ensure that my traces are written. During application exit I have to place the Trace.Close (Trace.Flush produces the same problem) at a strategic place so that my last traces are written.
class Program
{
~Program()
{
Trace.Close(); // ensure that all traces are flushed (Trace.Flush has the same effect that is shown below)
}
static void Main(string[] args)
{
Program pr = new Program(); // Create instance that will be finalized later
TextWriterTraceListener listener = new TextWriterTraceListener(@"C:\trace.txt");
Trace.Listeners.Add(listener);
Trace.WriteLine("First Trace"); // Generate some trace messages
Trace.WriteLine("Perhaps last Trace.");
}
}
Ok, that looks better. We now ensure that we flush the underlying StreamWriter in our finalizer. But wait: finalizers are not always called during application shutdown. We can remedy this situation by deriving from CriticalFinalizerObject. Due to our efforts to make it more reliable we are greeted with the following output:
Unhandled Exception: System.ObjectDisposedException: Cannot access a closed file.
at System.IO.__Error.FileNotOpen()
at System.IO.FileStream.Write(Byte[] array, Int32 offset, Int32 count)
at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
at System.IO.StreamWriter.Dispose(Boolean disposing)
at System.IO.StreamWriter.Close()
at System.Diagnostics.TextWriterTraceListener.Close()
at System.Diagnostics.TraceInternal.Close()
at System.Diagnostics.Trace.Close()
at TraceTest.Program.Finalize()
Hey, who invited this exception to our party? To better understand what's going on, we have to understand what happens under the covers of the static Trace facade.
From the functional point of view our simple file trace is exactly the same as the following code snippet:
static void Main(string[] args)
{
FileStream file = new FileStream(@"C:\Trace.txt", FileMode.Create);
StreamWriter writer = new StreamWriter(file);
writer.WriteLine("Our Trace Line");
}
Do you see the problem here? Who closes the StreamWriter to flush its cache? If you run this program the result is a 0 byte Trace.txt with nothing inside it. This is exactly what we got when we configured the TextWriterTraceListener into the static Trace facade. Let's have a look into FileStream and StreamWriter.
StreamWriter Classes
As you can see from the class diagram, a StreamWriter holds a reference to the underlying file stream, which in turn encapsulates the real file handle via a safe handle. What is interesting in this picture is that StreamWriter does NOT implement a finalizer, whereas FileStream and SafeHandle do. This was decided by the CLR team as the working model for streams, to ensure that programmers close the darn StreamWriter before they exit, or they will (consistently) lose data (= 0 byte Trace.txt). If StreamWriter did implement a finalizer, it could work or not work depending on the order in which the finalizers are called. We can now imagine why the ObjectDisposedException was thrown.
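As an aside, the usage model the CLR team had in mind is deterministic disposal; a minimal sketch of the earlier broken snippet, fixed with using blocks:
static void Main(string[] args)
{
    using (FileStream file = new FileStream(@"C:\Trace.txt", FileMode.Create))
    using (StreamWriter writer = new StreamWriter(file))
    {
        writer.WriteLine("Our Trace Line");
    } // writer.Dispose() flushes the buffer and closes the underlying FileStream
}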
Bad Case
Our problem here was that during AppDomain shutdown all finalizers are called in an arbitrary order. In our case we tried to flush our unsaved data while the FileStream was already finalized! If the CLR changes its GC implementation, the following could also happen:
Good Case
We are left in the unfortunate situation that an exception can happen or not, depending on when we call (Trace.)Close. This is certainly a problem if we trace inside our finalizers and try to get the output out before our underlying file stream is finalized.
Static Facade Resource Management
The $10,000 question is whether we can manage resources that are acquired by static facades. If we do nothing about this we lose trace messages. The protocol for static facades should be the following:
1. Acquire (static) Resources.
2. Use static facade
3. During application shutdown release resources as late as possible.
4. After the resource release we ignore future calls to our static facade.
Sounds rather easy. And in truth it is easy to manage static resources. The new resource handling protocol for static facades goes like this:
1. Acquire Resources (FileStream, ...)
2. Create your own static resource holder which does derive from CriticalFinalizerObject
3. Prevent finalization of the acquired resources (GC.SuppressFinalize)
4. Finalize the resources in our static resource holder in the order which is needed.
That's really easy to do. Let's have a look at my brand new StaticResourceManager class:
/// <summary>
/// Ensures deterministic StreamWriter Flush, and File close at application exit for
/// static facades. The cleanup is done after all normal finalizers have been called.
/// </summary>
public class StaticResourceManager : CriticalFinalizerObject
{
List<StreamWriter> writers = new List<StreamWriter>();
public void AddStream(StreamWriter writer)
{
writers.Add(writer);
FileStream fStream = GetIfFileStream(writer.BaseStream);
if (fStream != null)
{
GC.SuppressFinalize(fStream); // prevent GC on FileStream
// prevent file close at application exit before we want to let it happen
GC.SuppressFinalize(fStream.SafeFileHandle);
}
}
static FileStream GetIfFileStream(Stream stream)
{
if (stream is FileStream)
return (FileStream)stream;
else
return null;
}
/// <summary>
/// Deterministic cleanup of StreamWriters
/// 1. StreamWriter Close -> FileStream -> Close -> possible Writes
/// 2. FileHandle Close
/// </summary>
~StaticResourceManager()
{
foreach (StreamWriter writer in writers)
{
FileStream fstream = GetIfFileStream(writer.BaseStream);
SafeFileHandle handle = null;
if (fstream != null)
{
handle = fstream.SafeFileHandle;
}
writer.Close(); // Close StreamWriter first
if (handle != null) // Close file handle now hurray
handle.Close();
}
}
}
Now we can finally write a much more reliable TraceListener which uses it internally. To show you that it really works, you can try out my improved trace sample:
class Program
{
// Our ResourceManager will ensure proper cleanup at application exit
private static StaticResourceManager myResourceManager = new StaticResourceManager();
Program()
{
FileStream file = new FileStream(@"C:\Trace.txt", FileMode.Create);
StreamWriter writer = new StreamWriter(file);
myResourceManager.AddStream(writer); // ensure flush and close at Application exit
// Use Tracing as usual
TextWriterTraceListener listener = new TextWriterTraceListener(writer);
Trace.Listeners.Add(listener);
Trace.WriteLine("First Trace"); // Generate some trace messages
Trace.WriteLine("Perhaps last Trace.");
}
~Program()
{
Trace.WriteLine("Hello last Trace in finalizer.");
}
static void Main(string[] args)
{
Program p = new Program();
}
}
Finally we will see our beloved complete set of trace messages in our Trace.txt
Trace.txt
First Trace
Perhaps last Trace.
Hello last Trace in finalizer.
Please note that this simple fix is still not the complete story since our listener should survive a Close call of the static Trace facade after all.
Conclusions
I have shown that some things do not work as expected by most developers when working with the static Trace facade and true resource based trace listeners. A Close method in a static facade is a true crime which delegates the responsibility back to the users of the .NET Trace facility. But since tracing can and should be used by virtually every .NET component that is written, we cannot ensure anymore that another component has not called Trace.Close for any reason. Resource management should be handled by the facade itself properly. In fact it can be done quite easily, and I have no idea why MS did not implement a solution which ensures that all finalizers can use tracing with no risk of losing messages. The trick is that all critical finalizers are called by the GC after all normal finalizers have been visited. Finally, I do not understand why I am able to programmatically remove a trace listener that has been added to the global Listeners collection by a perhaps totally different component which relies on it. I should only be able to remove elements that I have previously added myself (this could be done by a token that is returned by the add operation and must be supplied to the remove operation). That is all for the moment; I hope you enjoyed this little article and have gained a better understanding of why you see a 0 byte file when using the .NET Trace right away.
posted on Monday, June 12, 2006 9:11 PM
Feedback
# re: Why .NET Tracing is not reliable 10/4/2006 6:57 PM Marie
What I wonder is why everyone didn't immediately say "Doh! I am losing messages. This Trace thing isn't very useful." For example, if the program throws an exception and crashes and flush isn't called - so the last messages, pertaining to the crash are not received. None of my test examples worked properly with Trace, and given how long the functionality (or lack of it) has been around, why are there not more comments on this issue? Anyway, good article/blog. We found it useful.
marie (at) siliconcoach (dot) com
# re: Why .NET Tracing is not reliable 10/4/2006 7:09 PM Alois Kraus
Hi Marie,
good question. I think the answer is that most people use Visual Studio for debugging and only few actually add a thorough set of traces into their apps. Another thing is that the default trace listener writes to OutputDebugString, which does (mostly) not cause message loss. You have to enable the FileTraceListener explicitly inside your App.config file or add it programmatically to the Trace listener collection, which obviously very few people have done so far.
Yours,
Alois Kraus
# re: Why .NET Tracing is not reliable 10/22/2006 10:41 PM Brian
In defense of the CLR team, I believe they intended everyone to be using the Dispose-model and to be employing the 'using' statement within their Application entry point -- static void Main.
This does mean that the classes would have to implement IDisposable, but that's not hard at all. Since the Dispose method can be called multiple times it doesn't suffer from the race-conditions of the ~finalize code.
So, if Program implemented IDisposable, one could place Trace/Debug.Close() within it. Unexpected crashes of the system with no exception firing are an extreme case where I don't believe tracing would help you anyway, but if it was an exception that crashed your program it should be handled -- and your Trace closed.
From what I've read here, the problem seems to be with Finalizers, not necessarily .NET.
# re: Why .NET Tracing is not reliable 10/22/2006 11:35 PM Alois Kraus
Hi Brian,
the dispose/close call in Main only works if you have only one thread in your system.
The finally block of your main is never called with this little program:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
namespace ExceptionInMultithreadedApp
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("In Console Main");
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
try
{
new Thread( delegate()
{
Console.WriteLine("Multhreading Start");
throw new InvalidCastException("From nowhere to nowhere");
}).Start();
}
finally
{
Thread.Sleep(5000);
Console.WriteLine("We should see the message here?");
}
}
static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
Console.WriteLine("Unhandled exception did happen in " + e.ExceptionObject);
}
}
}
You will get the following output:
In Console Main
Multhreading Start
Unhandled exception did happen in System.InvalidCastException: From nowhere to nowhere
at ExceptionInMultithreadedApp.Program.<Main>b__0() in C:\Source\ExceptionInMultithreadedApp\ExceptionInMultithreaded
App\Program.cs:line 20
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
We should see the message here?
Unhandled Exception: System.InvalidCastException: From nowhere to nowhere
at ExceptionInMultithreadedApp.Program.<Main>b__0() in C:\Source\ExceptionInMultithreadedApp\ExceptionInMultithreaded
App\Program.cs:line 20
at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
Your program can bail out at any point without going back to Main by simply calling Environment.Exit(), Environment.FailFast, or throwing an exception in another thread. There is no simple "insert final Trace.Close here" place that I know of.
What is especially annoying is that for self created AppDomains there is no way to get notified of unhandled exceptions at all!
Yours,
Alois Kraus
# re: Why .NET Tracing is not reliable 4/14/2007 7:41 PM Sergei
I agree with Brian. IDisposable is a way to handle an issue and unexpected (unhandled) crashes are not a scenario to worry about.
I just wrote a windows service and I use .NET tracing. The goal for the service is to recover from even unexpected (recoverable) exceptions, thus every exception is handled and the exceptions that are not handled (out of memory) would be reported by windows.
# re: Why .NET Tracing is not reliable 9/20/2007 8:29 PM paatrice
Hello,
I had a similar problem when I try to write while between client and server.
I use this code
------------------------------
System.Net.WebClient Client = new System.Net.WebClient();
Client.UploadFile(baseLocation + fn, HIF.Value);
-------------------------------
Certain files work and others can't be uploaded. (message: System.ObjectDisposedException: Cannot access a closed file.)
I tried to understand your code but it's too hard for me. Could you help me modify my code to allow uploading all files?
Thank you for your help.
Patrice
# re: Why .NET Tracing is not reliable 11/12/2007 4:09 AM Ayman Wassif
I fixed my problem by suppressing the Finalize on the base stream after I opened the file
Here is the Line I added after opening the file
GC.SuppressFinalize(myLogStream.BaseStream);
Then at the class destructor I could Flush and Close the file
# re: Why .NET Tracing is not reliable 7/3/2008 1:14 AM Advait Supnekar
Thanks for the step by step explanation
# re: Why .NET Tracing is not reliable 5/2/2011 9:15 AM David Boarman
nice...I needed a reason to rewrite my Logging utility.
# re: Why .NET Tracing is not reliable 8/16/2012 7:14 PM Jeroen
I'd rather dispose the classes I constructed than put them in a 'dispose' bin.
class Program
{
static TextWriterTraceListener _listener;
static void Main(string[] args)
{
Program pr = new Program(); // Create instance that will be finalized later
_listener = new TextWriterTraceListener(@"C:\trace.txt");
Trace.Listeners.Add(_listener);
Trace.WriteLine("First Trace"); // Generate some trace messages
Trace.WriteLine("Perhaps last Trace.");
}
~Program()
{
Trace.Flush(); // ensure that all traces are flushed
Trace.Listeners.Remove(_listener);
_listener.Dispose();
}
}
# re: Why .NET Tracing is not reliable 8/16/2012 7:18 PM Jeroen
This is even better i guess.
class Program
{
static void Main(string[] args)
{
Program pr = new Program(); // Create instance that will be finalized later
using(TextWriterTraceListener listener = new TextWriterTraceListener(@"C:\trace.txt"))
{
Trace.Listeners.Add(listener);
Trace.WriteLine("First Trace"); // Generate some trace messages
Trace.WriteLine("Perhaps last Trace.");
Run(); // Do something
Trace.Listeners.Remove(listener);
}
}
}
Shadow
Syntax
SHOW SHADOW shadowRule | RULES [FROM schemaName]
SHOW SHADOW TABLE RULES [FROM schemaName]
SHOW SHADOW ALGORITHMS [FROM schemaName]
shadowRule:
RULE ruleName
• Support querying all shadow rules and a specified rule
• Support querying all shadow table rules
• Support querying all shadow algorithms
Return Value Description
Shadow Rule
| Column       | Description     |
|--------------|-----------------|
| rule_name    | Rule name       |
| source_name  | Source database |
| shadow_name  | Shadow database |
| shadow_table | Shadow table    |
Shadow Table Rule
| Column                | Description           |
|-----------------------|-----------------------|
| shadow_table          | Shadow table          |
| shadow_algorithm_name | Shadow algorithm name |
Shadow Algorithms
| Column                | Description                 |
|-----------------------|-----------------------------|
| shadow_algorithm_name | Shadow algorithm name       |
| type                  | Shadow algorithm type       |
| props                 | Shadow algorithm properties |
| is_default            | Default                     |
Shadow Rule status
| Column | Description |
|--------|-------------|
| status | Enable      |
Example
SHOW SHADOW RULES
mysql> show shadow rules;
+--------------------+-------------+-------------+--------------+
| rule_name | source_name | shadow_name | shadow_table |
+--------------------+-------------+-------------+--------------+
| shadow_rule_1 | ds_1 | ds_shadow_1 | t_order |
| shadow_rule_2 | ds_2 | ds_shadow_2 | t_order_item |
+--------------------+-------------+-------------+--------------+
2 rows in set (0.02 sec)
SHOW SHADOW RULE ruleName
mysql> show shadow rule shadow_rule_1;
+------------------+-------------+-------------+--------------+
| rule_name | source_name | shadow_name | shadow_table |
+------------------+-------------+-------------+--------------+
| shadow_rule_1 | ds_1 | ds_shadow_1 | t_order |
+------------------+-------------+-------------+--------------+
1 rows in set (0.01 sec)
SHOW SHADOW TABLE RULES
mysql> show shadow table rules;
+--------------+--------------------------------------------------------------------------------+
| shadow_table | shadow_algorithm_name |
+--------------+--------------------------------------------------------------------------------+
| t_order_1 | user_id_match_algorithm,simple_hint_algorithm_1 |
+--------------+--------------------------------------------------------------------------------+
1 rows in set (0.01 sec)
SHOW SHADOW ALGORITHMS
mysql> show shadow algorithms;
+-------------------------+--------------------+-------------------------------------------+----------------+
| shadow_algorithm_name | type | props | is_default |
+-------------------------+--------------------+-------------------------------------------+----------------+
| user_id_match_algorithm | REGEX_MATCH | operation=insert,column=user_id,regex=[1] | false |
| simple_hint_algorithm_1 | SIMPLE_HINT | shadow=true,foo=bar | false |
+-------------------------+--------------------+-------------------------------------------+----------------+
2 rows in set (0.01 sec)
So I need to find all files which have d, o, and g in the filename. The letters do not need to be side by side (aadaaoaagaa would be a correct filename, aaoaadaag would not be a correct filename). It does not matter if the letters are uppercase or lowercase (aadaaOaag is a correct filename, as is aaDoGa).
How would I use filename substitution on the terminal command line to list all of the files like this? Preferably without using loops or anything too advanced. All I have been introduced to so far is filename substitution characters (*,?,[])
Just ls *[dD]*[oO]*[gG]* is not good enough? – Nykakin Sep 28 '13 at 4:15
4 Answers
To list them when you are in the directory containing those files, you can use, as Nykakin said in a comment, the following command:
ls *[dD]*[oO]*[gG]*
or, if you want each file on its own line, you can use:
printf '%s\n' *[dD]*[oO]*[gG]*
or, using grep:
ls | grep '[dD].*[oO].*[gG]'
ls [dD]*[oO]*[gG] The problem with this is, I think it makes it possible for the letters to get mixed up since there is the * between everything, meaning those could also referance a d, D, o, O, g or G. They have to be in order, it cannot be like oDog because the o has to come after the d – John Stacen Sep 28 '13 at 11:03
@JohnStacen Well, you didn't said nothing about the cases where an "o" or "O" is before these letters in your question. You can eliminate these occurrences using grep -v. For example: ls *[dD]*[oO]*[gG]* | grep -v [oO].*[dD] – Radu Rădeanu Sep 28 '13 at 14:19
If I understand your problem correctly, you want all those filenames with a d, an o and a g, in any case (upper or lower). Radu Rădeanu gave you an answer that works for the letters occurring in the order d-o-g. For example aaGaaaoaaDaa will not be found. You could alter his solution using the six possible permutations as, e.g.,
ls *[dD]*[oO]*[gG]* *[oO]*[gG]*[dD]* *[gG]*[dD]*[oO]* *[oO]*[dD]*[gG]* *[dD]*[gG]*[oO]* *[gG]*[oO]*[dD]*
(setting shopt -s nullglob before is a good idea).
This becomes a bit messy, so you might want to also use shopt -s nocaseglob so that the match is performed without regard to the case of alphabetic characters (quoted from the Bash Reference Manual) as:
$ shopt -s nullglob nocaseglob
$ ls *d*o*g* *o*g*d* *g*d*o* *o*d*g* *d*g*o* *g*o*d*
Another option is to use the find command as, e.g.,
$ find . -maxdepth 1 -iname '*d*' -iname '*o*' -iname '*g*' -type f
The -maxdepth 1 option is to limit to the current directory and not its subdirectories and -type f to limit the search to regular files. Change these options to fit your needs. -iname is for a case-insensitive search on the name (using glob patterns). Don't forget the quotes!
One advantage of find is that you'll be able to -exec stuff if you need to perform any operation on these files, e.g., renaming them, or appending stuff to them, etc. Another advantage is that if you have a huge number of such files, the globbing will take ages, and might even overflow the maximum number of arguments allowed. find will be alright, whatever the number of files is.
Hope this helps!
Edit. It seems I missed the "in this order" part in your title. Hahaha. Well, then the find command would simply be:
$ find . -maxdepth 1 -iname '*d*o*g*' -type f
and the ls/printf solution would be:
$ shopt -s nullglob nocaseglob
$ ls *d*o*g*
$ printf '%s\n' *d*o*g*
I would use Perl for this:
perl -le 'print for grep {/\A[a-cefh-np-z]*d[a-fh-z]*o[a-z]*g[a-z.]*\Z/i} <*>'
Explanation
• <*> is magical: it returns the result of globbing the pattern * from the current directory.
• grep{} only returns those items of the glob matching the criteria inside it.
• print outputs the result of grep one item per line due to the -l option.
• The single criterion used for grep is that the file name consist exactly of:
• any number of letters in the set [a-cefh-np-z] (the alphabet minus o and g)
• the letter d
• any number of letters in the set [a-fh-z] (the alphabet minus g)
• the letter o
• any number of any letter whatsoever
• the letter g.
• any number of letters or a period (for extensions).
• The i modifier is applied to the regex criterion to ensure the match is case-insensitive as required in the question.
I know this isn't exactly what you asked for, but I believe Bash wildcard patterns are too simple to handle this task.
Another way to look at the problem would be to keep only filenames for which once eliminated every character outside of D,O or G, the result would be exactly DOG, case insensitive.
This could be done in shell with search-replace-test instead of wildcards:
#!/bin/bash
shopt -s nocasematch
shopt -s nullglob
for i in *
do
if [[ ${i//[^dogDOG]/} == dog ]]; then
echo $i;
fi;
done
for i in `ls` is just plain wrong and broken. Use for i in * instead. – gniourf_gniourf Sep 29 '13 at 8:26
@gniourf_gniourf: I've changed to for i in * for spaces in filenames but one could argue that since * expands as * when the dir is empty, it's also incorrect strictly speaking to get the directory contents. It doesn't matter much here, though. – Daniel Vérité Sep 29 '13 at 11:53
To avoid that, use shopt -s nullglob or, to have an explicit error use shopt -s failglob. Globbings should always be used with either of these options. Since you're already shopting, you might as well replace the shopt line with shopt -s nocasematch nullglob. – gniourf_gniourf Sep 29 '13 at 12:46
Also, avoid using backticks! use $(...) instead. Moreover, to remove all the letters dogDOG from a variable, you can use parameter expansions: ${i//[^dogDOG]/}. It's much nicer and doesn't require the external tool tr. I'd hence write if [[ ${i//[^dogDOG]/} == dog ]]; then – gniourf_gniourf Sep 29 '13 at 12:48
Publication number: US 7760663 B2
Publication type: Grant
Application number: US 10/834,607
Publication date: Jul 20, 2010
Filing date: Apr 28, 2004
Priority date: Apr 19, 2004
Fee status: Paid
Also published as: DE602005009991D1, EP1589692A1, EP1589692B1, US20050232239
Inventors: Slawomir K. Ilnicki, Lance A. Tatman
Original Assignee: JDS Uniphase Corporation
Packet tracing using dynamic packet filters
US 7760663 B2
Abstract
Packet tracing in switched packet networks. Tracing of live packet data in a network is performed by discovering the measurement path, setting up dynamic filters along the path to collect traffic information, and collecting data as detected by the dynamic filters. Collected data is sent to a measuring entity. Filter setup may be repeated to capture data as routing changes.
Images(3)
Previous page
Next page
Claims (18)
1. A method of tracing packets from a source to a destination in a switched network comprising:
establishing starting and ending points for the tracing operation,
sending filter configuration packets from the starting to ending points to establish dynamic packet filters in tracing entities along the route to be traced,
detecting traffic at the tracing entities with dynamic packet filters and collecting trace information, and
sending the trace information to a monitoring station.
2. The method of claim 1 where the step of sending filter configuration packets is repeated.
3. The method of claim 1 where the filter configuration packet mimics the traffic to be traced.
4. The method of claim 1 where the dynamic packet filters have a specified lifetime after which they expire.
5. The method of claim 1 where multiple filters are established for tracing different types of traffic.
6. The method of claim 1 where the step of sending filter configuration packets starts with the step of a probe manager sending a UDP packet to a starting node where the UDP packet is unwrapped and forwarded to the destination mimicking the traffic to be traced.
7. The method of claim 1 where the filter configuration packet is authenticated by the starting point.
8. The method of claim 1 where the filter configuration packet is authenticated by at least one of the tracing entities.
9. The method of claim 1 where tracing entities are grouped into domains with a group ID so that tracing can be conducted only within those domains.
10. The method of claim 1 where tracing entities report failure in instantiating a filter to the initiating entity.
11. The method of claim 1 where tracing entities report success in instantiating a filter to the initiating entity.
12. The method of claim 1 where the last tracing entity stops the forwarding process.
13. The method of claim 12 where the forwarding process is stopped based on the number of hops from the starting point.
14. The method of claim 12 where the forwarding process is stopped based on proximity to the destination.
15. The method of claim 1 where the trace information sent to the monitoring station is encrypted.
16. The method of claim 1 where the trace information is timestamped at the tracing entity.
17. The method of claim 1 where trace information is collected at the tracing entity prior to being sent to the monitoring station.
18. The method of claim 1 where the trace information is aggregated at the tracing entity prior to being sent to the monitoring station.
Description
TECHNICAL FIELD
Embodiments in accordance with the invention relate generally to tracing IP packets through digital networks.
BACKGROUND
Modern digital networks are IP networks, based on packet-switched Internet Protocols. Packets of information travel from a source node connected to the network to a destination node connected to the network. The path these packets take through the myriad of possible routes through the network is chosen by routers, and may change. The path between source and destination may not be the same for each packet, and may not be the same in each direction.
This routing poses a question which is simple to ask, but difficult to answer: what path does a packet take through the network?
Tracing a path of IP packets through the network is generally accomplished by using the well-known traceroute utility. Traceroute attempts to report the route or path (the set of IP addresses of router interfaces) through which a certain type of packet (a UDP packet) travels to reach a particular destination port. Traceroute manipulates the time-to-live (TTL) attribute of the packets in the IP packet header it sends to get such information. The TTL attribute of a packet as used by traceroute is not a timer in the clock or time-of-day sense, but rather a counter which is decremented each time the packet passes through a router. When TTL is decremented to zero, the packet is dropped, and the router returns an ICMP Timer Expired message to the sender, including its own IP address as a source IP address in the IP packet header. So, by beginning with a TTL of 1 and incrementing the TTL until the destination is reached, a path may be “traced.” However, this “traced path” is an aggregate path which represents only a theoretical route, as it is built from a series of UDP packets. The path traced may not represent the actual path taken by packets, as the route may change during the mapping process. Additionally, the path is only traced in a single direction, and there is no guarantee that return traffic takes a reciprocal route. Nevertheless, the traceroute tool gives an approximate path with approximate round trip delays to each hop on a path that in many cases is good enough for network troubleshooting.
The ping utility also provides a round trip delay measurement between source and destination, but does not report on the path itself. Ping uses ICMP echo messages and ICMP echo reply messages. Because it uses ICMP messages, it may not provide an accurate measurement of real traffic round trip delay. ICMP messages may be routed differently than other network traffic, for example using different priorities or different routes. In addition, routers are usually designed to drop ICMP messages when the router becomes congested.
Approaches such as traceroute, ping, and their derivatives rely on special packet types, and provide aggregate data based on special test packets. These two techniques rely on active measurement by inserting special packets into the network. Such special packets may not be routed through the network in the same way as other traffic. Providing reliable information on packet routing involves measuring real traffic. Such information includes information on how long it takes a specific packet to travel from one node to another. As networks may have congestion points which introduce packet jitter, knowledge of congestion points and jitter is very often essential in determining network problems or anomalies.
What is needed is a way to obtain unidirectional IP path information on real network data, including timestamping of intercepted packets.
SUMMARY
In accordance with the invention, automatic packet tracing from a source to a destination takes place in three phases, discovery, dynamic filter setup, and data collection. In the discovery phase, determinations are made about the measurement path start and end. In the second phase, dynamic packet filter setup, packet filters are set up along the path to capture specific traffic. The last phase, data collection, occurs where time-stamped packet header information is captured according to the deployed dynamic packet filters, and the data is delivered to the measuring party. Dynamic filter setup may be repeated to track routing changes.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will best be understood by reference to the following detailed description of embodiments in accordance with the invention when read in conjunction with the accompanying drawings, wherein:
FIG. 1 shows a packet switched network,
FIG. 2 shows a probe configuration packet, and
FIG. 3 shows a filter information packet.
DETAILED DESCRIPTION
The invention relates to packet tracing in packet-based networks. The following description is presented to enable one skilled in the art to make and use the invention, and is provided in the context of a patent application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. Thus, the invention is not intended to be limited to the embodiments shown but is to be accorded the widest scope consistent with the appended claims and with the principles and features described herein.
With reference now to the figures and in particular with reference to FIGS. 1 through 3, representative embodiments of the invention are shown.
FIG. 1 shows network 100 with routers 120, 130, 140, 150, 170, and 180. As shown, multiple paths exist between starting point 110 and exit path 160.
Packet tracing according to the present invention takes place in three phases: discovery, dynamic filter setup, and data collection. Filter setup may be performed from a remote location using proxies built into interface probes. Filter packet setup discovers the path by mimicking actual traffic, and allows traffic data capture even when there are routing changes. Data is collected on real traffic using passive eavesdropping. Collected data is sent to a measuring station, which may be separate from the starting and ending nodes on the network. The collected data may also include timing information such as packet arrival time.
This packet tracing and data collection is organized and initiated by a measuring entity which may not be connected with the start and end points of the path being measured. As an example, such measurements may be performed by a third party on the request of an Autonomous System (AS) operator who wishes to better understand traffic flowing between two points of a network. FIG. 1 shows measuring entity 300 and probe manager 200 outside network 100.
It is assumed that relevant tracing entities along the network are routers or probes capable of accepting and installing configurable filters. Messages may be sent to a tracing entity for example to set up a filter or destroy a filter. A tracing entity may require authentication of the received packet filter configuration message prior to accepting a filter. Instantiated filters typically have a set lifetime, usually referred to as “Time To Live,” or TTL, after which they expire. For the purposes of this application, times ranging from seconds to hours may be used. While in operation, the tracing entities passively monitor traffic, and collect information according to active filter specifications; more than one filter may be active at any given time. Collected information is sent to a measuring entity that collects and analyzes data, specified as the destination in the filter. Such filtering functionality may be built into a router or probe. An example which could provide suitable implementation of configurable filter support could be a GBIC (GigaBit Interface Converter) module including filtering support. GBIC modules are used to interface media such as optical fiber or copper wires to network equipment such as hubs, switches, routers, and the like. Such an implementation is described in the patent application “Assisted Port Monitoring with Distributed Filtering,” Ser. No. 10/407,719, incorporated herein by reference.
According to the present invention, the measuring entity specifies what type of traffic is to be monitored, and for what period of time. Specification of traffic in this context means information including but not limited to: application destination port, protocol, and source and destination IP addresses. Note that IP addresses may be IP prefixes representing sets or subnets of sources and/or destinations as defined in well known Classless Inter-Domain Routing (CIDR) standards.
In the first phase, discovery, a determination of the source and destination points on the network is made. Measuring entity 300 passes packet filter information such as source and destination IP addresses, destination port, and protocol, for example, to probe manager 200. Probe manager 200 may be part of the network to be probed, or may be external to it. Probe manager 200 may also be part of measuring entity 300.
The probe manager, on verifying access to the required measurement infrastructure, suggests to the measuring entity possible starting points for traffic matching the filter by analyzing routing information. For example, a specific starting interface (probe or router) may be determined by analyzing BGP or OSPF tables and router configuration information (which port of which router is configured with which peer).
The probe manager may also maintain a mapping of probe IP addresses to router ports so that the starting point to use for testing may be more easily determined. Once this determination is made, and if there is more than one starting point, the measuring entity may narrow the choices by checking for the presence of the desired traffic flowing through the identified potential starting points. This may not be necessary if the entity already collects such information using other means such as sniffers for traffic flow analysis. If the measuring entity decides which interfaces to use prior to the beginning of the measurement, then the entity will ask the probe manager to set a packet filter at the start point(s) and begin collecting data.
Once the starting point is determined, the measuring entity may wish to define the end of the measuring path. In the case of tracking packets through his own AS, this may not be needed because tracing will terminate at the egress of the network. If the requirement is to trace packets within a certain perimeter, such as the core, or over a backbone network, then an end point will need to be determined. Note that the starting and ending points of the tracing operation may be different than the source and destination of the packet traffic being traced.
For the following discussion, one starting point will be assumed, although the invention is equally applicable to multiple starting points. As shown in FIG. 1, node 110 is selected as the starting point.
According to the present invention, once the starting point is determined, the probe manager creates a filter setup packet which is sent to the starting point proxy to set up filters at the starting probe tracing entity. The starting point tracing entity then forwards the filter setup along to the destination.
As shown in FIG. 1, this packet is sent from probe manager 200 through router 180 to router 120 to starting node 110.
Note that this filter setup packet is “wrapped” to resemble the same type of traffic as the traffic to be traced. This insures that the setup packets will follow a route similar to that of the actual traffic. This is shown in the example probe configuration packet of FIG. 2 having IP header 210, UDP header 220, and filter payload 230. FIG. 3 shows more detail of payload 230 as filter information packet 310. The fields shown in FIGS. 2 and 3 are exemplary in nature and will vary depending on the actual protocols and filter specifications in use.
At each point along the route where the filter setup packet is recognized by a suitable tracing entity, the filter setup information is extracted. Since filter resources are limited, the tracing entity may not be able to accept a filter or set of filters. If the tracing entity can accept the filter, the filter is installed. In either case, an optional status message signifying success or failure of filter instantiation may be sent to the probe manager. Then the filter setup packet is passed along toward the destination.
As shown in FIG. 1, the filter setup packet from probe manager 200 is unwrapped at node 110, and from there forwarded along the network to destination path 160. As an example, assume the packet travels from router 120 through routers 130, 170, and 150 to reach destination path 160.
Note that because any active trace or probe configuration packet may disrupt the destination application, the source IP address as well as the source port of such packets should be used with care. It is recommended that the source IP address of any active trace or probe configuration packet should have the source address as the initial starting point. The source port must be a well known port assigned for this purpose. As an example, it could be port 7, which is assigned by the Internet Assigned Numbers Authority (IANA) for the well known UDP/TCP echo facility.
The second phase, dynamic filter setup, begins when the probe manager has the necessary information regarding the beginning and the end of the measured path as well as packet filter information. The probe manager begins the second phase by wrapping a special filter packet and sending it to the starting probe proxy. The packet sent by probe manager 200 to starting point proxy 110 is a UDP packet with the destination address pointing to the proxy itself. The source port should be a well-known port indicating a configuration probe packet, or some other unique identifier such as a specific identifier (often referred to as a magic number or cookie) as a part of the packet payload. It should be noted that the filter packet is the same as the traced packets; however, its payload contains all necessary packet filter attributes. In other words, the filter packet will look like the traffic to be traced, with a payload used for the initial filter packet setup. The packet is unwrapped by the tracing entity and then sent by the tracing entity down the path to the destination. All the tracing entities on the path, for example routers 120, 130, 170, and 150, will either set up their own filters using the information extracted from the intercepted filter packet, or refuse to set up the filter, but in any case they will forward this configuration packet along the path.
Because the packet sent by probe manager 200 or proxy 110 is a single packet which will not be retransmitted based on any form of feedback (such as a NAK), since there is no entity responsible for the retransmission of the packet, the packet may be lost; it may therefore be desirable to send multiple copies of the configuration packet to the destination to guarantee that all filters along the path are set up.
Due to the possibility of dynamic routing changes, filter configuration packets should be resent by probe manager 200 or proxy 110 to cover possible routing path changes. The frequency will depend on the nature of the routing changes. If, for example, filter specified data is missing from a specific tracing entity but present from other tracing entity, this may indicate a path change. Sending a filter packet through tracing entities that already have the same configuration will only refresh filter attributes, e.g. filter TTL, and more importantly will establish a filter on tracing entities which were not previously configured, and by doing so provide coverage for the new path. Eventually filters on the old path will expire. The probe manager may have the option of directly communicating with any tracing entity under its control to remove a specific filter.
As an example, if the route shown in FIG. 1 shifts from router 130 through router 170 to router 150 to passing from router 130 through router 140 to router 150, router 140 has not been previously configured with the desired filters. When probe manager 200 retransmits the filter configuration packet, the unwrapped packet reaches router 120, which already has a filter established. The effect of the newly transmitted filter configuration packet is to reset the Time To Live (TTL) value for router 120. The filter configuration packet continues through router 130, resetting its TTL value. The filter configuration packet then reaches router 140, where a new filter is established. Propagating further to router 160, its TTL is refreshed. As the newly transmitted filter configuration packet did not reach router 170 previously on the packet path, the filter established on router 170 will expire before the filters on routers 120, 130, 140, and 150 expire.
It may be desirable to selectively trace traffic through the network, for example by specifying domains and providing group identification. As an example, referring once again to FIG. 1, routers 130 and 140, and any tracing entities they contain could be grouped and identified as a “north” path, with routers 170 and 180 and any tracing entities they contain grouped and identified as a “south” path. Filters at tracing entities on router 160 could then distinguish between the groups.
As the filter configuration packet mimics real traffic, it may be desirable to prevent this packet from reaching the destination system. Limiting the propagation of the filter configuration packet may be implemented in many ways. As examples, forwarding of the filter configuration packet may be limited to a specific number of hops. The filter configuration packet may instruct a particular tracing entity to stop forwarding. The filter configuration packet may be stopped when a tracing entity recognizes proximity to the destination. As shown in FIG. 1, limiting forwarding to 4 hops, specifying that the tracing entity at router 150 stop forwarding, or having the tracing entity stop forwarding if it would forward to path 160 will all achieve the desired result.
The third phase, data collection, begins once filters are set up along the route. The tracing entities containing active filters passively monitor traffic and start collecting data (usually in the form of packet header fingerprint plus timestamp) on real traffic described by the filter. Collected data may be transmitted to the measuring station as it is collected, or buffered and sent on the occurrence of an event, such as a periodic time-out, or buffer full event. If sufficient processing power is available at the tracing entity, aggregation and/or processing of trace information may be performed, resulting in the transmission of aggregated information to the monitoring entity. Data from the tracing entity may be encrypted prior to transmission. The data is analyzed at the measurement station. As shown in FIG. 1, collected data for example from routers 120 and 130 travels through router 170 to measuring entity 300.
The foregoing detailed description of the present invention is provided for the purpose of illustration and is not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Accordingly the scope of the present invention is defined by the appended claims.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US6363056 | Jul 15, 1998 | Mar 26, 2002 | International Business Machines Corporation | Low overhead continuous monitoring of network performance
US6442141 * | Aug 31, 1998 | Aug 27, 2002 | 3Com Corporation | Network delay and loss simulator
US20020143905 * | Mar 30, 2001 | Oct 3, 2002 | Priya Govindarajan | Method and apparatus for discovering network topology
US20030039212 | Oct 17, 2001 | Feb 27, 2003 | Lloyd Michael A. | Method and apparatus for the assessment and optimization of network traffic
US20030128692 * | Jan 4, 2002 | Jul 10, 2003 | Mitsumori Derek Hisami | Voice over internet protocol (VoIP) network performance monitor
US20030214913 * | May 17, 2002 | Nov 20, 2003 | Chao Kan | Passive network monitoring system
US20040064725 * | Sep 18, 2002 | Apr 1, 2004 | Microsoft Corporation | Method and system for detecting a communication problem in a computer network
US20040120269 * | Dec 11, 2003 | Jun 24, 2004 | Satoshi Sumino | Switching apparatus
US20050108760 * | Feb 16, 2004 | May 19, 2005 | Sony Corporation | Universal network interface for home network
US20060098586 * | Jul 6, 2005 | May 11, 2006 | Farrell Craig A | Method and apparatus for application route discovery
EP1401147A1 | Sep 16, 2002 | Mar 24, 2004 | Agilent Technologies, Inc. | Measuring network operational parameters as experienced by network operational traffic
Non-Patent Citations
European Search Report dated Jun. 23, 2005, 2 pages.
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US20110078237 * | Sep 3, 2010 | Mar 31, 2011 | Oki Electric Industry Co., Ltd. | Server, network device, client, and network system
US20120265870 * | Apr 15, 2011 | Oct 18, 2012 | Microsoft Corporation | Data taps on a server-managed data integration process
EP2802103A1 | May 6, 2014 | Nov 12, 2014 | JDS Uniphase Corporation | Method and system for measuring packet loss
Classifications
U.S. Classification: 370/254, 370/252, 709/224, 370/360
International Classification: H04L12/28, H04L12/56, H04L12/26, H04L12/24
Cooperative Classification: H04L43/10
European Classification: H04L43/10
Legal Events
Date | Code | Event
Jul 21, 2004 | AS | Assignment
May 25, 2010 | AS | Assignment
    Owner name: JDS UNIPHASE CORPORATION, CALIFORNIA
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: AGILENT TECHNOLOGIES, INC.; REEL/FRAME: 024433/0138
    Effective date: 20100430
Jan 20, 2014 | FPAY | Fee payment
    Year of fee payment: 4
Aug 28, 2015 | AS | Assignment
    Owner name: VIAVI SOLUTIONS INC., CALIFORNIA
    Free format text: CHANGE OF NAME; ASSIGNOR: JDS UNIPHASE CORPORATION; REEL/FRAME: 036504/0327
    Effective date: 20150731
How To Validate Ruby Objects With Active Model Validations
Why Do We Need Validations First?
With the help of validations you can ensure that only valid data is saved into your database. For example, when your application demands a unique email address for each user, you need a way to collect only valid data. You can achieve this with the help of model level validations in Rails. Rails provides several built-in helpers which cover most of the validation needs, and Rails also allows you to create your own custom validation methods.
Types of Validations
There are many ways to validate your data before saving it into your database; here I will list the four main ways to validate data in Rails.
• Database Constraints – We can create validations at the database level, but they are difficult to manage because every modification of a validation requires running a migration again, which makes maintenance harder.
• Client-side Validations – Client-side validations are performed by the browser and are written in JavaScript/jQuery. However, we cannot rely on them alone, because JavaScript might be turned off in the user's browser. They are still very useful for giving the user immediate feedback about what exactly went wrong. It is highly recommended to use both client- and server-side validations.
• Controller-level Validations – We can add validations in Rails controllers, but they are difficult to test and maintain. Rails recommends keeping controllers light, so validation code should not be added to them.
• Model-level Validations – Rails recommends keeping your validations at the model level; this makes them easy to test and maintain. In the coming sections you will see how easy it is to add and test validations in Rails.
Validation Helpers in Rails
Presence:
This validator will not allow the record to be saved unless the specified attributes are present.
Example 1:
Model:
class User < ApplicationRecord
validates :name, :email, :age, presence: true
end
The following example throws errors for an empty name, email, and age:
user = User.new
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: nil>
user.save
begin transaction
rollback transaction
user.errors.messages
=> {:name=>["can't be blank"], :email=>["can't be blank"], :age=>["can't be blank"]}
Related: Authenticating Rails Web Services With JWT
Absence:
This is the opposite of the presence helper: it ensures that the specified attributes are empty.
Example:
class User < ApplicationRecord
validates :email, :age, absence: true
end
The following example throws errors because email and age are present:
user = User.new(name: 'Abcd', email: '[email protected]', age: '22')
=> #<User id: nil, name: "Abcd", email: "[email protected]", created_at: nil, updated_at: nil, age: "22">
user.save
begin transaction
rollback transaction
user.errors.messages
=> {:email=>["must be blank"], :age=>["must be blank"]}
Uniqueness:
This helper checks that the attribute value is unique before the record is saved into the database.
Example:
class User < ApplicationRecord
validates :email, uniqueness: true
end
In the first example I create a user with the email [email protected]; in the second example, when I try to create another user with the same email, it throws an error.
1) user = User.create(name: 'TestUser', email: '[email protected]', age: '23')
begin transaction
(204.5ms) commit transaction
=> #<User id: 5, name: "TestUser", email: "[email protected]", created_at: "2018-11-21 02:16:35", updated_at: "2018-11-21 02:16:35", age: "23">
2) user = User.create(name: 'TestUser2', email: '[email protected]', age: '24')
begin transaction
rollback transaction
=> #<User id: nil, name: "TestUser2", email: "[email protected]", created_at: nil, updated_at: nil, age: "24">
user.errors.messages
=> {:email=>["has already been taken"]}
Numericality:
This helper saves the record into the database only if the attribute value is numeric. You can also pass constraints like :greater_than, :greater_than_or_equal_to, :equal_to, :less_than, :less_than_or_equal_to, :odd, :even, and :other_than; all of these are self-explanatory.
Example:
class User < ApplicationRecord
validates :age, numericality: true
end
In this example, I try to set the user's age to a string where a number is expected, so it throws an error:
user = User.create(age: 'sdsdsd')
(0.2ms) begin transaction
(0.2ms) rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "sdsdsd">
user.errors.messages
=> {:age=>["is not a number"]}
The following examples demonstrate how to use the greater_than, less_than, and other_than options with the numericality helper.
class User < ApplicationRecord
validates :age, numericality: { greater_than: 20, less_than: 90, other_than: 28 }
end
user= User.create(age: 12)
begin transaction
rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "12">
user.errors.messages
=> {:age=>["must be greater than 20"]}
user= User.create(age: 91)
begin transaction
rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "91">
user.errors.messages
=> {:age=>["must be less than 90"]}
user= User.create(age: 28)
begin transaction
rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "28">
user.errors.messages
=> {:age=>["must be other than 28"]}
Length:
This helper validates the length of the attribute value. You can pass constraints like minimum, maximum, in, and is: the in constraint specifies a length range (inclusive of the min and max values), and the is constraint specifies an exact length.
Example:
The following example demonstrates how to use the length helper with the minimum, maximum, and is options.
class User < ApplicationRecord
validates :name, length: { minimum: 2, maximum: 10 }
validates :age, length: { is: 2 }
end
user=User.create(name: 'A', age: 222)
begin transaction
rollback transaction
user.errors.messages
=> {:name=>["is too short (minimum is 2 characters)"], :age=>["is the wrong length (should be 2 characters)"]}
user=User.create(name: 'Afdfdfdfdfdfdfdfdfdafe grst', age: 222)
begin transaction
rollback transaction
user.errors.messages
=> {:name=>["is too long (maximum is 10 characters)"], :age=>["is the wrong length (should be 2 characters)"]}
The following example demonstrates the length helper with the in constraint.
class User < ApplicationRecord
validates :name, length: { in: 2..10 }
end
user=User.create(name: 'A')
begin transaction
rollback transaction
user.errors.messages
=> {:name=>["is too short (minimum is 2 characters)"]}
user=User.create(name: 'Axfdfdsf sf sa fdasfdasfds fs dsfdsf')
begin transaction
rollback transaction
user.errors.messages
=> {:name=>["is too long (maximum is 10 characters)"]}
Best To Read: How To Build Rock Solid Ruby On Rails App With BDD
Inclusion:
This helper takes a set of values and ensures that the attribute value matches one of the predefined values in that set.
Example:
class User < ApplicationRecord
validates :age, inclusion: { in: %w(10 20 30) }
end
user = User.create(age: 90)
begin transaction
rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "90">
user.errors
=> #<ActiveModel::Errors:0x00000002f54d88 @base=#<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "90">, @messages={:age=>["is not included in the list"]}, @details={:age=>[{:error=>:inclusion, :value=>"90"}]}>
Exclusion:
This is the exact opposite of the inclusion helper: it takes a set of predefined values and ensures that the attribute value does not match any value in the list.
Example:
class User < ApplicationRecord
validates :age, exclusion: { in: %w(10 20 30) }
end
user = User.create(age: 10)
begin transaction
rollback transaction
=> #<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "10">
> user.errors
=> #<ActiveModel::Errors:0x00000001db7648 @base=#<User id: nil, name: nil, email: nil, created_at: nil, updated_at: nil, age: "10">, @messages={:age=>["is reserved"]}, @details={:age=>[{:error=>:exclusion, :value=>"10"}]}>
Format:
This helper takes a regular expression as input and ensures that the attribute value matches it.
Example:
class User < ApplicationRecord
validates :name, format: { with: /\A[a-zA-Z]+\z/, message: "only characters are allowed" }
end
user = User.create(name: '123')
begin transaction
rollback transaction
> user.errors
=> #<ActiveModel::Errors:0x00000004ccad90 @base=#<User id: nil, name: "123", email: nil, created_at: nil, updated_at: nil, age: nil>, @messages={:name=>["only characters are allowed"]}, @details={:name=>[{:error=>:invalid, :value=>"123"}]}>
Confirmation:
This helper is useful when your form has two fields that must contain the same value, for example email and email_confirmation, or password and password_confirmation. Rails creates a virtual attribute for you; its name is the field name with _confirmation appended.
Example:
class User < ApplicationRecord
validates :email, confirmation: true
validates :email_confirmation, presence: true
end
user = User.create(email: '[email protected]')
begin transaction
rollback transaction
user.errors.full_messages
=> ["Email confirmation can't be blank"]
Acceptance:
This helper ensures that a checkbox in the form is checked when the form is submitted. A typical use case is requiring a user to accept the terms and conditions before signing up. You don't need a database column for this; Rails creates a virtual attribute for you.
Example:
class User < ApplicationRecord
validates :terms_of_service, acceptance: true
end
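A console transcript for this helper, in the style of the earlier examples, might look like the following (assuming the model above and Rails' default error message):
user = User.create(terms_of_service: false)
begin transaction
rollback transaction
user.errors.full_messages
=> ["Terms of service must be accepted"]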
Allow_nil
In some cases you want to validate an attribute only when a value is present and skip the validation otherwise. The allow_nil option does exactly that: it skips the validation when the attribute value is nil and runs it only when a value is present.
Example:
The first example saves the record because the age value is nil; the second example returns errors because the age value is not in the specified list.
class User < ApplicationRecord
validates :age, inclusion: { in: %w(10 20 30) }, allow_nil: true
end
user = User.create(name: 'Test', age: nil)
begin transaction
commit transaction
=> #<User id: 15, name: "Test", email: nil, created_at: "2018-11-23 01:49:10", updated_at: "2018-11-23 01:49:10", age: nil>
user = User.create(name: 'Test', age: 40)
begin transaction
rollback transaction
user.errors.full_messages
=> ["Age is not included in the list"]
Allow_blank
This option is similar to allow_nil; the only difference is that allow_nil skips the validation only when the value is nil, whereas allow_blank skips the validation when the attribute value is nil or an empty string ("").
Example:
class User < ApplicationRecord
validates :age, inclusion: { in: %w(10 20 30) }, allow_blank: true
end
With allow_nil, an empty string still triggers the validation, so the save is rolled back:
user = User.create(name: 'Test', age: "")
begin transaction
rollback transaction
With allow_blank, the record is saved even though the age value is an empty string:
user = User.create(name: 'Test', age: "")
begin transaction
commit transaction
=> #<User id: 16, name: "Test", email: nil, created_at: "2018-11-23 01:55:47", updated_at: "2018-11-23 01:55:47", age: "">
Message
Each validation helper has a default error message set by Rails; the message option allows you to customize these error messages.
Example:
The default error message of the presence helper is "Name can't be blank" (assuming the attribute is called "name"). Here we override it:
class User < ApplicationRecord
validates :name, presence: { message: "is required" }
end
user = User.create(name: "")
(0.2ms) begin transaction
(0.1ms) rollback transaction
user.errors.full_messages
=> ["Name is required"]
In the same way, you can customize the error message of any validation helper.
On:
By default, each validation helper runs before the record is stored in the database, i.e. both when creating the record and when updating it. The on option allows you to specify when the validation should happen.
Example:
class User < ApplicationRecord
validates :age, presence: true, on: :create # skip validation on update
validates :email, uniqueness: true, on: :update # skip validation on create
validates :name, presence: { message: "is required" } # validates on create and update
end
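As mentioned at the start, Rails also lets you write custom validation methods when the built-in helpers are not enough. Here is a minimal sketch (the method name and the rule are hypothetical examples, not part of the helpers above):
class User < ApplicationRecord
  validate :age_must_be_realistic

  private

  def age_must_be_realistic
    # errors.add marks the record invalid, just like the built-in helpers do
    errors.add(:age, "seems unrealistic") if age.present? && age.to_i > 120
  end
end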
Alright! We have now covered all the major validations in Rails, which can greatly help in building user-centric forms. Suggestions for improving this overview of Rails validations are welcome. Planning to build a user-centric site in Rails and looking for the right team to take it up? Get in touch with highly skilled Rails application development experts to guide you in building a user-friendly Rails application.
• kvuong
• NEWBIE
• 0 Points
• Member since 2007
• Chatter
Feed
• 0
Best Answers
• 0
Likes Received
• 0
Likes Given
• 1
Questions
• 1
Replies
I'm trying to create a report that counts the number of times an item is selected from a multiselect picklist. For example if there are three people and they have the following selected:
Person A: Item A, B, C
Person B: Item A, C
Person C: Item B
The report would tell me that two people did A, two did B, and two did C. Further, I'd need to be able to sum up the number of times Person A does item A, and so on.
Any suggestions?
• March 17, 2008
• Like
• 0
Hello -
I have a client who wants to add a stoplight image next to a custom "STATUS" field. I found this formula in the help section of the Salesforce site: IMAGE(image_url, alternate_text, height, width)
However, this formula directs you to find an image URL. Is it possible to insert an image from a saved file such as a gif or jpeg? Does the image have to be linked to a URL to insert?
Any input is appreciated.
Thanks!
In Files
• openssl/ossl_pkey.c
OpenSSL::PKey::EC
Constants
NAMED_CURVE
Public Class Methods
builtin_curves => [[name, comment], ...]
See the OpenSSL documentation for EC_builtin_curves()
static VALUE ossl_s_builtin_curves(VALUE self)
{
EC_builtin_curve *curves = NULL;
int n;
int crv_len = rb_long2int(EC_get_builtin_curves(NULL, 0));
VALUE ary, ret;
curves = ALLOCA_N(EC_builtin_curve, crv_len);
if (curves == NULL)
return Qnil;
if (!EC_get_builtin_curves(curves, crv_len))
ossl_raise(rb_eRuntimeError, "EC_get_builtin_curves");
ret = rb_ary_new2(crv_len);
for (n = 0; n < crv_len; n++) {
const char *sname = OBJ_nid2sn(curves[n].nid);
const char *comment = curves[n].comment;
ary = rb_ary_new2(2);
rb_ary_push(ary, rb_str_new2(sname));
rb_ary_push(ary, comment ? rb_str_new2(comment) : Qnil);
rb_ary_push(ret, ary);
}
return ret;
}
OpenSSL::PKey::EC.new()
OpenSSL::PKey::EC.new(ec_key)
OpenSSL::PKey::EC.new(ec_group)
OpenSSL::PKey::EC.new("secp112r1")
OpenSSL::PKey::EC.new(pem_string)
OpenSSL::PKey::EC.new(pem_string [, pwd])
OpenSSL::PKey::EC.new(der_string)
See the OpenSSL documentation for:
EC_KEY_*
static VALUE ossl_ec_key_initialize(int argc, VALUE *argv, VALUE self)
{
EVP_PKEY *pkey;
EC_KEY *ec = NULL;
VALUE arg, pass;
VALUE group = Qnil;
char *passwd = NULL;
GetPKey(self, pkey);
if (pkey->pkey.ec)
ossl_raise(eECError, "EC_KEY already initialized");
rb_scan_args(argc, argv, "02", &arg, &pass);
if (NIL_P(arg)) {
ec = EC_KEY_new();
} else {
if (rb_obj_is_kind_of(arg, cEC)) {
EC_KEY *other_ec = NULL;
SafeRequire_EC_KEY(arg, other_ec);
ec = EC_KEY_dup(other_ec);
} else if (rb_obj_is_kind_of(arg, cEC_GROUP)) {
ec = EC_KEY_new();
group = arg;
} else {
BIO *in = ossl_obj2bio(arg);
if (!NIL_P(pass)) {
passwd = StringValuePtr(pass);
}
ec = PEM_read_bio_ECPrivateKey(in, NULL, ossl_pem_passwd_cb, passwd);
if (!ec) {
OSSL_BIO_reset(in);
ec = PEM_read_bio_EC_PUBKEY(in, NULL, ossl_pem_passwd_cb, passwd);
}
if (!ec) {
OSSL_BIO_reset(in);
ec = d2i_ECPrivateKey_bio(in, NULL);
}
if (!ec) {
OSSL_BIO_reset(in);
ec = d2i_EC_PUBKEY_bio(in, NULL);
}
BIO_free(in);
if (ec == NULL) {
const char *name = StringValueCStr(arg);
int nid = OBJ_sn2nid(name);
(void)ERR_get_error();
if (nid == NID_undef)
ossl_raise(eECError, "unknown curve name (%s)\n", name);
if ((ec = EC_KEY_new_by_curve_name(nid)) == NULL)
ossl_raise(eECError, "unable to create curve (%s)\n", name);
EC_KEY_set_asn1_flag(ec, OPENSSL_EC_NAMED_CURVE);
EC_KEY_set_conv_form(ec, POINT_CONVERSION_UNCOMPRESSED);
}
}
}
if (ec == NULL)
ossl_raise(eECError, NULL);
if (!EVP_PKEY_assign_EC_KEY(pkey, ec)) {
EC_KEY_free(ec);
ossl_raise(eECError, "EVP_PKEY_assign_EC_KEY");
}
rb_iv_set(self, "@group", Qnil);
if (!NIL_P(group))
rb_funcall(self, rb_intern("group="), 1, arg);
return self;
}
Public Instance Methods
check_key => true
Raises an exception if the key is invalid.
See the OpenSSL documentation for EC_KEY_check_key()
static VALUE ossl_ec_key_check_key(VALUE self)
{
EC_KEY *ec;
Require_EC_KEY(self, ec);
if (EC_KEY_check_key(ec) != 1)
ossl_raise(eECError, "EC_KEY_check_key");
return Qtrue;
}
dh_compute_key(pubkey) => String
See the OpenSSL documentation for ECDH_compute_key()
static VALUE ossl_ec_key_dh_compute_key(VALUE self, VALUE pubkey)
{
EC_KEY *ec;
EC_POINT *point;
int buf_len;
VALUE str;
Require_EC_KEY(self, ec);
SafeRequire_EC_POINT(pubkey, point);
/* BUG: need a way to figure out the maximum string size */
buf_len = 1024;
str = rb_str_new(0, buf_len);
/* BUG: take KDF as a block */
buf_len = ECDH_compute_key(RSTRING_PTR(str), buf_len, point, ec, NULL);
if (buf_len < 0)
ossl_raise(eECError, "ECDH_compute_key");
rb_str_resize(str, buf_len);
return str;
}
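For orientation, here is a minimal Ruby usage sketch of this method (the curve name is chosen purely for illustration; both parties must use the same group, and public_key returns the EC::Point this method expects):
alice = OpenSSL::PKey::EC.new('prime256v1')
alice.generate_key
bob = OpenSSL::PKey::EC.new('prime256v1')
bob.generate_key
# Each side combines its private key with the other's public point:
alice.dh_compute_key(bob.public_key) == bob.dh_compute_key(alice.public_key)
# => true -- both sides derive the same shared secret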
dsa_sign_asn1(data) => String
See the OpenSSL documentation for ECDSA_sign()
static VALUE ossl_ec_key_dsa_sign_asn1(VALUE self, VALUE data)
{
EC_KEY *ec;
unsigned int buf_len;
VALUE str;
Require_EC_KEY(self, ec);
StringValue(data);
if (EC_KEY_get0_private_key(ec) == NULL)
ossl_raise(eECError, "Private EC key needed!");
str = rb_str_new(0, ECDSA_size(ec) + 16);
if (ECDSA_sign(0, (unsigned char *) RSTRING_PTR(data), RSTRING_LENINT(data), (unsigned char *) RSTRING_PTR(str), &buf_len, ec) != 1)
ossl_raise(eECError, "ECDSA_sign");
rb_str_resize(str, buf_len);
return str;
}
dsa_verify_asn1(data, sig) => true or false
See the OpenSSL documentation for ECDSA_verify()
static VALUE ossl_ec_key_dsa_verify_asn1(VALUE self, VALUE data, VALUE sig)
{
EC_KEY *ec;
Require_EC_KEY(self, ec);
StringValue(data);
StringValue(sig);
switch (ECDSA_verify(0, (unsigned char *) RSTRING_PTR(data), RSTRING_LENINT(data), (unsigned char *) RSTRING_PTR(sig), (int)RSTRING_LEN(sig), ec)) {
case 1: return Qtrue;
case 0: return Qfalse;
default: break;
}
ossl_raise(eECError, "ECDSA_verify");
UNREACHABLE;
}
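A minimal Ruby sketch tying dsa_sign_asn1 and dsa_verify_asn1 together (in practice you sign a message digest rather than the raw message; the curve name is illustrative):
key = OpenSSL::PKey::EC.new('prime256v1')
key.generate_key
digest = OpenSSL::Digest::SHA256.digest('some message')
sig = key.dsa_sign_asn1(digest)   # requires a private key
key.dsa_verify_asn1(digest, sig)  # => true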
export([cipher, pass_phrase]) => String
to_pem([cipher, pass_phrase]) => String
Outputs the EC key in PEM encoding. If cipher and pass_phrase are given they will be used to encrypt the key. cipher must be an OpenSSL::Cipher::Cipher instance. Note that encryption will only be effective for a private key, public keys will always be encoded in plain text.
static VALUE ossl_ec_key_export(int argc, VALUE *argv, VALUE self)
{
VALUE cipher, passwd;
rb_scan_args(argc, argv, "02", &cipher, &passwd);
return ossl_ec_key_to_string(self, cipher, passwd, EXPORT_PEM);
}
Also aliased as: to_pem
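A usage sketch (the cipher name is illustrative; as noted above, encryption only applies when a private key is present):
key = OpenSSL::PKey::EC.new('prime256v1')
key.generate_key
pem = key.export  # unencrypted PEM
encrypted_pem = key.export(OpenSSL::Cipher::Cipher.new('aes-128-cbc'), 'my pass phrase')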
generate_key => self
See the OpenSSL documentation for EC_KEY_generate_key()
static VALUE ossl_ec_key_generate_key(VALUE self)
{
EC_KEY *ec;
Require_EC_KEY(self, ec);
if (EC_KEY_generate_key(ec) != 1)
ossl_raise(eECError, "EC_KEY_generate_key");
return self;
}
group => group
Returns a constant OpenSSL::EC::Group that is tied to the key. Modifying the returned group can make the key invalid.
static VALUE ossl_ec_key_get_group(VALUE self)
{
VALUE group_v;
EC_KEY *ec;
ossl_ec_group *ec_group;
EC_GROUP *group;
Require_EC_KEY(self, ec);
group_v = rb_iv_get(self, "@group");
if (!NIL_P(group_v))
return group_v;
if ((group = (EC_GROUP *)EC_KEY_get0_group(ec)) != NULL) {
group_v = rb_obj_alloc(cEC_GROUP);
SafeGet_ec_group(group_v, ec_group);
ec_group->group = group;
ec_group->dont_free = 1;
rb_iv_set(group_v, "@key", self);
rb_iv_set(self, "@group", group_v);
return group_v;
}
return Qnil;
}
group = group => group
Returns the same object passed, not the group object associated with the key. If you wish to access the group object tied to the key call key.group after setting the group.
Setting the group will immediately destroy any previously assigned group object. The group is internally copied by OpenSSL; modifying the original group after assignment will not affect the internal key structure (your changes may be lost). BE CAREFUL.
EC_KEY_set_group calls EC_GROUP_free(key->group) then EC_GROUP_dup(), not EC_GROUP_copy. This documentation is accurate for OpenSSL 0.9.8b.
static VALUE ossl_ec_key_set_group(VALUE self, VALUE group_v)
{
VALUE old_group_v;
EC_KEY *ec;
EC_GROUP *group;
Require_EC_KEY(self, ec);
SafeRequire_EC_GROUP(group_v, group);
old_group_v = rb_iv_get(self, "@group");
if (!NIL_P(old_group_v)) {
ossl_ec_group *old_ec_group;
SafeGet_ec_group(old_group_v, old_ec_group);
old_ec_group->group = NULL;
old_ec_group->dont_free = 0;
rb_iv_set(old_group_v, "@key", Qnil);
}
rb_iv_set(self, "@group", Qnil);
if (EC_KEY_set_group(ec, group) != 1)
ossl_raise(eECError, "EC_KEY_set_group");
return group_v;
}
private_key => OpenSSL::BN
See the OpenSSL documentation for EC_KEY_get0_private_key()
static VALUE ossl_ec_key_get_private_key(VALUE self)
{
EC_KEY *ec;
const BIGNUM *bn;
Require_EC_KEY(self, ec);
if ((bn = EC_KEY_get0_private_key(ec)) == NULL)
return Qnil;
return ossl_bn_new(bn);
}
private_key = openssl_bn
See the OpenSSL documentation for EC_KEY_set_private_key()
static VALUE ossl_ec_key_set_private_key(VALUE self, VALUE private_key)
{
EC_KEY *ec;
BIGNUM *bn = NULL;
Require_EC_KEY(self, ec);
if (!NIL_P(private_key))
bn = GetBNPtr(private_key);
switch (EC_KEY_set_private_key(ec, bn)) {
case 1:
break;
case 0:
if (bn == NULL)
break;
default:
ossl_raise(eECError, "EC_KEY_set_private_key");
}
return private_key;
}
private_key? => true or false
Both #public_key? and #private_key? may return false at the same time unlike other PKey classes.
static VALUE ossl_ec_key_is_private_key(VALUE self)
{
EC_KEY *ec;
Require_EC_KEY(self, ec);
return (EC_KEY_get0_private_key(ec) ? Qtrue : Qfalse);
}
public_key => OpenSSL::PKey::EC::Point
See the OpenSSL documentation for EC_KEY_get0_public_key()
static VALUE ossl_ec_key_get_public_key(VALUE self)
{
EC_KEY *ec;
const EC_POINT *point;
VALUE group;
Require_EC_KEY(self, ec);
if ((point = EC_KEY_get0_public_key(ec)) == NULL)
return Qnil;
group = rb_funcall(self, rb_intern("group"), 0);
if (NIL_P(group))
ossl_raise(eECError, "EC_KEY_get0_get0_group (has public_key but no group???");
return ossl_ec_point_dup(point, group);
}
public_key = ec_point
See the OpenSSL documentation for EC_KEY_set_public_key()
static VALUE ossl_ec_key_set_public_key(VALUE self, VALUE public_key)
{
EC_KEY *ec;
EC_POINT *point = NULL;
Require_EC_KEY(self, ec);
if (!NIL_P(public_key))
SafeRequire_EC_POINT(public_key, point);
switch (EC_KEY_set_public_key(ec, point)) {
case 1:
break;
case 0:
if (point == NULL)
break;
default:
ossl_raise(eECError, "EC_KEY_set_public_key");
}
return public_key;
}
public_key? => true or false
Both #public_key? and #private_key? may return false at the same time unlike other PKey classes.
static VALUE ossl_ec_key_is_public_key(VALUE self)
{
EC_KEY *ec;
Require_EC_KEY(self, ec);
return (EC_KEY_get0_public_key(ec) ? Qtrue : Qfalse);
}
to_der => String
See the OpenSSL documentation for i2d_ECPrivateKey_bio()
static VALUE ossl_ec_key_to_der(VALUE self)
{
return ossl_ec_key_to_string(self, Qnil, Qnil, EXPORT_DER);
}
to_pem(p1 = v1, p2 = v2)
Alias for: export
to_text => String
See the OpenSSL documentation for EC_KEY_print()
static VALUE ossl_ec_key_to_text(VALUE self)
{
EC_KEY *ec;
BIO *out;
VALUE str;
Require_EC_KEY(self, ec);
if (!(out = BIO_new(BIO_s_mem()))) {
ossl_raise(eECError, "BIO_new(BIO_s_mem())");
}
if (!EC_KEY_print(out, ec, 0)) {
BIO_free(out);
ossl_raise(eECError, "EC_KEY_print");
}
str = ossl_membio2str(out);
return str;
}
Block Someone on LinkedIn
How to Block Someone on LinkedIn Without Them Knowing
LinkedIn is a professional networking platform where users connect for job opportunities, collaborations, and industry insights. It allows individuals to showcase their skills, share content, and build professional relationships. However, not all connections are positive, and sometimes you may want to limit interactions with certain individuals.
Blocking someone on LinkedIn discreetly can help maintain your professional image. You might want to block a former colleague, a spammer, or someone who is harassing you. By doing this, you can prevent unwanted messages and keep your network focused on positive interactions. Discreet blocking allows you to manage your connections without creating unnecessary drama or tension in your professional life.
LinkedIn’s Privacy Settings
LinkedIn offers several privacy settings that allow users to control their visibility and interactions on the platform. You can manage who sees your profile, activity updates, and connections. For example, you can set your profile to be visible only to your connections or restrict it to your network. These settings are essential for protecting your personal information and ensuring you connect with the right individuals.
When you block someone on LinkedIn, they can no longer view your profile or interact with you. This means they won’t see any of your posts or updates, and you won’t receive messages from them. This feature is particularly useful for avoiding unwanted communication from former colleagues, spammers, or individuals who make you uncomfortable. Blocking allows you to maintain a professional environment without unnecessary distractions.
LinkedIn provides other privacy options to enhance your control over your profile. You can adjust who can see your connections, choose whether your profile is visible to search engines, and limit who can send you connection requests. These choices help you tailor your network based on your preferences and comfort level, allowing you to curate a professional experience that suits your needs.
Step-by-Step Guide to Blocking Someone on LinkedIn
Blocking someone on LinkedIn is a straightforward process that can be done in just a few steps.
1. Access the Profile
Start by logging into your LinkedIn account. Use the search bar at the top of the homepage to enter the name of the person you want to block. You can search for their full name, job title, or company to locate their profile quickly. Once you find their profile in the search results, click on their name to access their LinkedIn page. Ensure you’re on the correct profile to avoid blocking the wrong person.
2. Click on the ‘More’ Button
On the person’s profile page, look for the ‘More’ button, which is usually located near their profile picture and the buttons for connecting or messaging. This button provides additional options that are not immediately visible. By clicking on it, you will access a dropdown menu that includes various actions you can take regarding this connection.
3. Select ‘Report/Block’
In the dropdown menu that appears after clicking the ‘More’ button, find and select the ‘Report/Block’ option. This option allows you to either report inappropriate behavior or block the user. Choosing this will redirect you to a new window where you can take further actions regarding the profile in question.
4. Choose ‘Block’
In the new window, you will see two options: one for reporting the user and another for blocking them. Select the ‘Block [Name]’ option. This action will prevent the individual from viewing your profile, sending you messages, or engaging with your content. LinkedIn will prompt you to confirm your decision, giving you a chance to reconsider.
5. Confirm the Block
After selecting the block option, a confirmation prompt will appear. Click ‘Block’ again to finalize your decision. Once you’ve completed this step, the person will be blocked, and you will no longer see their profile or receive any communication from them. It’s important to note that blocking someone is a discreet action; they won’t be notified that you’ve blocked them. You can unblock someone on LinkedIn at any time.
What Happens After You Block Someone
When you block someone on LinkedIn, several changes occur regarding your interaction with that individual and their access to your profile.
• Restricted Access: Once you block someone, they can no longer view your profile, posts, or any of your activity on LinkedIn. This means they won’t see updates about your professional achievements, shared content, or changes to your profile. Essentially, they are cut off from your LinkedIn presence entirely.
• No Communication: The blocked individual will not be able to send you messages or connect with you on LinkedIn. Any previous conversations you had will remain in your message history, but you will not receive new messages from them. This helps eliminate unwanted communication and allows you to focus on meaningful connections.
• Visibility of Your Connections: Blocking someone also prevents them from seeing your connections. This is important if you want to keep your professional network private or if you believe the person might misuse this information. They won’t be able to view who you are connected to on LinkedIn.
• No Notification Sent: One of the key aspects of blocking someone on LinkedIn is that the blocked individual does not receive any notification about the action. This means you can manage your professional network discreetly without causing unnecessary drama or tension.
• Limited Interaction with Shared Groups: If you and the blocked person are both members of the same LinkedIn group, you may still see each other’s posts and comments within that group. However, you won’t be able to directly interact with them. They will still be able to see your posts in the group, but any further communication is halted.
Alternative Options to Blocking
If blocking someone on LinkedIn doesn’t seem like the best solution for your situation, there are several alternative options you can consider.
Remove Connections
Instead of blocking, you can choose to remove someone from your connections. This action will cut off the professional relationship, preventing them from seeing your updates and profile. To remove a connection, go to their profile, click on the ‘More’ button, and select ‘Remove Connection.’ This option allows you to distance yourself from someone without the finality of blocking.
Adjust Privacy Settings
You can modify your privacy settings to limit who can see your profile and activity. For example, you can make your profile visible only to your connections or restrict who can send you connection requests. Adjusting these settings helps you control your visibility and interactions without having to block anyone.
Mute Notifications
If you want to stay connected but prefer not to see their updates, consider muting notifications from that person. This option allows you to remain in their network without receiving alerts about their activity. While you’ll still be connected, you won’t be bothered by their posts or messages.
Report Inappropriate Behavior
If the person is behaving inappropriately, you can report them to LinkedIn instead of blocking. Use the ‘Report/Block’ option on their profile to report harassment or spam. LinkedIn will review the report and take appropriate action. This option allows you to address serious issues without severing the connection outright.
Limit Profile Visibility
You can also adjust your profile visibility settings to limit who can see your information. For instance, you can set your profile to be visible only to your connections or to specific individuals. This way, you can protect your personal information while still being open to networking with others.
FAQs
Q. Can I see if someone has blocked me on LinkedIn?
No, LinkedIn does not notify users when they are blocked. You will only notice if you can no longer view their profile or interact with them.
Q. Will blocking someone remove them from my connections?
Yes, when you block someone, they are automatically removed from your connections. You won’t be connected anymore.
Q. Can a blocked person see my comments in groups?
If you and the blocked person are in the same group, they can still see your comments and posts within that group. However, you won’t be able to interact with them directly.
Q. Can I unblock someone after blocking them?
Yes, you can unblock someone at any time through your privacy settings. Once unblocked, they can see your profile again, but you will not automatically reconnect.
Q. Does blocking someone delete our previous messages?
No, blocking someone does not delete previous messages in your chat history. You can still access the past conversations, but you won’t receive new messages from them.
Conclusion
Blocking someone on LinkedIn can help you maintain a professional online presence and protect yourself from unwanted interactions. By following the simple steps outlined in this article, you can easily block someone without them knowing. This action ensures that the person can no longer view your profile or send you messages, allowing you to focus on meaningful connections.
Remember, blocking is just one option. If you prefer not to block someone, consider alternatives like removing them from your connections or adjusting your privacy settings. Ultimately, the goal is to create a positive networking environment that aligns with your professional objectives.
Cisco Unified Contact Center Express Data Store
User Scripts, Grammars, and Documents
Question
Which Cisco Unified Contact Center Express data store contains user scripts, grammars, and documents?
Answers
Explanations
B.
Unified CCX applications might use auxiliary files that interact with callers, such as scripts, pre-recorded prompts, grammars, and custom Java classes.
Depending on each implementation, Unified CCX applications use some or all of the following file types.
The Unified CCX Server's local disk prompt, grammar, and document files are synchronized with the central repository during Unified CCX engine startup and during run-time when the Repository datastore is modified.
The correct answer is E. script data store.
Cisco Unified Contact Center Express (UCCX) is a customer interaction solution designed for small to medium-sized businesses. It offers a range of features to manage customer interactions, including call routing, self-service options, and agent desktop interfaces.
In UCCX, user scripts, grammars, and documents are stored in the Script Data Store. This data store is used to manage the scripts and other resources that are used to implement the call handling logic in the UCCX application.
User scripts are used to define the call flow, determine the options that are presented to the caller, and specify the actions that are taken by the system in response to caller input. Grammars are used to define the speech recognition rules that are used to interpret caller input. Documents are used to provide information to the caller or to the agent.
The Script Data Store is a repository that is used to store and manage all of these resources. It is accessed through the UCCX Administration interface, which provides a range of tools for creating, modifying, and managing scripts, grammars, and documents.
In summary, the correct answer is E. script data store, which is used to store user scripts, grammars, and documents in Cisco Unified Contact Center Express.
Data link layer
From Wikipedia, the free encyclopedia
Jump to: navigation, search
In the seven-layer OSI model of computer networking, the data link layer is layer 2. In TCP/IP reference model, it corresponds to, or is part of the link layer.
The data link layer is the protocol layer that transfers data between adjacent network nodes in a wide area network or between nodes on the same local area network segment.[1] The data link layer provides the functional and procedural means to transfer data between network entities and might provide the means to detect and possibly correct errors that may occur in the physical layer. Examples of data link protocols are Ethernet for local area networks (multi-node), the Point-to-Point Protocol (PPP), HDLC and ADCCP for point-to-point (dual-node) connections.
The data link layer is concerned with local delivery of frames between devices on the same LAN. Data-link frames, as these protocol data units are called, do not cross the boundaries of a local network. Inter-network routing and global addressing are higher layer functions, allowing data-link protocols to focus on local delivery, addressing, and media arbitration. In this way, the data link layer is analogous to a neighborhood traffic cop; it endeavors to arbitrate between parties contending for access to a medium, without concern for their ultimate destination.
When devices attempt to use a medium simultaneously, frame collisions occur. Data-link protocols specify how devices detect and recover from such collisions, and may provide mechanisms to reduce or prevent them.
Overview
Delivery of frames by layer-2 devices is effected through the use of unambiguous hardware addresses. A frame's header contains source and destination addresses that indicate which device originated the frame and which device is expected to receive and process it. In contrast to the hierarchical and routable addresses of the network layer, layer-2 addresses are flat, meaning that no part of the address can be used to identify the logical or physical group to which the address belongs.
The data link thus provides data transfer across the physical link. That transfer can be reliable or unreliable; many data-link protocols do not have acknowledgments of successful frame reception and acceptance, and some data-link protocols might not even have any form of checksum to check for transmission errors. In those cases, higher-level protocols must provide flow control, error checking, and acknowledgments and retransmission.
In some networks, such as IEEE 802 local area networks, the data link layer is described in more detail with media access control (MAC) and logical link control (LLC) sublayers; this means that the IEEE 802.2 LLC protocol can be used with all of the IEEE 802 MAC layers, such as Ethernet, token ring, IEEE 802.11, etc., as well as with some non-802 MAC layers such as FDDI. Other data-link-layer protocols, such as HDLC, are specified to include both sublayers, although some other protocols, such as Cisco HDLC, use HDLC's low-level framing as a MAC layer in combination with a different LLC layer. In the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables), the data link layer is divided into three sub-layers (application protocol convergence, logical link control and medium access control).
Within the semantics of the OSI network architecture, the data-link-layer protocols respond to service requests from the network layer and they perform their function by issuing service requests to the physical layer.
Sublayers of the data link layer
The data link layer has two sublayers: logical link control (LLC) and media access control (MAC).[2]
Logical link control sublayer
The uppermost sublayer, LLC, multiplexes protocols running atop the data link layer, and optionally provides flow control, acknowledgment, and error notification. The LLC provides addressing and control of the data link. It specifies which mechanisms are to be used for addressing stations over the transmission medium and for controlling the data exchanged between the originator and recipient machines.
Media access control sublayer
MAC may refer to the sublayer that determines who is allowed to access the media at any one time (e.g. CSMA/CD). Other times it refers to a frame structure delivered based on MAC addresses inside.
There are generally two forms of media access control: distributed and centralized. Both of these may be compared to communication between people. In a network made up of people speaking, i.e. a conversation, we look for clues from our fellow talkers to see if any of them appear to be about to speak. If two people speak at the same time, they will back off and begin a long and elaborate game of saying "no, you first".
The Media Access Control sublayer also determines where one frame of data ends and the next one starts – frame synchronization. There are four means of frame synchronization: time based, character counting, byte stuffing and bit stuffing.
• The time based approach simply puts a specified amount of time between frames. The major drawback of this is that new gaps can be introduced or old gaps can be lost due to external influences.
• Character counting simply notes the count of remaining characters in the frame's header. This method, however, is easily disturbed if this field is corrupted in some way, making it hard to maintain synchronization.
• Byte stuffing precedes the frame with a special byte sequence such as DLE STX and succeeds it with DLE ETX. Appearances of DLE (byte value 0x10) have to be escaped with another DLE. The start and stop marks are detected at the receiver and removed as well as the inserted DLE characters.
• Similarly, bit stuffing replaces these start and end marks with flag consisting of a special bit pattern (e.g. a 0, six 1 bits and a 0). Occurrences of this bit pattern in the data to be transmitted are avoided by inserting a bit. To use the example where the flag is 01111110, a 0 is inserted after 5 consecutive 1's in the data stream. The flags and the inserted 0's are removed at the receiving end. This makes for arbitrary long frames and easy synchronization for the recipient. Note that this stuffed bit is added even if the following data bit is 0, which could not be mistaken for a sync sequence, so that the receiver can unambiguously distinguish stuffed bits from normal bits.
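To make the bit-stuffing rule above concrete, here is a small C sketch (written for this article, not taken from any standard; it treats the frame as an array of 0/1 values and inserts a 0 after every five consecutive 1s):

#include <stddef.h>

/* Insert a 0 after every run of five consecutive 1 bits.
 * in and out are arrays of 0/1 values; returns the stuffed length.
 * out must be large enough for the worst case (n + n/5 entries). */
size_t bit_stuff(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t j = 0, ones = 0;
    for (size_t i = 0; i < n; i++) {
        out[j++] = in[i];
        if (in[i] == 1) {
            if (++ones == 5) { /* five 1s in a row: stuff a 0 */
                out[j++] = 0;
                ones = 0;
            }
        } else {
            ones = 0;
        }
    }
    return j;
}

The receiver runs the inverse: after seeing five consecutive 1s, it drops the following stuffed 0.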
Data link layer services
Error detection and correction
• Besides framing, data link layers also include mechanisms to detect and even recover from transmission errors.
• For a receiver to detect transmission error, the sender must add redundant information (in the form of bits) as an error detection code to the frame sent.
• When the receiver obtains a frame with an error detection code it recomputes it and verifies whether the received error detection code matches the computed error detection code. If they match the frame is considered to be valid.
• An error detection code can be defined as a function that computes the r redundant bits corresponding to each string of N data bits.
• The simplest error detection code is the Parity bit.
• The parity bit allows a receiver to detect transmission errors that have affected a single bit among the transmitted N+r bits. If there are two or more bits in error, the receiver may not be able to detect the transmission error.
A simple example shows how this works using meta-data. Say we want to transmit the word 'HELLO'. To keep things simple we will encode each letter as its position in the alphabet: the letter A is coded as 1, B as 2, and so on:
H E L L O
8 5 12 12 15
Adding the digits 8 + 5 + 12 + 12 + 15 = 52. Then we add 5 + 2 = 7 to get the meta-data. We then transmit:
8 5 12 12 15 7
If there are no errors, the receiver will get 8 5 12 12 15 7. The receiver knows that the last number received is the error-detecting meta-data and that everything before it is the message. It can redo the above math, and if it arrives at the same meta-data value, it can conclude that the data was received without errors. If it receives something like 7 5 12 12 15 7, it can run the check: 7 + 5 + 12 + 12 + 15 = 51 and 5 + 1 = 6. Since 6 does not equal 7, the receiver can discard the received data as defective.
Software interfaces
Data link layer implementations may be in software only, simulating a network interface.
Relation to TCP/IP model
In the frame work of the TCP/IP (Internet Protocol Suite) model, OSI's data link layer, in addition to other components, is contained in TCP/IP's lowest layer, the link layer. The Internet Protocol's link layer only concerns itself with hardware issues to the point of obtaining hardware addresses for locating hosts on a physical network link and transmitting data frames onto the link. Thus, the link layer is broader in scope and encompasses all methods that affect the local link, which is the group of connections that are limited in scope to other nodes on the local access network.
The TCP/IP model is not a top/down comprehensive design reference for networks. It was formulated for the purpose of illustrating the logical groups and scopes of functions needed in the design of the suite of internetworking protocols of TCP/IP, as needed for the operation of the Internet. In general, direct or strict comparisons of the OSI and TCP/IP models should be avoided, because the layering in TCP/IP is not a principal design criterion and in general considered to be "harmful" (RFC 3439). In particular, TCP/IP does not dictate a strict hierarchical sequence of encapsulation requirements, as is attributed to OSI protocols.
References
1. ^ "What is layer 2, and Why Should You Care?". accel-networks.com. Archived from the original on 2010-02-18. Retrieved 2009-09-29.
2. ^ Regis J. Bates and Donald W. Gregory (2007). Voice & data communications handbook (5th ed.). McGraw-Hill Professional. p. 45. ISBN 978-0-07-226335-0.
• Tanenbaum, Andrew S. (2005). Computer Networks (4th ed.). Delhi: Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in South Asia. ISBN 81-7758-165-1.
Version 3 (modified by anonymous, 11 years ago) (diff)
--
Sometimes it is useful to only allow users from a certain group to log in.
The main problem with that idea is that you can't actually check whether the user is in the required group before they log in. This is a kind of chicken-and-egg problem in Django, because as the anonymous user you don't have enough rights to look up the necessary information.
The solution/workaround to the problem is:
a) modified decorator for user_passes_test from django.contrib.auth.decorator:
from urllib import quote  # needed for quote() in the redirect below

from django.contrib.auth import LOGIN_URL, REDIRECT_FIELD_NAME
from django.http import HttpResponseRedirect, Http404

def user_passes_mytest(test_func, login_url=LOGIN_URL):
    def _dec(view_func):
        def _checklogin(request, *args, **kwargs):
            try:
                if test_func(request.user):
                    return view_func(request, *args, **kwargs)
                return HttpResponseRedirect('%s?%s=%s' % (login_url, REDIRECT_FIELD_NAME, quote(request.get_full_path())))
            except:  # any failure while checking is treated as "not allowed"
                raise Http404
        return _checklogin
    return _dec
b) one central login view (and template) for every (django) application:
from django.contrib.auth.decorators import login_required
from django.contrib.auth.models import Group
from django.contrib.auth import SESSION_KEY
from django.shortcuts import render_to_response

@login_required
def index(request):
    custgroup = Group.objects.get(name="thegroupyouneed") # Replace group name here!
    if custgroup in request.user.groups.all():
        return render_to_response('myapp/templates/index.html') # The welcome screen after a successful login
    else:
        try:
            del request.session[SESSION_KEY]
        except KeyError:
            pass
        # The 'not allowed' screen which should be shown if the user is not in the correct group
        return render_to_response('registration/notallowed.html')
and finally c) the other views using our new decorator:
from django.contrib.auth.models import Group
# import user_passes_mytest from wherever you placed the decorator above,
# e.g. (hypothetical path): from myproject.decorators import user_passes_mytest

REQUIRED_GROUP = Group.objects.get(name="thegroupyouneed") # Replace group name here!

@user_passes_mytest(lambda u: REQUIRED_GROUP in u.groups.all())
def someviewhere(request):
    return render_to_response('myapp/templates/someviewhere.html')
That's it. Enjoy!
Balanced Partitioning Dynamic Programming Problem
Thread: Balanced Partitioning Dynamic Programming Problem
1. #1
Join Date
Feb 2013
Posts
2
Balanced Partitioning Dynamic Programming Problem
The problem is that you have a set of numbers and you need to divide that set into two subsets such that the difference between the sums of the subsets is minimal.
Example: a set of numbers {1,5,9,3,8}, now the solution is two subsets, one subset with elements {9,3} and the other {8,5,1} the sum of the first one is 13 and the sum of the second is 13 so the difference between the sums is 0. The result shows the difference between the sums.
Another example: a set of numbers where the difference between the subsets cannot be zero, {9 51 308 107 27 91 62 176 28 6}, the minimal difference between the two subsets is 2.
Can someone please explain this code in detail? I've tried debugging it, but I can't figure out how it produces the result. I've been searching for a solution to the problem and this is the code that I stumbled upon. I want to know how the function finds the two subsets; it works great, because I've tested it for up to 300 inputs whose sum adds up to 100,000.
Many thanks.
P.S. Don't worry about the memory leak or that you can only input 300 numbers.
Code:
#include <iostream>
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <limits.h>
using namespace std;
int BalancedPartition ( int a[] , int n )
{
    // Total of all elements; every reachable subset sum lies in [0, sum].
    int sum = 0;
    for( int i = 0 ; i < n ; i++)
        sum += a[i];
    // s[j] == 1 means "some subset of the elements processed so far sums to j".
    // Only the empty subset (sum 0) is reachable at the start.
    int *s = new int[sum+1];
    s[0] = 1;
    for(int i = 1 ; i < sum+1 ; i++) s[i] = 0;
    int diff = INT_MAX , ans = 0; // ans = reachable subset sum closest to sum/2
    for(int i = 0 ; i < n ; i++)
    {
        // Classic 0/1 subset-sum DP: walk j downwards so each element
        // is counted at most once per subset.
        for(int j = sum ; j >= a[i] ; j--)
        {
            s[j] = s[j] | s[j-a[i]]; // j is reachable if it already was, or via j-a[i] plus a[i]
            if( s[j] == 1 )
            {
                // Track the reachable sum j nearest to half the total;
                // this minimizes |sum - 2*j|, the difference between the two parts.
                if( diff > abs( sum/2 - j) )
                {
                    diff = abs( sum/2 - j );
                    ans = j;
                }
            }
        }
    }
    return sum-ans-ans; // = sum - 2*ans: difference between the two subset sums
}
int main()
{
int n,result, arr[300];
cin >>n;
for(int i = 0; i < n; i++)
{
cin>>arr[i];
}
result = BalancedPartition(arr,n);
cout <<abs(result); // The difference between the sums of the two subsets
return 0;
}
2. #2
2kaud's Avatar
2kaud is online now Super Moderator Power Poster
Join Date
Dec 2012
Location
England
Posts
5,307
Re: Balanced Partitioning Dynamic Programming Problem
Shouldn't the subsets be {1 3 9} and {8 5}, to give each subset a total of 13? Also, your program gives a difference of 1, not 2, for {9 51 308 107 27 91 62 176 28 6}. To understand how it produces the result, try making s a pointer to an array of bool instead of int and replace 1 with true and 0 with false when using s. Also note that the number of elements of the s array is 1 more than the sum of the numbers entered. It iterates through the entered numbers from start to finish and, for each number entered, iterates through the s bool array backwards.
Can someone please explain this code in detail?
Which part of the code don't you understand? It's fairly straightforward C++. If you indicate which line(s) you don't understand, I'll try to explain what that particular C++ code is doing.
3. #3
Join Date
Feb 2013
Posts
2
Re: Balanced Partitioning Dynamic Programming Problem
Yeah, the example I wrote has many solutions; I wrote one of them, and yours is another. About the code: it is straightforward C++, but I do not see how the s array is used and how the solutions are generated through it (that would be the two for loops).
4. #4
Join Date
Oct 2008
Posts
1,420
Re: Balanced Partitioning Dynamic Programming Problem
Clearly, if S is the total sum and P is the sum of a subset, then the result R (= the absolute value of the difference between P and the sum of its complement) equals |S - 2*P|. The two loops "for( ... for( ... if( s[j] == 1 ){ /*test*/ } ..." just iterate over all the possible values of P (non-optimally, though). Then the "/*test*/" part above simply selects the P that minimizes R.
If you don't see how the iteration works, follow 2kaud's advice of considering "s" as an array of booleans; then follow the code and write down on paper the sequence of positions of s[] elements that are set to true at each iteration.
Skip to content
Permalink
master
Switch branches/tags
Go to file
Cannot retrieve contributors at this time
outboundFormTracker
This guide explains what the outboundFormTracker plugin is and how to integrate it into your analytics.js tracking implementation.
Overview
When a visitor to your site submits a form that goes to another page on your site, you can usually see this information in Google Analytics because the page being navigated to will typically send its own pageview. However, if a visitor to your site submits a form that points to an external domain, you'll never know unless you track that submit separately.
The outboundFormTracker plugin automatically detects when forms are submitted to sites on different domains and sends an event hit to Google Analytics.
Historically, outbound form tracking has been tricky to implement because most browsers stop executing JavaScript on the current page once a form that requests a new page is submitted. The outboundFormTracker plugin handles these complications for you.
Usage
To enable the outboundFormTracker plugin, run the require command, specify the plugin name 'outboundFormTracker', and pass in any configuration options (if any) you wish to set:
ga('require', 'outboundFormTracker', options);
Determining what is an outbound form
By default a form is considered outbound if the hostname of the URL it's pointing to differs from location.hostname. Note that this means forms pointing to different subdomains within the same higher-level domain are (by default) still considered outbound. To customize this logic, see shouldTrackOutboundForm in the options section below.
Options
The following table outlines all possible configuration options for the outboundFormTracker plugin. If any of the options has a default value, the default is explicitly stated:
Name Type Description
formSelector string A selector used to identify forms to listen for submit events on.
Default: 'form'
shouldTrackOutboundForm Function A function that returns true if the form in question should be considered an outbound form. The function is invoked with the form element as its first argument and a parseUrl utility function (which returns a Location-like object) as its second argument.
Default:
function shouldTrackOutboundForm(form, parseUrl) {
var url = parseUrl(form.action);
return url.hostname != location.hostname &&
url.protocol.slice(0, 4) == 'http';
}
fieldsObj Object See the common options guide for the fieldsObj description.
attributePrefix string See the common options guide for the attributePrefix description.
Default: 'ga-'
hitFilter Function See the common options guide for the hitFilter description.
Default field values
The outboundFormTracker plugin sends hits with the following values. To customize these values, use one of the options described above.
Field Value
hitType 'event'
eventCategory 'Outbound Form'
eventAction 'submit'
eventLabel form.action
Note: the reference to form in the table above refers to the <form> element being submitted.
Methods
The following table lists all methods for the outboundFormTracker plugin:
Name Description
remove Removes the outboundFormTracker plugin from the specified tracker, removes all event listeners from the DOM, and restores all modified tasks to their original state prior to the plugin being required.
For details on how analytics.js plugin methods work and how to invoke them, see calling plugin methods in the analytics.js documentation.
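For example, to remove the plugin from the default tracker, invoke the method through the ga() command queue (a minimal sketch, assuming the plugin was previously required on that tracker):
ga('outboundFormTracker:remove');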
Examples
Basic usage
ga('require', 'outboundFormTracker');
<form action="https://example.com">...</form>
Customizing the form selector
This code only tracks form elements with the js-track-submits class.
ga('require', 'outboundFormTracker', {
formSelector: '.js-track-submits'
});
<form class="js-track-submits" action="https://example.com">...</form>
Customizing what is considered an "outbound" form
This code changes the default logic used to determine if a form is "outbound". It updates the logic to only track forms that submit to the foo.com and bar.com domains:
ga('require', 'outboundFormTracker', {
shouldTrackOutboundForm: function(form, parseUrl) {
var url = parseUrl(form.action);
return /(foo|bar)\.com$/.test(url.hostname);
}
});
With the above code, submits from the following form won't be tracked, even though the form is submitting to an external domain:
<form action="https://example.com">...</form>
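Customizing hits with hitFilter
This sketch is based on the hitFilter behavior described in the common options guide; it assumes the filter receives the tracker model as its first argument and the submitted <form> element as its second, and that throwing inside the filter aborts the hit. The data-no-track attribute is a hypothetical flag used for illustration:
ga('require', 'outboundFormTracker', {
  hitFilter: function(model, form) {
    if (form.hasAttribute('data-no-track')) {
      // Throwing inside hitFilter aborts the hit.
      throw new Error('Submit not tracked');
    }
    // Set a temporary (hit-scoped) custom dimension on the model.
    model.set('dimension1', form.id || '(not set)', true);
  }
});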
My overall question is how to move an existing UIManagedDocument (with a Core Data SQL store) from the local sandbox to iCloud.
Everything I am reading online is telling me to use NSFileManager's setUbiquitous:itemAtURL:destinationURL:error:. When doing this, though, I noticed that the persistent store gets copied up to the cloud, which I believe is wrong. After thinking this through, I am starting to believe that I should create a new document in the cloud and then manually insert the existing records (because an existing database in the sandbox does not have any transaction logs).
So, is my line of thinking correct, or is the persistent store that is copied up there used as a starting point (so if another device is connected, it downloads the persistent store as a base and then applies any new transactions that occurred after that)?
A secondary related question (just to confirm my understanding of how UIManagedDocument works): if I were to create a document (in the cloud), add a record, update the same record 100,000 times, and then open this document on a new device, would it have to apply the 100,001 transactions to a new database? It seems like a heavily used document is going to keep consuming space in the cloud even if it has minimal data but lots of updates.
Hi, did you ever fix this? I'm having the same problem and it's driving me nuts! – Adam Carter May 8 '13 at 0:05
Uncovering the Benefits and Pitfalls of IPTV M3U: What You Need to Know
Delve into the world of IPTV M3U and unlock its hidden potential. In this comprehensive guide, we dissect the benefits and pitfalls of IPTV M3U, equipping you with the knowledge to navigate this ever-evolving technology. From seamless streaming to a vast array of entertainment options, IPTV M3U offers a revolutionary viewing experience. However, pitfalls such as technical complexities and potential security risks necessitate a close examination of this popular streaming method.
Uncovering the layers of IPTV M3U will shed light on the intricacies of this technology, providing you with a deeper understanding of its functionality and the considerations necessary to make the most of it. As the digital landscape continues to expand, grasping the nuances of IPTV M3U is essential for anyone seeking to optimize their streaming endeavors. Join us as we explore the world of IPTV M3U and gain valuable insights into its benefits and potential drawbacks.
Understanding IPTV M3U
IPTV M3U stands for Internet Protocol Television with M3U being a file format that stores multimedia playlists. This technology allows users to stream television content through internet protocol networks. Unlike traditional methods of content delivery, IPTV M3U offers a more flexible and personalized viewing experience. Users can access a wide range of channels and on-demand content, tailored to their preferences. IPTV M3U is compatible with various devices, including smartphones, smart TVs, and streaming boxes, making it accessible and convenient for diverse audiences.
The versatility of IPTV M3U extends beyond traditional broadcasting, enabling users to consume content on their own terms. By understanding how IPTV M3U operates and its compatibility with different devices, individuals can harness the power of this technology to redefine their entertainment consumption. Embracing the potential of IPTV M3U opens the door to a new era of content delivery, offering unparalleled convenience and customization.
The Benefits of IPTV M3U
The benefits of IPTV M3U are multifaceted, offering users a seamless and personalized viewing experience. One of the primary advantages is the vast array of content available through IPTV M3U. Users can access international channels, live sports events, movies, and series from various genres, providing a diverse and comprehensive entertainment selection. This breadth of content ensures that users can find something to suit their preferences, transcending the limitations of traditional television.
Another notable benefit of IPTV M3U is its on-demand capabilities. Users can access a library of content to watch at their convenience, eliminating the need to adhere to fixed broadcasting schedules. This flexibility empowers users to tailor their viewing experience to their lifestyle, enhancing convenience and satisfaction. Additionally, IPTV M3U often offers high-definition streaming, delivering superior visual quality compared to traditional broadcasting methods. The seamless and high-quality streaming experience enhances the overall enjoyment of content, elevating the viewing experience for users.
Furthermore, IPTV M3U can be more cost-effective than traditional cable or satellite subscriptions. By leveraging internet-based streaming, users can potentially access a comparable or even expanded range of content at a lower cost. This affordability makes IPTV M3U an attractive option for budget-conscious consumers seeking high-quality entertainment without exorbitant expenses.
The Pitfalls of IPTV M3U
Despite its numerous benefits, IPTV M3U is not without its pitfalls. One of the key considerations is the technical complexity associated with setting up and maintaining IPTV M3U services. Users may encounter challenges in configuring their devices and accessing reliable IPTV M3U service providers. Additionally, the need for stable internet connectivity is crucial for uninterrupted streaming, presenting a potential hurdle for users in areas with inconsistent or limited internet access.
Another significant concern is the legal ambiguity surrounding IPTV M3U. While some IPTV M3U services operate within legal boundaries by securing rights to distribute content, others may infringe upon copyright laws by offering unauthorized access to copyrighted material. Users must exercise caution and ensure that they engage with legitimate IPTV M3U providers to avoid legal repercussions associated with copyright infringement.
Moreover, the potential security risks associated with IPTV M3U warrant attention. Users may be exposed to malicious software, phishing attempts, or unauthorized access to personal information when utilizing IPTV M3U services from unverified sources. Safeguarding personal data and maintaining a secure online environment is essential when engaging with IPTV M3U to mitigate these potential risks.
Legal considerations of IPTV M3U
The legal landscape surrounding IPTV M3U is complex and requires careful consideration. While some IPTV M3U services are licensed and operate within legal parameters, others may provide access to copyrighted content without the appropriate permissions. Users must be vigilant in discerning the legitimacy of IPTV M3U providers to avoid infringing upon intellectual property rights.
Ensuring compliance with copyright laws is imperative when utilizing IPTV M3U services. By engaging with authorized content providers and adhering to licensing agreements, users can enjoy IPTV M3U while respecting the intellectual property rights of content creators. Furthermore, staying informed about the legal implications of IPTV M3U usage and seeking out reputable and lawful service providers is essential to navigate the legal considerations associated with this technology.
How to set up IPTV M3U
Setting up IPTV M3U requires careful attention to technical details and the selection of reliable service providers. Users can begin by obtaining an IPTV M3U subscription from a reputable provider, ensuring that the service aligns with their viewing preferences and legal compliance. Once subscribed, users need to configure their compatible devices, such as smart TVs, streaming boxes, or media players, to access the IPTV M3U service.
Configuring the M3U playlist involves inputting the provided URL and ensuring compatibility with the selected device. Users may also need to install relevant applications or software to facilitate seamless streaming. It is crucial to follow the setup instructions provided by the IPTV M3U service provider to avoid technical complications and optimize the viewing experience.
Additionally, users should prioritize internet connectivity and network stability to support uninterrupted IPTV M3U streaming. Reliable internet access is essential for a smooth and enjoyable viewing experience, minimizing potential disruptions or buffering issues. By adhering to best practices for setting up IPTV M3U and selecting trusted service providers, users can maximize the benefits of this technology while mitigating potential pitfalls.
IPTV M3U vs Traditional TV
Comparing IPTV M3U to traditional television reveals distinct differences in content delivery and viewing experience. While traditional TV relies on scheduled broadcasting and limited channel selections, IPTV M3U offers on-demand access to a diverse range of content from around the world. This contrast underscores the flexibility and personalization inherent in IPTV M3U, catering to individual preferences and lifestyle demands.
Furthermore, IPTV M3U transcends geographical boundaries, allowing users to access international channels and content without the constraints of regional broadcasting. This global reach broadens the scope of entertainment options, providing users with a more comprehensive and inclusive viewing experience compared to traditional TV. The shift from scheduled programming to on-demand content empowers users to curate their entertainment consumption, aligning with modern trends emphasizing convenience and choice.
However, traditional TV maintains a level of familiarity and simplicity that may appeal to certain demographics. The ease of accessing broadcasted channels without the need for internet connectivity offers a straightforward and reliable viewing experience for individuals who prioritize traditional television formats. Understanding the distinctions between IPTV M3U and traditional TV enables users to make informed decisions based on their preferences and viewing habits.
IPTV M3U Subscription Services
IPTV M3U subscription services play a pivotal role in enabling users to access a diverse array of content through internet-based streaming. These services typically offer subscription plans tailored to varying preferences, encompassing different channel lineups, on-demand libraries, and additional features. Users can select subscription tiers that align with their viewing habits and budget, customizing their IPTV M3U experience to suit their individual needs.
When evaluating IPTV M3U subscription services, users should consider factors such as content variety, streaming quality, customer support, and legal compliance. Opting for reputable and licensed service providers ensures a reliable and lawful IPTV M3U experience, prioritizing user satisfaction and content authenticity. Additionally, transparent pricing structures and flexible subscription options empower users to make informed decisions when choosing an IPTV M3U service provider.
By leveraging subscription services, users gain access to an extensive catalog of content, including live TV channels, on-demand movies, series, and specialized programming. The diverse offerings available through IPTV M3U subscription services cater to a wide range of entertainment preferences, granting users the freedom to explore and discover new content while enjoying the convenience of internet-based streaming.
IPTV M3U Content Providers
IPTV M3U content providers serve as the primary source of television content accessible through IPTV M3U services. These providers curate and distribute an extensive range of channels, programs, and on-demand content, enriching the streaming experience for users. Content providers play a pivotal role in shaping the diversity and quality of available entertainment, influencing the appeal and value of IPTV M3U services.
Selecting reputable and diverse content providers is essential for users seeking a comprehensive and engaging IPTV M3U experience. By partnering with established and licensed content providers, IPTV M3U services can offer a rich assortment of channels, ensuring that users have access to their favorite programs and discovering new content across various genres. The collaboration between service providers and content providers fuels the dynamic IPTV M3U ecosystem, fostering a vibrant and ever-evolving landscape of entertainment options.
Users should prioritize engaging with IPTV M3U services that uphold partnerships with reputable content providers, prioritizing content authenticity, and legal compliance. By aligning with trustworthy content providers, users can enjoy a robust and diverse selection of television content through IPTV M3U, enhancing their viewing experience while supporting the integrity of intellectual property rights.
IPTV M3U Security and Privacy Concerns
The digital nature of IPTV M3U introduces security and privacy considerations that users must address to safeguard their online experience. When engaging with IPTV M3U services, users should prioritize the security of their personal information and digital devices to mitigate potential risks. Ensuring that IPTV M3U service providers implement robust security measures and encryption protocols is essential for protecting user data and privacy.
Potential security threats associated with IPTV M3U include malicious software, phishing attempts, and unauthorized access to personal information. Users must exercise caution when accessing IPTV M3U services, verifying the legitimacy and security measures implemented by service providers. Additionally, practicing safe internet usage habits, such as avoiding suspicious links and maintaining updated antivirus software, contributes to a secure IPTV M3U experience.
Privacy concerns also arise in the context of IPTV M3U, particularly regarding data collection and usage by service providers. Users should review the privacy policies of IPTV M3U services to understand how their data is handled and whether it is shared with third parties. By prioritizing transparency and data protection, users can make informed decisions about engaging with IPTV M3U services while safeguarding their privacy rights.
Conclusion
In conclusion, navigating the world of IPTV M3U entails a thorough understanding of its benefits, pitfalls, legal considerations, and operational aspects. The versatility and convenience of IPTV M3U offer users a transformative viewing experience, granting access to an extensive array of content personalized to their preferences. However, technical complexities, legal ambiguities, and security risks underscore the importance of informed decision-making and cautious engagement with IPTV M3U.
By comprehensively exploring the benefits and pitfalls of IPTV M3U, individuals can make informed choices about integrating this technology into their entertainment consumption. Understanding the legal considerations, setting up IPTV M3U, and selecting reputable subscription services and content providers empowers users to maximize the benefits of IPTV M3U while mitigating potential drawbacks. Prioritizing security and privacy concerns ensures that users can enjoy IPTV M3U in a safe and responsible manner, optimizing their viewing experience while respecting legal and ethical considerations.
Embracing the evolution of entertainment delivery through IPTV M3U requires a balanced approach that accounts for its advantages and challenges. By equipping themselves with the knowledge and considerations outlined in this guide, individuals can navigate the intricacies of IPTV M3U with confidence, unlocking its hidden potential while safeguarding their digital experience. As technology continues to shape the landscape of entertainment consumption, understanding IPTV M3U is essential for anyone seeking to embrace the future of streaming media.
<?php
/**
* Piwik - Open source web analytics
*
* @link http://piwik.org
* @license http://www.gnu.org/licenses/gpl-3.0.html GPL v3 or later
* @version $Id$
*
* @category Piwik
* @package Piwik
*/
/**
* Parent class of all plugins Controllers (located in /plugins/PluginName/Controller.php
* It defines some helper functions controllers can use.
*
* @package Piwik
*/
abstract class Piwik_Controller
{
/**
* Plugin name, eg. Referers
* @var string
*/
protected $pluginName;
/**
* Date string
*
* @var string
*/
protected $strDate;
/**
* Piwik_Date object or null if the requested date is a range
*
* @var Piwik_Date|null
*/
protected $date;
/**
* @var int
*/
protected $idSite;
/**
* @var Piwik_Site
*/
protected $site = null;
/**
* Builds the controller object, reads the date from the request, and extracts the plugin name from the class name.
*/
function __construct()
{
$this->init();
}
protected function init()
{
$aPluginName = explode('_', get_class($this));
$this->pluginName = $aPluginName[1];
$date = Piwik_Common::getRequestVar('date', 'yesterday', 'string');
try {
$this->idSite = Piwik_Common::getRequestVar('idSite', false, 'int');
$this->site = new Piwik_Site($this->idSite);
$date = $this->getDateParameterInTimezone($date, $this->site->getTimezone());
$this->setDate($date);
} catch(Exception $e){
// the date looks like YYYY-MM-DD,YYYY-MM-DD or other format
$this->date = null;
}
}
/**
* Helper method to convert "today" or "yesterday" to the default timezone specified.
* If the date is absolute, ie. YYYY-MM-DD, it will not be converted to the timezone
*
* @param string $date today, yesterday, YYYY-MM-DD
* @param string $defaultTimezone default timezone to use
* @return Piwik_Date
*/
protected function getDateParameterInTimezone($date, $defaultTimezone )
{
$timezone = null;
// if the requested date is not YYYY-MM-DD, we need to ensure
// it is relative to the website's timezone
if(in_array($date, array('today', 'yesterday')))
{
// today is at midnight; we really want to get the time now, so that
// * if the website is UTC+12 and it is 5PM now in UTC, the calendar will allow to select the UTC "tomorrow"
// * if the website is UTC-12 and it is 5AM now in UTC, the calendar will allow to select the UTC "yesterday"
if($date == 'today')
{
$date = 'now';
}
elseif($date == 'yesterday')
{
$date = 'yesterdaySameTime';
}
$timezone = $defaultTimezone;
}
return Piwik_Date::factory($date, $timezone);
}
/**
* Sets the date to be used by all other methods in the controller.
* If the date has to be modified, it should be called just after the controller construct
*
* @param Piwik_Date $date
* @return void
*/
protected function setDate(Piwik_Date $date)
{
$this->date = $date;
$strDate = $this->date->toString();
$this->strDate = $strDate;
}
/**
* Returns the name of the default method that will be called
* when visiting: index.php?module=PluginName without the action parameter
*
* @return string
*/
function getDefaultAction()
{
return 'index';
}
/**
* Given an Object implementing Piwik_View_Interface, we either:
* - echo the output of the rendering if fetch = false
* - returns the output of the rendering if fetch = true
*
* @param Piwik_ViewDataTable $view view object to use
* @param bool $fetch indicates whether to output or return the content
* @return string|void
*/
protected function renderView( Piwik_ViewDataTable $view, $fetch = false)
{
Piwik_PostEvent( 'Controller.renderView',
$this,
array( 'view' => $view,
'controllerName' => $view->getCurrentControllerName(),
'controllerAction' => $view->getCurrentControllerAction(),
'apiMethodToRequestDataTable' => $view->getApiMethodToRequestDataTable(),
'controllerActionCalledWhenRequestSubTable' => $view->getControllerActionCalledWhenRequestSubTable(),
)
);
$view->main();
$rendered = $view->getView()->render();
if($fetch)
{
return $rendered;
}
echo $rendered;
}
/**
* Returns a ViewDataTable object of an Evolution graph
* for the last30 days/weeks/etc. of the current period, relative to the current date.
*
* @param string $currentModuleName
* @param string $currentControllerAction
* @param string $apiMethod
* @return Piwik_ViewDataTable_GenerateGraphHTML_ChartEvolution
*/
protected function getLastUnitGraph($currentModuleName, $currentControllerAction, $apiMethod)
{
$view = Piwik_ViewDataTable::factory('graphEvolution');
$view->init( $currentModuleName, $currentControllerAction, $apiMethod );
// if the date is not yet a nicely formatted date range ie. YYYY-MM-DD,YYYY-MM-DD we build it
// otherwise the current controller action is being called with the good date format already so it's fine
// see constructor
if( !is_null($this->date))
{
$view->setParametersToModify(
$this->getGraphParamsModified( array('date' => $this->strDate))
);
}
return $view;
}
/**
* This method is similar to self::getLastUnitGraph. It works with API.get to combine metrics
* of different *.get reports. The returned ViewDataTable is configured with column
* translations and selectable metrics.
*
* @param string $currentModuleName
* @param string $currentControllerAction
* @param array $columnsToDisplay
* @param array $selectableColumns
* @param bool|string $reportDocumentation
* @param string $apiMethod The method to request the report from
* (by default, this is API.get but it can be changed for custom stuff)
* @return Piwik_ViewDataTable_GenerateGraphHTML_ChartEvolution
*/
protected function getLastUnitGraphAcrossPlugins($currentModuleName, $currentControllerAction,
$columnsToDisplay, $selectableColumns=array(), $reportDocumentation=false, $apiMethod='API.get')
{
// back up and manipulate the columns parameter
$backupColumns = false;
if (isset($_GET['columns']))
{
$backupColumns = $_GET['columns'];
}
$_GET['columns'] = implode(',', $columnsToDisplay);
// load translations from meta data
$idSite = Piwik_Common::getRequestVar('idSite');
$period = Piwik_Common::getRequestVar('period');
$date = Piwik_Common::getRequestVar('date');
$meta = Piwik_API_API::getInstance()->getReportMetadata($idSite, $period, $date);
$columns = array_merge($columnsToDisplay, $selectableColumns);
$translations = array();
foreach ($meta as $reportMeta)
{
if ($reportMeta['action'] == 'get' && !isset($reportMeta['parameters']))
{
foreach ($columns as $column)
{
if (isset($reportMeta['metrics'][$column]))
{
$translations[$column] = $reportMeta['metrics'][$column];
}
}
}
}
// initialize the graph and load the data
$view = $this->getLastUnitGraph($currentModuleName, $currentControllerAction, $apiMethod);
$view->setColumnsToDisplay($columnsToDisplay);
$view->setSelectableColumns($selectableColumns);
$view->setColumnsTranslations($translations);
if ($reportDocumentation)
{
$view->setReportDocumentation($reportDocumentation);
}
$view->main();
// restore the columns parameter
if ($backupColumns !== false)
{
$_GET['columns'] = $backupColumns;
}
else
{
unset($_GET['columns']);
}
return $view;
}
/**
* Returns the array of new processed parameters once the parameters are applied.
* For example: if you set range=last30 and date=2008-03-10,
* the date element of the returned array will be "2008-02-10,2008-03-10"
*
* Parameters you can set:
* - range: last30, previous10, etc.
* - date: YYYY-MM-DD, today, yesterday
* - period: day, week, month, year
*
* @param array $paramsToSet array( 'date' => 'last50', 'viewDataTable' =>'sparkline' )
* @throws Piwik_Access_NoAccessException
* @return array
*/
protected function getGraphParamsModified($paramsToSet = array())
{
if(!isset($paramsToSet['period']))
{
$period = Piwik_Common::getRequestVar('period');
}
else
{
$period = $paramsToSet['period'];
}
if($period == 'range')
{
return $paramsToSet;
}
if(!isset($paramsToSet['range']))
{
$range = 'last30';
}
else
{
$range = $paramsToSet['range'];
}
if(!isset($paramsToSet['date']))
{
$endDate = $this->strDate;
}
else
{
$endDate = $paramsToSet['date'];
}
if(is_null($this->site))
{
throw new Piwik_Access_NoAccessException("Website not initialized, check that you are logged in and/or using the correct token_auth.");
}
$paramDate = self::getDateRangeRelativeToEndDate($period, $range, $endDate, $this->site);
$params = array_merge($paramsToSet , array( 'date' => $paramDate ) );
return $params;
}
/**
* Given for example, $period = month, $lastN = 'last6', $endDate = '2011-07-01',
* It will return the $date = '2011-01-01,2011-07-01' which is useful to draw graphs for the last N periods
*
* @param string $period
* @param string $lastN
* @param string $endDate
* @param Piwik_Site $site
* @return string
*/
static public function getDateRangeRelativeToEndDate($period, $lastN, $endDate, $site )
{
$last30Relative = new Piwik_Period_Range($period, $lastN, $site->getTimezone() );
$last30Relative->setDefaultEndDate(Piwik_Date::factory($endDate));
$date = $last30Relative->getDateStart()->toString() . "," . $last30Relative->getDateEnd()->toString();
return $date;
}
/**
* Returns a numeric value from the API.
* Works only for API methods that originally returns numeric values (there is no cast here)
*
* @param string $methodToCall Name of method to call, eg. Referers.getNumberOfDistinctSearchEngines
* @return int|float
*/
protected function getNumericValue( $methodToCall )
{
$requestString = 'method='.$methodToCall.'&format=original';
$request = new Piwik_API_Request($requestString);
return $request->process();
}
/**
* Returns the current URL to use in a img src=X to display a sparkline.
* $action must be the name of a Controller method that requests data using the Piwik_ViewDataTable::factory
* It will automatically build a sparkline by setting the viewDataTable=sparkline parameter in the URL.
* It will also computes automatically the 'date' for the 'last30' days/weeks/etc.
*
* @param string $action Method name of the controller to call in the img src
* @param array $customParameters Array of name => value of parameters to set in the generated GET url
* @return string The generated URL
*/
protected function getUrlSparkline( $action, $customParameters = array() )
{
$params = $this->getGraphParamsModified(
array( 'viewDataTable' => 'sparkline',
'action' => $action,
'module' => $this->pluginName)
+ $customParameters
);
// convert array values to comma separated
foreach($params as &$value)
{
if(is_array($value))
{
$value = implode(',', $value);
}
}
$url = Piwik_Url::getCurrentQueryStringWithParametersModified($params);
return $url;
}
/**
* Sets the first date available in the calendar
*
* @param Piwik_Date $minDate
* @param Piwik_View $view
* @return void
*/
protected function setMinDateView(Piwik_Date $minDate, $view)
{
$view->minDateYear = $minDate->toString('Y');
$view->minDateMonth = $minDate->toString('m');
$view->minDateDay = $minDate->toString('d');
}
/**
* Sets "today" in the calendar. Today does not always mean "UTC" today, eg. for websites in UTC+12.
*
* @param Piwik_Date $maxDate
* @param Piwik_View $view
* @return void
*/
protected function setMaxDateView(Piwik_Date $maxDate, $view)
{
$view->maxDateYear = $maxDate->toString('Y');
$view->maxDateMonth = $maxDate->toString('m');
$view->maxDateDay = $maxDate->toString('d');
}
/**
* Sets general variables to the view that are used by
* various templates and Javascript.
* If any error happens, displays the login screen
*
* @param Piwik_View $view
* @throws Exception
* @return void
*/
protected function setGeneralVariablesView($view)
{
$view->date = $this->strDate;
try {
$view->idSite = $this->idSite;
if(empty($this->site) || empty($this->idSite))
{
throw new Exception("The requested website idSite is not found in the request, or is invalid.
Please check that you are logged in Piwik and have permission to access the specified website.");
}
$this->setPeriodVariablesView($view);
$rawDate = Piwik_Common::getRequestVar('date');
$periodStr = Piwik_Common::getRequestVar('period');
if($periodStr != 'range')
{
$date = Piwik_Date::factory($this->strDate);
$period = Piwik_Period::factory($periodStr, $date);
}
else
{
$period = new Piwik_Period_Range($periodStr, $rawDate, $this->site->getTimezone());
}
$view->rawDate = $rawDate;
if ($period instanceof Piwik_Period_Month) // show month name when period is for a month
{
$view->prettyDate = $period->getLocalizedLongString();
}
else
{
$view->prettyDate = $period->getPrettyString();
}
$view->siteName = $this->site->getName();
$view->siteMainUrl = $this->site->getMainUrl();
$datetimeMinDate = $this->site->getCreationDate()->getDatetime();
$minDate = Piwik_Date::factory($datetimeMinDate, $this->site->getTimezone());
$this->setMinDateView($minDate, $view);
$maxDate = Piwik_Date::factory('now', $this->site->getTimezone());
$this->setMaxDateView($maxDate, $view);
// Setting current period start & end dates, for pre-setting the calendar when "Date Range" is selected
$dateStart = $period->getDateStart();
if($dateStart->isEarlier($minDate)) { $dateStart = $minDate; }
$dateEnd = $period->getDateEnd();
if($dateEnd->isLater($maxDate)) { $dateEnd = $maxDate; }
$view->startDate = $dateStart;
$view->endDate = $dateEnd;
$language = Piwik_LanguagesManager::getLanguageForSession();
$view->language = !empty($language) ? $language : Piwik_LanguagesManager::getLanguageCodeForCurrentUser();
$this->setBasicVariablesView($view);
} catch(Exception $e) {
Piwik_ExitWithMessage($e->getMessage());
}
}
/**
* Set the minimal variables in the view object
*
* @param Piwik_View $view
*/
protected function setBasicVariablesView($view)
{
$view->topMenu = Piwik_GetTopMenu();
$view->debugTrackVisitsInsidePiwikUI = Piwik_Config::getInstance()->Debug['track_visits_inside_piwik_ui'];
$view->isSuperUser = Zend_Registry::get('access')->isSuperUser();
$view->isCustomLogo = Piwik_Config::getInstance()->branding['use_custom_logo'];
$view->logoHeader = Piwik_API_API::getInstance()->getHeaderLogoUrl();
$view->logoLarge = Piwik_API_API::getInstance()->getLogoUrl();
$view->enableFrames = Piwik_Config::getInstance()->General['enable_framed_pages']
|| @Piwik_Config::getInstance()->General['enable_framed_logins'];
if(!$view->enableFrames)
{
$view->setXFrameOptions('sameorigin');
}
}
/**
* Sets general period variables (available periods, current period, period labels) used by templates
*
* @param Piwik_View $view
* @throws Exception
* @return void
*/
public static function setPeriodVariablesView($view)
{
if(isset($view->period))
{
return;
}
$currentPeriod = Piwik_Common::getRequestVar('period');
$view->displayUniqueVisitors = Piwik::isUniqueVisitorsEnabled($currentPeriod);
$availablePeriods = array('day', 'week', 'month', 'year', 'range');
if(!in_array($currentPeriod,$availablePeriods))
{
throw new Exception("Period must be one of: ".implode(",",$availablePeriods));
}
$periodNames = array(
'day' => array('singular' => Piwik_Translate('CoreHome_PeriodDay'), 'plural' => Piwik_Translate('CoreHome_PeriodDays')),
'week' => array('singular' => Piwik_Translate('CoreHome_PeriodWeek'), 'plural' => Piwik_Translate('CoreHome_PeriodWeeks')),
'month' => array('singular' => Piwik_Translate('CoreHome_PeriodMonth'), 'plural' => Piwik_Translate('CoreHome_PeriodMonths')),
'year' => array('singular' => Piwik_Translate('CoreHome_PeriodYear'), 'plural' => Piwik_Translate('CoreHome_PeriodYears')),
// Note: plural is not used for date range
'range' => array('singular' => Piwik_Translate('General_DateRangeInPeriodList'), 'plural' => Piwik_Translate('General_DateRangeInPeriodList') ),
);
$found = array_search($currentPeriod,$availablePeriods);
if($found !== false)
{
unset($availablePeriods[$found]);
}
$view->period = $currentPeriod;
$view->otherPeriods = $availablePeriods;
$view->periodsNames = $periodNames;
}
/**
* Set metrics variables (displayed metrics, available metrics) used by template
* Handles the server-side of the metrics picker
*
* @param Piwik_View|Piwik_ViewDataTable $view
* @param string $defaultMetricDay name of the default metric for period=day
* @param string $defaultMetric name of the default metric for other periods
* @param array $metricsForDay metrics that are only available for period=day
* @param array $metricsForAllPeriods metrics that are available for all periods
* @param bool $labelDisplayed add 'label' to columns to display?
* @return void
*/
protected function setMetricsVariablesView(Piwik_ViewDataTable $view, $defaultMetricDay='nb_uniq_visitors',
$defaultMetric='nb_visits', $metricsForDay=array('nb_uniq_visitors'),
$metricsForAllPeriods=array('nb_visits', 'nb_actions'), $labelDisplayed=true)
{
// columns is set in the request if metrics picker has been used
$columns = Piwik_Common::getRequestVar('columns', false);
if ($columns !== false)
{
$columns = Piwik::getArrayFromApiParameter($columns);
$firstColumn = $columns[0];
}
else
{
// default columns
$firstColumn = isset($view->period) && $view->period == 'day' ? $defaultMetricDay : $defaultMetric;
$columns = array($firstColumn);
}
// displayed columns
if ($labelDisplayed
&& !($view instanceof Piwik_ViewDataTable_GenerateGraphData))
{
array_unshift($columns, 'label');
}
$view->setColumnsToDisplay($columns);
// Continue only for graphs
if(!($view instanceof Piwik_ViewDataTable_GenerateGraphData))
{
return;
}
// do not sort if sorted column was initially "label" or eg. it would make "Visits by Server time" not pretty
if($view->getSortedColumn() != 'label')
{
$view->setSortedColumn($firstColumn);
}
// selectable columns
if (isset($view->period) && $view->period == 'day')
{
$selectableColumns = array_merge($metricsForDay, $metricsForAllPeriods);
}
else
{
$selectableColumns = $metricsForAllPeriods;
}
$view->setSelectableColumns($selectableColumns);
}
/**
* Helper method used to redirect the current http request to another module/action
* If specified, will also redirect to a given website, period and /or date
*
* @param string $moduleToRedirect Module, eg. "MultiSites"
* @param string $actionToRedirect Action, eg. "index"
* @param string $websiteId Website ID, eg. 1
* @param string $defaultPeriod Default period, eg. "day"
* @param string $defaultDate Default date, eg. "today"
* @param array $parameters Parameters to append to url
*/
function redirectToIndex($moduleToRedirect, $actionToRedirect, $websiteId = null, $defaultPeriod = null, $defaultDate = null, $parameters = array())
{
if(is_null($websiteId))
{
$websiteId = $this->getDefaultWebsiteId();
}
if(is_null($defaultDate))
{
$defaultDate = $this->getDefaultDate();
}
if(is_null($defaultPeriod))
{
$defaultPeriod = $this->getDefaultPeriod();
}
$parametersString = '';
if(!empty($parameters))
{
$parametersString = '&' . Piwik_Url::getQueryStringFromParameters($parameters);
}
if($websiteId) {
$url = "Location: index.php?module=".$moduleToRedirect
."&action=".$actionToRedirect
."&idSite=".$websiteId
."&period=".$defaultPeriod
."&date=".$defaultDate
.$parametersString;
header($url);
exit;
}
if(Piwik::isUserIsSuperUser())
{
Piwik_ExitWithMessage("Error: no website was found in this Piwik installation.
<br />Check the table '". Piwik_Common::prefixTable('site') ."' that should contain your Piwik websites.", false, true);
}
$currentLogin = Piwik::getCurrentUserLogin();
if(!empty($currentLogin)
&& $currentLogin != 'anonymous')
{
$errorMessage = sprintf(Piwik_Translate('CoreHome_NoPrivilegesAskPiwikAdmin'), $currentLogin, "<br/><a href='mailto:".Piwik::getSuperUserEmail()."?subject=Access to Piwik for user $currentLogin'>", "</a>");
$errorMessage .= "<br /><br /> <b><a href='index.php?module=". Zend_Registry::get('auth')->getName() ."&action=logout'>› ". Piwik_Translate('General_Logout'). "</a></b><br />";
Piwik_ExitWithMessage($errorMessage, false, true);
}
Piwik_FrontController::getInstance()->dispatch(Piwik::getLoginPluginName(), false);
exit;
}
/**
* Returns default website that Piwik should load
*
* @return Piwik_Site
*/
protected function getDefaultWebsiteId()
{
$defaultWebsiteId = false;
// User preference: default website ID to load
$defaultReport = Piwik_UsersManager_API::getInstance()->getUserPreference(Piwik::getCurrentUserLogin(), Piwik_UsersManager_API::PREFERENCE_DEFAULT_REPORT);
if(is_numeric($defaultReport))
{
$defaultWebsiteId = $defaultReport;
}
Piwik_PostEvent( 'Controller.getDefaultWebsiteId', $defaultWebsiteId );
if($defaultWebsiteId)
{
return $defaultWebsiteId;
}
$sitesId = Piwik_SitesManager_API::getInstance()->getSitesIdWithAtLeastViewAccess();
if(!empty($sitesId))
{
return $sitesId[0];
}
return false;
}
/**
* Returns default date for Piwik reports
*
* @return string today, 2010-01-01, etc.
*/
protected function getDefaultDate()
{
// NOTE: a change in this function might mean a change in plugins/UsersManager/templates/userSettings.js as well
$userSettingsDate = Piwik_UsersManager_API::getInstance()->getUserPreference(Piwik::getCurrentUserLogin(), Piwik_UsersManager_API::PREFERENCE_DEFAULT_REPORT_DATE);
if($userSettingsDate === false)
{
return Piwik_Config::getInstance()->General['default_day'];
}
if($userSettingsDate == 'yesterday')
{
return $userSettingsDate;
}
// if last7, last30, etc.
if(strpos($userSettingsDate, 'last') === 0
|| strpos($userSettingsDate, 'previous') === 0)
{
return $userSettingsDate;
}
return 'today';
}
/**
* Returns default period for Piwik reports
*
* @return string day, week, month, year, or range
*/
protected function getDefaultPeriod()
{
$userSettingsDate = Piwik_UsersManager_API::getInstance()->getUserPreference(Piwik::getCurrentUserLogin(), Piwik_UsersManager_API::PREFERENCE_DEFAULT_REPORT_DATE);
if($userSettingsDate === false)
{
return Piwik_Config::getInstance()->General['default_period'];
}
if(in_array($userSettingsDate, array('today','yesterday')))
{
return 'day';
}
if(strpos($userSettingsDate, 'last') === 0
|| strpos($userSettingsDate, 'previous') === 0)
{
return 'range';
}
return $userSettingsDate;
}
/**
* Checks that the specified token matches the current logged in user token.
* Note: this protection against CSRF should be limited to controller
* actions that are either invoked via AJAX or redirect to a page
* within the site. The token should never appear in the browser's
* address bar.
*
* @throws Piwik_Access_NoAccessException if token doesn't match
* @return void
*/
protected function checkTokenInUrl()
{
if(Piwik_Common::getRequestVar('token_auth', false) != Piwik::getCurrentUserTokenAuth()) {
throw new Piwik_Access_NoAccessException(Piwik_TranslateException('General_ExceptionInvalidToken'));
}
}
}
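// Example usage (a minimal sketch): a plugin controller built on the helpers
// above. The plugin name "ExampleStats" and the API method
// "ExampleStats.getStats" are hypothetical placeholders for illustration.
//
// class Piwik_ExampleStats_Controller extends Piwik_Controller
// {
//     function getStats($fetch = false)
//     {
//         $view = Piwik_ViewDataTable::factory();
//         $view->init($this->pluginName, __FUNCTION__, 'ExampleStats.getStats');
//         $view->setColumnsToDisplay(array('label', 'nb_visits'));
//         return $this->renderView($view, $fetch);
//     }
// }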
beta tape to pc?
Discussion in 'OT Technology' started by Casino, Oct 2, 2005.
1. Casino (OT Supporter):
I have a bunch of home videos on Beta tapes, but my Beta player doesn't work anymore. I want to get them transferred to DVD. I called up a few places, but it was like $40 for one tape. Is there a way I can transfer the video from the Beta tapes to my computer? I already have a DVD burner, so I just need the video on my PC.
2. DAN513 (OT Supporter):
Unless you have something to play the tape with, you're hooped. Maybe eBay would have a cheap Beta player...
3. Casino (OT Supporter):
If I get a working Beta player, what else do I need?
4. DAN513 (OT Supporter):
A TV tuner card. They're cheap, less than $80.
5. tony (Guest):
You'll want to use a composite input, not an RF/TV input (if there is one); the quality will be a lot better.
6. Casino (OT Supporter):
What software will I need to copy the video to my PC?
7. tony (Guest):
Capture cards come with software.
Hi all,
We are using simple code for ZIP code validation, but now we want users to be able to type the word "NONE" as well.
How can this code be modified to do that?
case 'zip':
if ($inString=='') { appendError("Required Field"); }
else {
$length=strlen($inString);
switch($length)
{
default:
appendError("Must be 5 digit zip '00000', or zip+4 '00000-0000' or 'NONE'");
break;
case '5':
$pattern="^[[:digit:]][[:digit:]][[:digit:]][[:digit:]][[:digit:]]$";
$emailTest=ereg($pattern, $inString);
if (!$emailTest) { appendError("Only numbers allowed in zip"); }
break;
case '7':
$pattern="^[[:digit:]][[:digit:]][[:digit:]][[:space:]][[:digit:]][[:digit:]][[:digit:]]$";
$emailTest=ereg($pattern, $inString);
if (!$emailTest) { appendError("Canadian postal codes must be three digits, one space, three digits"); }
break;
case '10':
$pattern="^[[:digit:]][[:digit:]][[:digit:]][[:digit:]][[:digit:]]-[[:digit:]][[:digit:]][[:digit:]][[:digit:]]$";
$emailTest=ereg($pattern, $inString);
if (!$emailTest) { appendError("Not a valid Zip+4 format"); }
break;
Thanks
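One way to allow "NONE" is to short-circuit on the literal word before the length/format checks run. Below is a sketch that also replaces the deprecated ereg calls with preg_match; appendError is assumed to be your existing helper, and the "three digits, space, three digits" rule is kept from your original code (note that real Canadian postal codes actually alternate letters and digits, e.g. A1A 1A1, so adjust that pattern if you need true postal-code validation):
case 'zip':
    $candidate = strtoupper(trim($inString));
    if ($candidate == '') {
        appendError("Required Field");
    } elseif ($candidate == 'NONE') {
        // Accept the literal word NONE and skip all format checks.
    } elseif (preg_match('/^\d{5}$/', $candidate)) {
        // 5 digit US zip: 00000
    } elseif (preg_match('/^\d{5}-\d{4}$/', $candidate)) {
        // Zip+4: 00000-0000
    } elseif (preg_match('/^\d{3} \d{3}$/', $candidate)) {
        // Three digits, one space, three digits (original Canadian rule)
    } else {
        appendError("Must be 5 digit zip '00000', zip+4 '00000-0000', '000 000', or 'NONE'");
    }
    break;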
Framework
A framework is a robust and essential platform used for developing software applications. It provides a foundational structure that streamlines the development process, offering predefined classes and functions to handle common tasks. This structure promotes code reuse and adherence to best practices, leading to more efficient and maintainable code. Frameworks can be specialized for specific tasks, such as web development, mobile app development, or data analysis, and they often include libraries, tools, and APIs to simplify complex processes. By leveraging a framework, developers can focus more on the unique aspects of their applications rather than reinventing the wheel, ultimately speeding up development and ensuring higher quality and consistency in their software products.
Python Numeric Operations
The Python interpreter can act as a simple calculator: you type an expression at it and it will write the value.
Expression syntax is straightforward: the operators +, -, * and / work just like in most other languages (such as Pascal or C); parentheses can be used for grouping. For example:
>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5 # division always returns a floating point number
1.6
Note: the results of floating-point arithmetic can differ from machine to machine. We will cover how to control the display of floating-point results later.
Division (/) always returns a float. To get an integer result, discarding any fractional part, you can use the floor division operator //:
>>> 17 / 3 # classic division returns a float
5.666666666666667
>>>
>>> 17 // 3 # floor division returns the rounded-down result
5
>>> 17 % 3 # the % operator returns the remainder of the division
2
>>> 5 * 3 + 2
17
The equals sign (=) is used to assign a value to a variable. After an assignment, the interpreter displays no result before the next prompt.
>>> width = 20
>>> height = 5*9
>>> width * height
900
Python uses the ** operator to calculate powers:
>>> 5 ** 2 # 5 squared
25
>>> 2 ** 7 # 2 to the power of 7
128
A variable must be "defined" (assigned a value) before it is used, otherwise an error occurs:
>>> # try to access an undefined variable
... n
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined
Floating point numbers are fully supported; when numbers of mixed types are used in an expression, the integers are converted to floating point:
>>> 3 * 3.75 / 1.5
7.5
>>> 7.0 / 2
3.5
In interactive mode, the last printed expression is assigned to the variable _. This makes it easier to continue calculations when you are using Python as a desk calculator, for example:
>>> tax = 12.5 / 100
>>> price = 100.50
>>> price * tax
12.5625
>>> price + _
113.0625
>>> round(_, 2)
113.06
Here, the _ variable should be treated as read-only by the user. Don't explicitly assign a value to it: doing so would create an independent local variable with the same name, masking the built-in variable and its behavior.
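For example, the following session shows the shadowing behavior:
>>> _ = 42          # creates your own variable named _
>>> 2 + 2
4
>>> _               # now refers to your variable, not the last result
42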
Service Administration
Manage Features
In each release of Oracle Digital Assistant, there are sets of optional features that you can enable or disable. You do so by selecting a profile that contains the features you want to have enabled.
To change the optional features that are enabled:
1. In Oracle Digital Assistant, click the menu icon to open the side menu and select Settings > Feature Management.
2. From the Current profile dropdown, select the profile that corresponds with the features that you want enabled and disabled.
Audit Trail
Should you need to see a history of user activity in an instance of Oracle Digital Assistant and you have administrator privileges for the instance, you can view the instance's activity logs.
These logs capture granular detail of user sessions, such as listing, creating, editing, and deleting skills.
To browse the logs:
1. In the instance, click the menu icon to open the side menu and select Settings > Audit Trail.
2. If you want to see results for more than the current day, go to the Today dropdown and select a different date range.
3. Click + Criteria one or more times to create search criteria to home in on the type of activity that you want to view.
4. Click Search.
5. To see details for a log entry, click the entry.
Example: Searching for Delete Operations
Here's an example of how you can use the search feature to see all delete operations:
1. Click + Criteria.
2. In the Filter field, select Operation.
3. In the Operator field, select Starts With.
4. In the value field, enter Delete.
5. Click Search.
In the results for that search, you'll see entries for any operations with names beginning with Delete, such as DeleteSkill or DeleteSkillIntent.
Events for Digital Assistant Instances
You can create automation based on state changes for your Oracle Digital Assistant service instances by using event types, rules, and actions.
For information on how events work, see Overview of Events.
Event Types
These are the event types that Oracle Digital Assistant service instances emit:
Friendly name: event type
Change Digital Assistant Compartment Begin: com.oraclecloud.digitalassistant.changeodacompartment.begin
Change Digital Assistant Compartment End: com.oraclecloud.digitalassistant.changeodacompartment.end
Create Digital Assistant Instance Begin: com.oraclecloud.digitalassistant.createodainstance.begin
Create Digital Assistant Instance End: com.oraclecloud.digitalassistant.createodainstance.end
Delete Digital Assistant Instance Begin: com.oraclecloud.digitalassistant.deleteodainstance.begin
Delete Digital Assistant Instance End: com.oraclecloud.digitalassistant.deleteodainstance.end
Update Digital Assistant Instance: com.oraclecloud.digitalassistant.updateodainstance
Example Digital Assistant Service Instance Event
This is a reference event for Oracle Digital Assistant service instances.
{
"id": "ocid1.eventschema.oc1.phx.abyhqljrfajridyag4epdbthdjuhwgkwxxog32ed4e36yx2zotmphyxe3z5q",
"exampleEvent": {
"eventID": "unique_id",
"eventTime": "2019-10-09T13:58:03.575Z",
"contentType": "application/json",
"eventType": "com.oraclecloud.digitalassistant.createodainstance.end",
"cloudEventsVersion": "0.1",
"source": "DigitalAssistant",
"extensions": {
"compartmentId": "ocid1.compartment.oc1..unique_ID"
},
"eventTypeVersion": "2.0",
"data": {
"resourceName": "example_name",
"compartmentId": "ocid1.compartment.oc1..unique_ID",
"availabilityDomain": "all",
"compartmentName": "example_name",
"resourceId": "ocid1.odainstance.oc1.phx.unique_ID"
}
},
"serviceName": "Digital Assistant",
"displayName": "ODA Instance - Create End",
"eventType": "com.oraclecloud.digitalassistant.createodainstance.end",
"additionalDetails": [],
"timeCreated": "2019-10-09T13:58:03.575Z"
}
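To act on these events, you create a rule in the Events service whose condition matches the event type. For example, a condition like the following (a sketch; you can also add attribute filters, such as a compartmentId, under a data key) matches completed instance creations:
{
  "eventType": "com.oraclecloud.digitalassistant.createodainstance.end"
}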
Metrics, Alarms, Notifications, and Billing
You can monitor the health, performance, and usage of Oracle Digital Assistant service instances in Oracle Cloud Infrastructure by using metrics, alarms, and notifications.
For example, you can:
• See how many messages have been sent over a given period of time by users to skills and digital assistants in your service instance.
• See any errors that have occurred over a given period of time.
• Set alarms to alert you when any of these metrics hit a certain threshold.
For information on how these features work, see Monitoring Overview and Notifications Overview.
Digital Assistant Metrics
Oracle Digital Assistant metrics are emitted with the metric namespace oci_digitalassistant.
Here are the available metrics for Oracle Digital Assistant instances.
RuntimeRequests (Runtime Requests), unit: count
Number of runtime requests sent to the service. This includes:
• Messages sent by a user through a skill or digital assistant.
• Authentication and authorization attempts.
• Invocations of WebView components.
• Calls to the embedded container for custom code.
• Calls through the Skill Tester.
• Views of individual Insights reports.
• Notifications sent to users to initiate a conversation (through the Application-Initiated Conversations feature).
Dimensions: resourceId, resourceDisplayName, shape

RuntimeErrorResponses (Runtime Error Responses), unit: count
Number of runtime error responses returned during conversations with a skill or digital assistant. This includes API calls that return status codes of 400-499 and 500-599. Such errors may indicate a problem with a channel or its configuration.
Dimensions: resourceId, resourceDisplayName, shape, errorType

CustomComponentErrorResponses (Custom Component Error Responses), unit: count
Number of error responses received from custom components or from functions from the Functions service.
Dimensions: resourceId, resourceDisplayName, shape

CustomComponentRejectedResponses (Custom Component Rejected Responses), unit: count
Number of invalid responses received from custom components or functions from the Functions service. For example, this might include responses with a 200 status code but which are wrapped in malformed JSON.
Dimensions: resourceId, resourceDisplayName, shape
You can view metrics by individual service instance or in aggregated form for all instances.
View Metrics for a Single Instance
To view metrics for an individual service instance:
1. In the Infrastructure Console, click the Navigation menu icon on the top left to open the navigation menu, select Analytics & AI, and then click Digital Assistant.
2. Select the instance's compartment.
3. Select the instance.
4. Scroll down to the Metrics section of the page to view the metrics.
View Metrics for All Instances
To view aggregated metrics for all service instances:
1. In the Infrastructure Console, click the Navigation menu icon on the top left to open the navigation menu, select Observability & Management, and then click Service Metrics.
2. In the Compartments dropdown, select the compartment for which you want to view metrics.
3. In the Metric Namespace field, select oci_digitalassistant.
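You can also query the same namespace from the command line. The following sketch uses the OCI CLI's Monitoring commands (the compartment OCID is a placeholder) to sum the RuntimeRequests metric over one-hour intervals:
oci monitoring metric-data summarize-metrics-data \
  --compartment-id ocid1.compartment.oc1..example \
  --namespace oci_digitalassistant \
  --query-text 'RuntimeRequests[1h].sum()'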
Monitor Billing
The Infrastructure Console provides various billing and payment tools that make it easy to monitor your Oracle Digital Assistant billing, service costs, and usage.
To view your billing and usage, perform the following steps:
1. Sign in to Oracle Cloud as the cloud account administrator. You can find your account name and login information in your welcome email.
2. In the Infrastructure Console, click the Navigation menu icon on the top left to open the navigation menu, select Governance & Administration, and then select one of the following options:
• Cost Analysis: provides easy-to-use visualization tools to help you track and optimize your spending.
• Cost and Usage Report: view comma-separated value (CSV) files that can be used to get detailed breakdowns of resources for audit or invoice reconciliation.
Note
The first time you access usage reports, you must create a policy in your root compartment. Follow the instructions on the Usage Report page to create the policy, copying the statements as directed.
• Budgets: set thresholds for your spending. You can set alerts on your budget to let you know when you might exceed your budget, and you can view all of your budgets and spending from one single place.
• Invoices: view and download invoices for your usage.
For more information on the billing and payment tools, see Billing and Payment Tools Overview.
Stop and Start Instances
You can stop and start instances of Oracle Digital Assistant.
When you stop an instance, the instance's state changes to INACTIVE, which means that the instance can't be accessed and any metering is suspended. Starting an instance returns it to the ACTIVE state, making it available to users and resuming metering.
To stop or start an instance:
1. In the Infrastructure Console, click the Navigation menu icon on the top left to open the navigation menu, select Analytics & AI, and select Digital Assistant (which appears under the AI Services category on the page).
2. Select the instance's compartment.
3. Select the instance.
4. Click the Stop or Start button.
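If you manage instances from the command line, the equivalent operations look like the following sketch (the instance OCID is a placeholder, and the availability of these Digital Assistant CLI commands in your tenancy is an assumption):
oci oda instance stop --oda-instance-id ocid1.odainstance.oc1.phx.example
oci oda instance start --oda-instance-id ocid1.odainstance.oc1.phx.example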
Delete an Instance
To permanently delete an instance of Oracle Digital Assistant:
1. In the Infrastructure Console, click the Navigation menu icon on the top left to open the navigation menu, select Analytics & AI, and select Digital Assistant (which appears under the AI Services category on the page).
2. Select the instance's compartment.
3. Select the instance.
4. From the More Actions menu, select Delete.
Break Glass
Oracle Break Glass for Oracle Digital Assistant enables you to securely restrict Oracle's access to your cloud environment.
The Break Glass for Oracle Digital Assistant feature is enabled if you have a Digital Assistant instance that is paired with a Fusion-based Oracle Cloud Applications subscription that includes Break Glass.
When you use Break Glass, Oracle Support representatives can access your cloud environment only after relevant approvals and authorization to troubleshoot any issues that may arise in your cloud environment.
Break Glass has these primary features:
• Temporary access approval, in which Oracle personnel can only access instance data through a strict customer approval process. Typically, such a process would only be initiated to help resolve a customer service request.
Such access is time limited. Any temporary access credentials are automatically reset after the agreed upon time.
Such access is logged and detailed reports are available.
• The option to upload your own Transparent Data Encryption (TDE) master encryption key.
By default, your data in the Oracle Cloud environment is encrypted at rest using TDE.
With Break Glass, you can upload your own TDE master encryption key and manage its lifecycle. If you provide your own key, your data will also be protected and audited using Data Vault. You can also periodically update the keys.
Temporary Access Approval
If you submit a service request (SR) and Oracle Support determines that it needs access to some of your data for debugging purposes, you can agree to give them temporary access to your service instance data. Here's the general flow of the process:
1. You submit an SR.
2. If Oracle Support determines that they need access to any of your data for debugging purposes, they will contact your administrator via email for approval to conduct a Break Glass session. (The email has a link to the Temporary Access Approval page of your Digital Assistant, where your administrator can click Approve or Reject.)
3. If your administrator approves the request, a temporary password is generated to enable Oracle Support to start a Break Glass session, in which they can access the required data.
4. Once Oracle support completes its work in the Break Glass session, they terminate the session. If they don't explicitly terminate the session, it expires automatically within the timeframe that you have agreed upon.
Provide Your Own Key
By default, Oracle provides and manages the TDE keys for encrypting the data in your Digital Assistant instance.
If your instance has Break Glass enabled, you can also replace the Oracle-provided private key with your own, which also enables you to rotate the keys as you require.
Note
When you first switch to using your own key, you need to allow some time for your instance to be out of service. You should also back up any key artifacts in your instance.
Create and Import Your TDE Master Key
To provide your own key, follow these steps:
1. In Oracle Digital Assistant, click the menu icon to open the side menu and select Settings > Break Glass.
2. On the Provide Your Own Key Page, click + Provide Your Own Key.
3. Click Public Key to download the Oracle public wrapping key that you will need to encrypt your own transparent data encryption (TDE) master key.
4. Use OpenSSL to generate and encrypt your key (an equivalent Python sketch follows these steps):
1. Create a new directory for the key and assign it to an environment variable:
$ mkdir -p dir_of_key
$ export KEYPATH=dir_of_key
2. Make sure the directory is restricted:
$ chmod go-rwx $KEYPATH
3. Generate the TDE master key:
$ openssl rand 32 > $KEYPATH/clearkey
4. Encrypt your generated TDE master key with the Oracle public wrapping key that you downloaded in step 3 (saved here as $KEYPATH/wrappingkey):
$ openssl pkeyutl -encrypt -in $KEYPATH/clearkey -inkey $KEYPATH/wrappingkey -pubin -pkeyopt rsa_padding_mode:oaep -pkeyopt rsa_oaep_md:sha256 > $KEYPATH/wrappedkey
5. In the External Key Data Source field, upload the encrypted TDE master key (e.g. wrappedkey, as in the above example).
6. In the Email Address field, enter the email address of the person to notify when the reconfiguration of the Digital Assistant instance has finished and the instance is ready to be used again.
7. Click Submit and then Confirm.
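For reference, the same wrapping operation can be written in code. Below is a minimal sketch using Python's cryptography package; the file names (wrappingkey.pem, wrappedkey) mirror the OpenSSL steps above and are placeholders, not names mandated by Oracle:
# Sketch: generate a 32-byte TDE master key and wrap it with the downloaded
# Oracle public wrapping key using RSA-OAEP with SHA-256, mirroring the
# `openssl pkeyutl` invocation above. File names are placeholders.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

clear_key = os.urandom(32)  # equivalent of `openssl rand 32`

with open("wrappingkey.pem", "rb") as f:   # the public key downloaded in step 3
    wrapping_key = serialization.load_pem_public_key(f.read())

wrapped = wrapping_key.encrypt(
    clear_key,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

with open("wrappedkey", "wb") as f:        # upload this file in step 5
    f.write(wrapped)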
Update the Key
If you have previously provided your own TDE key for your Digital Assistant instance, you can update that key.
1. In Oracle Digital Assistant, click the icon to open the side menu and select Settings > Break Glass.
2. On the Provide Your Own Key Page, click + Update the key.
3. Click Public Key to download the Oracle public wrapping key that you will need to encrypt your own transparent data encryption (TDE) master key.
4. Use OpenSSL to generate and encrypt your key:
1. Create a new directory for the key and assign it to an environment variable:
$ mkdir -p dir_of_key
$ export KEYPATH=dir_of_key
2. Make sure the directory is restricted:
$ chmod go-rwx $KEYPATH
3. Generate the TDE master key:
$ openssl rand 32 > $KEYPATH/clearkey
4. Encrypt your generated TDE master key with the Oracle public wrapping key that you downloaded in step 3 (saved here as $KEYPATH/wrappingkey):
$ openssl pkeyutl -encrypt -in $KEYPATH/clearkey -inkey $KEYPATH/wrappingkey -pubin -pkeyopt rsa_padding_mode:oaep -pkeyopt rsa_oaep_md:sha256 > $KEYPATH/wrappedkey
5. In the External Key Data Source field, upload the encrypted TDE master key.
6. Click Submit and then Confirm.
Note
Once you create or update your key, you have to wait 16 days or more before you can update it again.
Disaster Recovery
Oracle Digital Assistant has a high-availability (HA) architecture to protect against disasters and to recover smoothly from those that do occur. Here are some facets of the architecture of Oracle Cloud Infrastructure and Digital Assistant that are used to prevent and mitigate disasters:
• Oracle Cloud Infrastructure is divided into regions. Each region is separated from other regions by great distances, meaning that disasters such as earthquakes and major weather events that may negatively impact service in one region are extremely unlikely to affect the other regions.
• Within each data center, there are three fault domains, each of which is a physically separate grouping of hardware and infrastructure with its own power supply and cooling.
• The architecture of a single Digital Assistant instance is spread among different fault domains with automated backup, which makes it resilient against any disasters that may occur in that region.
Cross-Region Failover
Oracle Digital Assistant is architected for high availability (HA). However, if you need to ensure that your instance can still function if a disaster strikes your instance's region, you can request to have cross-region failover set up.
When cross-region failover is set up and the primary instance goes down:
• Any runtime requests to the primary instance are redirected to the backup instance.
• A banner appears in the Digital Assistant UI that notes that the backup instance is being used.
• You should not do any work on skills, digital assistants, channels, Insights, or other artifacts (whether through the UI or through REST APIs) in the backup instance. Any changes you make in the backup instance will not be preserved when the primary instance is restored.
When the outage ends:
• Service to the primary instance is restored.
• Any Insights data that has accumulated on the backup instance is preserved and combined with existing Insights data associated with the primary instance.
• Artifacts such as skills and digital assistants are restored to the state they were in when the primary instance went down. (Practically speaking, this simply means any changes that you happen to make to these artifacts in the backup instance won't be preserved.)
Set Up Failover
To set up cross-region failover:
1. File a service request (SR) for cross-region failover and, in the request, mention the instance URL of the primary Digital Assistant instance.
2. Once the Support team has responded to you with information on which backup regions are available, subscribe to a backup region in the OCI Console.
The Support team will then create the backup instance.
During the failover setup, a system-level skill (named Echo) is set up in the instance you have specified and exposed through a web channel (named heartbeat) in that instance. From the backup region, the primary instance is then regularly polled for its health status through this skill.
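To make the polling idea concrete, this kind of health check can be sketched in a few lines of Python. The heartbeat URL below is hypothetical, and the real poller is operated by Oracle, so this is illustrative only:
# Illustrative sketch only: the actual cross-region poller is managed by Oracle.
# The heartbeat channel URL below is a hypothetical placeholder.
import time
import requests

HEARTBEAT_URL = "https://primary-oda-instance.example.com/heartbeat"  # hypothetical

def primary_is_healthy():
    try:
        return requests.get(HEARTBEAT_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

while True:
    if not primary_is_healthy():
        print("primary unreachable - runtime traffic would be redirected to the backup")
    time.sleep(60)  # poll once a minute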
Private Endpoint
You can set up a private endpoint to give your Oracle Digital Assistant secure access to backend services that are not exposed to the public internet.
For example, you might need to set up a private endpoint to be able to connect to an on-premises database, or a database running in an Oracle Cloud Infrastructure VCN, that you need to use for SQL Dialog skills. Or you may need to connect to a REST service that's on-premises or in a VCN.
Set Up a Private Endpoint
To set up a private endpoint for Digital Assistant, you follow these general steps:
1. Make sure that you have the required permissions to configure private endpoints and attach them to Digital Assistant instances.
2. If you don't already have them in place, on the OCI Console, create a virtual cloud network (VCN) and its associated resources, including:
• At least one subnet.
• Route tables to route the traffic through the subnet to its destinations.
• Security lists or network security groups to establish a set of ingress and egress security rules that you'll use for the private endpoint.
• Optionally, an Internet gateway to give Internet access to the VCN.
• Optionally, a NAT (Network Address Translation) gateway, which gives resources that don't have public IP addresses access to the Internet without exposing them to incoming Internet connections.
See the OCI documentation for VCNs and subnets.
3. Create the private endpoint and associate it with your Digital Assistant instance.
4. In Digital Assistant, configure a data service or REST service that points to the endpoint.
Permissions for Private Endpoints
To set up private endpoints, you need to have the proper permissions in the Infrastructure Console.
There are two resource types for private endpoints that encompass these required permissions:
• oda-private-endpoints - enables you to configure private endpoints and SCAN proxies.
• oda-private-endpoint-attachments - enables you to attach a private endpoint to a Digital Assistant instance.
Permissions for those resource types are also part of the oda-family resource type. So if you are covered by a policy statement to manage oda-family resource types in the compartment where your private endpoint is, you don't have to create separate policies for your private endpoints.
The following are examples of broad policies to enable creation and configuration of private endpoints and attach them to Digital Assistant instances.
allow group <group-name> to manage oda-private-endpoints in compartment <private-endpoint-compartment>
allow group <group-name> to manage oda-private-endpoint-attachments in compartment <private-endpoint-compartment>
For more detail on how policies work, see Digital Assistant Policies.
Create a Policy to Access a Private Endpoint
1. In the Infrastructure Console, click Navigation menu icon on the top left to open the navigation menu, select Identity & Security, and then click Policies.
A list of the policies in the compartment you're viewing is displayed.
2. From the list of compartments, select the compartment to which you want to attach the policy. This controls who can later modify or delete the policy (see Policy Attachment).
3. Click Create Policy.
4. Complete the wizard, making sure that the name you provide is unique across all policies in your tenancy.
Create a Private Endpoint
1. In the Infrastructure Console, click Navigation menu icon on the top left to open the navigation menu, select Analytics & AI, and select Digital Assistant (which appears under the AI Services category on the page).
2. In the left navigation of the AI Services page that appears, click Private endpoints.
3. If you haven't already done so, create the compartment where you want to keep the private endpoint and, optionally, add the VCN and subnet you will be using to that compartment.
See Understanding Compartments and Managing Compartments.
4. Click Create private endpoint and fill in the required fields, including the VCN and private subnet.
5. Once the endpoint is created, click Associate ODA Instance, select the compartment that contains the Digital Assistant instance that you want to be able to use the private endpoint, and then select that instance.
Add a Service for the Private Endpoint in Digital Assistant
Once you have created a private endpoint, you need to add a service for that private endpoint to use it in Digital Assistant.
SCAN Proxies for Private Endpoints
If you are using your private endpoint for a RAC-enabled database, you also need to configure a SCAN proxy for the private endpoint.
To set up a SCAN proxy:
1. Get the SCAN DNS name and port number for the database.
• If the database is an on-premises database, get it from the database administrator.
• If the database is on OCI, do the following in the Infrastructure Console:
1. Navigate to the DB System Details page for the database and select the DB system information tab.
2. In the Network section of the page, copy the SCAN DNS name and paste it in a convenient place.
3. Note the Port Number.
2. In the Infrastructure Console, click Navigation menu icon on the top left to open the navigation menu, select Analytics & AI, and select Private endpoints (which appears under the AI Services category on the page).
3. Select your private endpoint.
4. In the Resources section of the page, select SCAN proxies.
5. Click Add SCAN proxy.
6. In the Add SCAN proxy dialog, select the type (FQDN (for fully-qualified domain name) or IP address) and then fill in the rest of the required fields.
• If you have selected FQDN as the proxy type, use the database's SCAN DNS name for the Host name and the database's port number as the Port.
• If you have selected IP address as the proxy type, click Add SCAN Listener to add IP addresses and port numbers for one or more SCAN listeners in the database.
Further Administration Information
Once you have set up your Oracle Digital Assistant instance and its users, you may wish to delve further into setup of your account. Here are some topics containing more details on administering services in Oracle Cloud Infrastructure that you may wish to explore:
Programmatic Creation and Management of Skills and Digital Assistants
The Digital Assistant Service Instance API enables you to manage the creation, updating, and deleting of skills, digital assistants, and channels programmatically. In addition, you can manage other resources in your instance that your skills depend on, such as authorization services and translation services.
You can access the API through multiple SDKs and a CLI. See the OCI Developer Tools and Resources page for the details.
Packaged Skills
If you are managing multiple Digital Assistant instances, you can programmatically manage packages for those instances as well.
A package can contain some combination of skills and digital assistants as well as specify any required resources, such as translation services, authorization services, and custom parameters that are required for the skills.
You can manage the importing and updating of these packages through the Digital Assistant Service Instance API.
For information on working with the API and the SDKs and the CLI that are based on that API, see the OCI Developer Tools and Resources page.
Importing and Managing Packages
In general, the process for importing packages using the API (either directly or via the CLI or one of the SDKs) is:
1. If it doesn't yet exist, create the Oracle Digital Assistant instance where you want to import the package.
1. Call CreateOdaInstance to create the instance.
2. From the response to the CreateOdaInstance call, take the opc-work-request-id response header value and use it to call GetWorkRequest to monitor the progress of the instance creation operation.
3. Once the instance creation has completed, use the value of the odaInstanceId attribute that was returned in the response body to call GetOdaInstance.
2. Call ListPackages to see what packages are available for the instance (or instances) that you specify.
3. For any available packages that you want to import, call GetPackage to get the package's import contract.
The import contract specifies conditions that need to be satisfied before you can import the package. This might include things like specifying an auth provider and filling in values for custom parameters.
4. Satisfy the import contract.
You do so by constructing a payload that provides values for all of the required parameters in the import contract. The payload might look something like this:
{
"packageId": "<packageId-OCID>",
"parameterValues": {
"authProvider.providerX.clientSecret": "some value",
"authProvider.providerX.authorizationEndpointUrl": "http://host:80/file",
"authProvider.providerX.revokeEndpointUrl": "http://host:80/file",
"authProvider.providerX.allowedScopes": "some value",
"authProvider.providerX.tokenEndpointUrl": "http://host:80/file",
"authProvider.IDCS_OAuthForIDR.allowedScopes": "some value",
"authProvider.providerX.clientId": "some value",
"skillParameter.da.backendRestEndPoint": "http://host:80/file",
}
}
To simplify this task, the GetPackage response contains a section called defaultParameterValues that you can use to assemble the parameter value portion of the payload.
5. Import the package into the instance(s).
1. Call CreateImportedPackage using the payload you just assembled.
2. From the response to the CreateImportedPackage call, take the opc-work-request-id response header value and use it to call GetWorkRequest to monitor the progress of the package import operation.
3. Once the package import has completed, use the value of the odaInstanceId attribute that was returned in the response body to call GetImportedPackage to view the package details. (A sketch of this create-then-poll pattern appears below.)
If an update for a package is available, you can add that updated package to the instance through the UpdateImportedPackage operation.
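The create-then-poll pattern used in steps 1 and 5 above can be sketched as follows. Here call_create and get_work_request are hypothetical stand-ins for whichever SDK or REST binding you use; only the flow itself (create, read opc-work-request-id, poll GetWorkRequest until done) comes from the steps above:
# Sketch of the create-then-poll flow described above. call_create and
# get_work_request are hypothetical stand-ins for your SDK/REST client.
import time

def wait_for_work_request(get_work_request, work_request_id, interval=10):
    # Poll GetWorkRequest until the asynchronous operation finishes.
    while True:
        wr = get_work_request(work_request_id)
        if wr["status"] in ("SUCCEEDED", "FAILED", "CANCELED"):
            return wr
        time.sleep(interval)

# 1. Create the instance (or imported package) and capture the work request id.
response = call_create(payload)                        # hypothetical helper
work_request_id = response.headers["opc-work-request-id"]

# 2. Poll until the operation completes, then fetch the created resource.
result = wait_for_work_request(get_work_request, work_request_id)
if result["status"] != "SUCCEEDED":
    raise RuntimeError("operation failed: %s" % result)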
Improve performance when writing a large number of message files to disk
Problem:
A channel that is writing a large number of files to disk will get progressively slower as the number of files in the target directory gets large. This is because your computer's operating system must keep track of all of the files in a directory. Moving or deleting files from the target directory on a regular basis will ensure that the performance of this channel is not adversely affected.
This one can sneak up on you: initially channel performance will be fine, but it gradually degrades as the number of files in the target directory increases.
This usually occurs when using a To File or To Translator (with net file functions) and writing individual messages to separate files. However, it can occur whenever you are writing a large number of files to disk.
Solution:
Use a purge script to regularly delete or move old files from the target directory. This will reduce the operating system overhead and maintain consistent performance.
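As an illustration, such a purge script can be a few lines of Python; the directory path and retention period below are examples to adapt to your channel:
# Minimal purge-script sketch: delete files older than RETENTION_DAYS
# from the channel's target directory. Path and retention are examples.
import os
import time

TARGET_DIR = "/data/channel_out"    # hypothetical target directory
RETENTION_DAYS = 7
cutoff = time.time() - RETENTION_DAYS * 24 * 3600

for entry in os.scandir(TARGET_DIR):
    if entry.is_file() and entry.stat().st_mtime < cutoff:
        os.remove(entry.path)       # or move to an archive directory instead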
1. kidtreo's Avatar
Greetings!
For reasons of hating on the AT&T, I suddenly find myself getting ready to JB + UL 2 iPhones - a 2G and a 3G.
BadAsh, perhaps you or someone else could tell me what the best process is step wise to do so?
My impression is:
1) JB both phones.
2) Sync each phone to its respective library of data / apps via iTunes 8.
3) Run Unlock Apps (best?) and switch carriers.
Questions!
Will I be able to connect to and use the App Store on the iPhone even after unlocking to a different carrier?
Will I be able to use the app store via iTunes after JBing and unlocking?
Do I need to prevent iTunes from connecting to phobos.apple.com on launch?
Can I continue to sync to Mobile Me server?
Anything else I need to watch out for to prevent bricking? (I rely on my phone and data A LOT and am concerned I may "screw the pooch" if I'm not careful.)
Many thanks all!!!
09-21-2008 04:04 PM
2. Jeremy's Avatar
1. Yes
2. Yes but you can not unlock the iPhone 3g ~ not possible.
3. Yes
4. Don't mess it up... follow the on screen directions.
09-25-2008 10:07 AM
3. zachyzissou's Avatar
1. Yes
2. Yes but you can not unlock the iPhone 3g ~ not possible.
3. Yes
4. Don't mess it up... follow the on screen directions.
By "you cannot unlock the iPhone 3G"... what exactly does that mean? I was under the impression you could. Or are you referring to the fact that you cannot use the functionality that the above user was talking about?
09-29-2008 12:21 PM
4. Jeremy's Avatar
What that means is you can not unlock the iPhone 3g to use with another carrier besides AT&T here in the USA.
09-29-2008 01:34 PM
Typescript support for extended Styled Components
Hi, I've been using the macro to extend components and have recently started also using Typescript and I'm getting errors from the tsc when using extended components:
So, this works in regular JSX:
const PinkButton = tw(Button)`bg-pink`;
but when it becomes TSX then I get this error:
Argument of type 'FunctionComponent<Props>' is not assignable to parameter of type 'TemplateStringsArray'.
The workaround for this is to import styled and call it like this:
const PinkButton = styled(Button)`${tw`bg-pink`}`;
Which is great for now but everywhere else I'm using tw.div etc and would like to be able to consistently use tw instead of having to import styled when extending components.
https://github.com/ben-rogerson/twin.macro/pull/24 fixed other issues but there's still this case.
ben-rogerson/twin.macro
Answer from A-Shleifman:
As a temporary solution we can declare a modified module:
// twin.d.ts
import { ComponentType } from 'react';
import { TwFn, TemplateFn, TwComponentMap } from 'twin.macro';

declare module 'twin.macro' {
  type TwComponentWrapper = <T extends ComponentType<any>>(component: T) => TemplateFn<T>;
  const tw: TwFn & TwComponentMap & TwComponentWrapper;
  export = tw;
}
@ben-rogerson, how about updating the type with this?
HTTP packets are not being sent
I am writing an application for Mac OS X Mavericks. It should be able to send POST and GET requests to a server. First of all, I am trying to send a POST request, but unfortunately it does not work and I do not know why. When I try to capture the packets with the WireShark sniffer, it shows that no packets are sent. Here is my code:
UInt32 Login(const std::wstring& wstrIpAddress)
{
size_t size = wstrIpAddress.length();
//create request body
CFStringRef bodyString = CFSTR("");
CFDataRef bodyData = CFStringCreateExternalRepresentation(kCFAllocatorDefault,
bodyString,
kCFStringEncodingUTF8,
0);
if(bodyData == NULL)
{
cout << "CFStringCreateExternalRepresentation() could not convert the characters to the specified encoding. ""Error code: "<< errno << endl;
return 2;
}
//create headers
CFStringRef headerFieldName = CFSTR("Content-Type");
CFStringRef headerFieldValue = CFSTR("application/x-www-form-urlencoded;charset=base64");
//fix coding wstrIpAddress from wstring to UInt8
CFStringEncoding encoding =
(CFByteOrderLittleEndian == CFByteOrderGetCurrent()) ? kCFStringEncodingUTF32LE : kCFStringEncodingUTF32BE;
CFIndex wstrIpAddressLen = (wcslen(wstrIpAddress.c_str())) * sizeof(wchar_t);
CFStringRef myUrl = CFStringCreateWithBytes(kCFAllocatorDefault,
(UInt8*)wstrIpAddress.c_str(),
wstrIpAddressLen,
encoding,
false);
if(myUrl == NULL)
{
cout << "CFStringCreateWithBytes() had a problem with creating the object. Error code: " << errno << endl;
return 3;
}
//CFStringRef urlEscaped = CFURLCreateStringByAddingPercentEscapes(NULL, myUrl, NULL, NULL, kCFStringEncodingUTF8);
CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault, myUrl, 0);
if(url == NULL)
{
cout << "CFURLCreateWithString() had a problem with creating the object. Error code: " << errno << endl;
return 4;
}
CFStringRef requestMethod = CFSTR("POST");
CFHTTPMessageRef myRequest = CFHTTPMessageCreateRequest(kCFAllocatorDefault, requestMethod, url, kCFHTTPVersion1_1);
if(myRequest == NULL)
{
cout << "CFHTTPMessageCreateRequest() had a problem with creating the object. Error code: " << errno << endl;
return 5;
}
CFDataRef bodyDataExt = CFStringCreateExternalRepresentation(kCFAllocatorDefault,
bodyString,
kCFStringEncodingUTF8,
0);
if(bodyDataExt == NULL)
{
cout << "CFStringCreateExternalRepresentation() could not convert the characters to the specified encoding. ""Error code: "<< errno << endl;
return 6;
}
CFHTTPMessageSetBody(myRequest, bodyDataExt);
CFHTTPMessageSetHeaderFieldValue(myRequest, headerFieldName, headerFieldValue);
CFReadStreamRef myReadStream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault, myRequest);
//CFErrorRef copyError = CFReadStreamCopyError(myReadStream);
//cout << copyError << endl;
CFRelease(myRequest);
myRequest = NULL;
//you could release the CFHTTP request object immediately after calling CFReadStreamCreateForHTTPRequest
//Because the read stream opens a socket connection with the server specified by the myUrl parameter when the CFHTTP request was created, some
//amount of time must be allowed to pass before the stream is considered to be open
//Opening the read stream also causes the request to be serialized and sent
Boolean bRes = CFReadStreamOpen(myReadStream);
if(bRes == FALSE)
{
CFStreamError myErr = CFReadStreamGetError(myReadStream);
// An error has occurred.
if (myErr.domain == kCFStreamErrorDomainPOSIX) {
// Interpret myErr.error as a UNIX errno.
}
else if (myErr.domain == kCFStreamErrorDomainMacOSStatus) {
// Interpret myErr.error as a MacOS error code.
OSStatus macError = (OSStatus)myErr.error;
// Check other error domains.
}
cout << "ReadStreamOpen error with error code: " << errno << endl;;
return 7;
}
//UInt32 myErrCode = CFHTTPMessageGetResponseStatusCode(myReadStream);
/*if(myErrCode != 200)
{
cout << "Request faild\n" << endl;
return 1;
}*/
//release the message objects
CFRelease(url);
CFRelease(bodyString);
CFRelease(bodyData);
CFRelease(headerFieldName);
CFRelease(headerFieldValue);
CFRelease(bodyDataExt);
bodyDataExt = NULL;
return 0;
}
Solution
I found the answer.
1. The server address, for example, must be in this format:
http://10.0.0.25, and NOT just the bare IP like this -> 10.0.0.25.
2. We must also check the status code from the response, which confirms that the connection succeeded.
// The status code lives on the response header message, not on the stream itself,
// so copy it out of the stream first (kCFStreamPropertyHTTPResponseHeader).
CFHTTPMessageRef myResponse = (CFHTTPMessageRef)CFReadStreamCopyProperty(myReadStream, kCFStreamPropertyHTTPResponseHeader);
UInt32 myErrCode = CFHTTPMessageGetResponseStatusCode(myResponse);
if(myErrCode != 200)
{
cout << "Request failed\n" << endl;
return 1;
}
Processing Several Arrays Simultaneously
When several arrays are processed at the same time, a suitable traversal scheme must be chosen for each array: give each array its own index and make sure that index never runs past the array bounds. In some special cases a single index is enough to process several arrays, because the elements are processed "synchronously": knowing the index of an element in one array, the index of the corresponding element in the other array can be computed by some formula. If no such formula can be established, the arrays are said to be processed "asynchronously".
Example: Given an array of integers, build a second array containing the even elements of the first one, placing the elements in the second array:
a) at the same positions as in the first array;
b) shifted toward the beginning of the array.
Solution:
Variant 1:
const nn = 30;
var a, b: array [1..nn] of integer;
i, n: integer;
begin
write ('enter the number of array elements: ');
readln (n);
for i := 1 to n do
begin
read (a[i]);
if a[i] mod 2 = 0 then b[i] := a[i];
end;
for i := 1 to n do
write (b[i], ' ');
end.
Variant 2:
const nn = 30;
var a, b: array [1..nn] of integer;
i, k, n: integer;
begin
write ('enter the number of array elements: ');
readln (n);
for i := 1 to n do
read (a[i]);
k := 0; {array b has no elements yet}
for i := 1 to n do
if a[i] mod 2 = 0 then begin
k := k + 1;
b[k] := a[i];
end;
for i := 1 to k do
write (b[i], ' ');
end.
Search Problems for Arrays
A frequent programming task is to find the first element equal to a given x. The following approaches can be used to solve it:
- linear search;
- search with a sentinel (barrier);
- dichotomous (binary) search.
Linear Search
This kind of search was already considered among the standard problems on building loop algorithms. Recall that such a search has two distinct reasons for stopping:
The element has been found (this is programmed with a Boolean flag)
All elements have been examined but the element sought was never found
(i > n)
const n = 20; {number of elements in the array}
var a: array [1..n] of integer; {source array}
i, x: integer;
f: boolean;
begin
write ('enter the element sought: ');
readln (x);
writeln ('enter the array elements: ');
for i := 1 to n do
readln (a[i]);
f := false; {element not found yet}
i := 1;
while (i <= n) and not f do
if a[i] = x then f := true {found}
else i := i + 1; {move on to the next element}
if f then writeln ('found the element at index ', i)
else writeln ('no such element')
end.
Search with a Sentinel (Barrier)
Here the widely used trick of a dummy element, or sentinel (barrier), placed at the end of the array is applied. The sentinel simplifies the loop-termination condition, since it is known in advance that at least one element equal to x is present in the array. The array size must be increased by 1 to hold it.
const n = 20; {number of elements in the array}
var a: array [1..n + 1] of integer; {source array}
i, x: integer;
begin
write ('enter the element sought: ');
readln (x);
writeln ('enter the array elements: ');
for i := 1 to n do
readln (a[i]);
a[n + 1] := x; {a sentinel equal to x is placed at the end of the array}
i := 1;
while a[i] <> x do
i := i + 1; {move on to the next element}
if i <> n + 1 then writeln ('found the element at index ', i)
else writeln ('no such element');
end.
Dichotomous (Binary) Search (searching a sorted array)
The binary search algorithm is quite simple. Split the array in half and determine which half may contain the element sought. Since the array is sorted, comparing the element sought with the middle element of the array immediately identifies the half of interest. The chosen half is then split in half again, and the half that may contain the element is determined once more. This process continues until either the element is found or the left boundary of the current segment becomes greater than the right one; in the latter case we conclude that the element is not in the array.
const n = 20; {number of elements in the array}
var a: array [1..n] of integer; {source array}
i, j, x, k, m: integer;
left, right, mid: integer; {left and right boundaries and middle of the segment}
f: boolean;
begin
write ('enter the element sought: ');
readln (k);
for i := 1 to n do
readln (a[i]);
{sort the array in ascending order}
for i := 1 to n - 1 do
begin
m := i;
for j := i + 1 to n do
if a[j] < a[m] then m := j;
x := a[i];
a[i] := a[m]; a[m] := x
end;
{search for the element}
f := false; {element not found yet}
left := 1; right := n;
repeat {search in the part of the array from a[left] to a[right]}
mid := (left + right) div 2;
if k < a[mid] then right := mid - 1 {element is in the left part}
else if k > a[mid] then left := mid + 1 {element is in the right part}
else f := true; {found}
until f or (left > right);
if f then writeln ('the element at index ', mid: 2, ' equals the element sought ', k)
else writeln ('not found');
end.
Sorting Arrays
Insertion Sort
The array is divided into two parts: a sorted part and an unsorted part. Elements are taken from the unsorted part one at a time and inserted into the sorted part so that its ordering is not violated.
Initially the algorithm takes only the first element as the sorted part of the array, and all the remaining elements as the unsorted part.
Thus the algorithm consists of n - 1 passes (n is the array size), each comprising four actions:
Take the next (i-th) unsorted element and save it in an auxiliary variable.
Find the position j in the sorted part of the array at which the taken element can be placed without violating the ordering.
Shift the elements of the array from the (i - 1)-th down to the j-th one position to the right to free the insertion slot.
Insert the taken element into the found j-th position.
[The original page shows a worked trace of the five insertion passes on a sample array, followed by a flowchart of the algorithm; both are figures that did not survive extraction.]
A program implementing this algorithm looks as follows.
Program InsertionSort;
Uses Crt;
Const
n = 10; {array length}
Type
TVector = array [1..n] of Real;
Var
Vector : TVector;
B : Real;
i, j, K : Integer;
Begin
ClrScr;
Writeln ('Enter the array elements:');
For i := 1 to n do Read (Vector[i]); Readln;
{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - }
for i := 2 to n do
begin
B := Vector[i]; {take the next unsorted element}
{loop searching for the insertion position}
j := 1;
While (B > Vector[j]) do
j := j + 1; {after the loop, index j marks the insertion position}
{loop shifting elements to free the insertion slot}
for K := i - 1 downto j do
Vector[K + 1] := Vector[K];
{insert the taken element into the found position}
Vector[j] := B;
End;
{ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - }
Writeln ('Sorted array:');
For i := 1 to n do Write (Vector[i] : 8 : 2);
Writeln;
End.
Sample run:
Enter the array elements: … Sorted array: … (the sample values are shown as a screenshot in the original).
Selection Sort
Find (select) the element with the minimum value in the range from the 1st to the n-th (last) element and swap it with the first element. On the second step, find the minimum element in the range from the 2nd to the n-th element and swap it with the second element. Continue in the same way for all elements up to the (n - 1)-th.
[The original page shows a worked trace of the seven selection passes (the current minimum highlighted at each step) and a flowchart of the algorithm; both are figures that did not survive extraction.]
A program implementing the selection method looks as follows:
Program SelectionSort;
Uses Crt;
const
n = 20; {array length}
type
TVector = array [1..n] of Real;
Var
Vector : TVector;
Min : Real;
Imin, S : Integer;
i : Integer;
Begin
ClrScr;
Writeln ('Enter the array elements:');
for i := 1 to n do Read (Vector[i]); Readln;
{ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - }
for S := 1 to n - 1 do
begin
{find the minimum element in the range from}
{the S-th element to the n-th}
Min := Vector[S];
Imin := S;
For i := S + 1 to n do
If Vector[i] < Min then
begin
Min := Vector[i];
Imin := i;
End;
{swap the minimum and the S-th element}
Vector[Imin] := Vector[S];
Vector[S] := Min;
End;
{ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - }
Writeln ('Sorted array:');
For i := 1 to n do Write (Vector[i] : 8 : 2);
Writeln;
End.
Sample run:
Enter the array elements: … Sorted array: … (the sample values are shown as a screenshot in the original).
Solving Polynomial Systems with Nspire CAS
ACA 2013
Applications of Computer Algebra
Session: Computer Algebra in Education
Malaga, Spain, July 2-6
Solving Polynomial Systems with Nspire CAS
Michel Beaudin, Gilles Picard, Geneviève Savard
ÉTS, Montréal, Canada
Abstract
Nspire CAS can solve polynomial systems, using the Gröbner/Buchberger elimination method, but users don't have access to an explicit function like the one found in Derive (the so-called "Groebner_basis" function). Thus we cannot see how the method is used when solving polynomial systems with Nspire CAS. In this talk, we will show typical examples of polynomial systems that arise when teaching the Lagrange multipliers technique. In the first part, the example will emphasise the importance of checking solutions and examining the problem graphically. The second example, in part two, will show a classic optimization problem where we will analyze the answer given by the commands "solve" and "zeros": we will find one wrong solution and some solutions will be missing (but simple parametric equations of the constraint will help us find the right answer). Using Derive's Groebner_basis function, we will try to show what can yield this problem.
When teaching row-reduced echelon form to students, we tell them that this is the way a linear system should be solved in general – instead of constantly applying the (black box) "solve" command. In the case of polynomial systems, access to a "Gröbner basis function" would be, for users, an important tool for understanding results obtained by the Nspire CAS system.
Keywords
Polynomial systems, solving facilities, Lagrange multipliers, Gröbner basis.
Overview
Introduction
Solving Polynomial Systems
Example 1: Everything is OK but…
Example 2: Something goes wrong.
Gröbner Bases for Example 2
Conclusion
Introduction
Each of our students (future engineers) has a TI-Nspire CX CAS handheld calculator on his desk.
So, the “temptation” for using the “solve” or “zeros” command is great.
Our students are using one of these commands every time they need to solve an equation or a system of equations.
Introduction
We don’t want to restrict its use. Instead, we would like to guide its use.
Very rarely, they are told to verify their answers (even for a single equation).
In the case of a single equation in one variable, teachers should ask them to check graphically the answer they have found.
Introduction
For a linear system of equations, the Gauss-Jordan method works perfectly and yields all solutions.
Students are still learning how to perform the row operations by hand. But they rapidly make use of the “rref” command (row reduced echelon form)…
At the end, they (only) need to be able to write down the answer in parametric form (in the case of infinity of solutions).
Introduction
For a polynomial system of equations, if no initial guesses are specified, Nspire CAS uses the lexical Gröbner/Buchberger elimination method to attempt to determine all solutions.
It would be fine to have access to such a function.
As we said before, we will present 2 examples.
Introduction
The first one will look “complicated” because we will need to solve 5 equations. But Nspire CAS will easily solve the system, in exact mode, and get the right solution.
So everything will be OK. But the example will force us to question the answers given by a CAS system.
The second one will consist of 3 equations: a surprise will appear…
Introduction
Let us mention that the examples we will show are a direct consequence of teaching mathematics with technology.
Each of the two examples is interesting without technology. But with technology, the interest is greater.
Introduction
A brief explanation about the theory of Gröbner bases.
Let’s take a simple example. Suppose one needs to solve the following non linear (but polynomial) system
Introduction
We are in the case of a finite number of solutions. The idea: replace the original system by an equivalent one.
With the following property: the first equation is now univariate. Selecting an order for the variable, one can obtain
Introduction
Here is how to do it:
Introduction
We first solve the univariate equation in x (shown as an image in the original) and then we substitute each value of x into the other basis equation to find the corresponding values of y.
More details: Gröbner Bases: A Short Introduction for Systems Theorists by Bruno Buchberger. http://people.reed.edu/~davidp/pcmi/buchberger.pdf
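The slides display the worked system as images that did not survive extraction, but the idea is easy to reproduce with a CAS that exposes Gröbner bases directly. Here is a minimal sketch using SymPy on a small hypothetical system (x^2 + y^2 = 5, x*y = 2), chosen only for illustration:
# Hypothetical system used only to illustrate lexicographic elimination;
# the slides' own system is an image and not recoverable here.
from sympy import symbols, groebner, solve

x, y = symbols('x y')
system = [x**2 + y**2 - 5, x*y - 2]

g = groebner(system, x, y, order='lex')   # lex basis: last element is univariate in y
univariate = g.exprs[-1]

for y0 in solve(univariate, y):           # solve the univariate polynomial first...
    for x0 in solve(g.exprs[0].subs(y, y0), x):
        print(x0, y0)                     # ...then back-substitute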
Solving Polynomial Systems: Example 1
The problem: the plane z = 1 + x + y intersects the cone z² = x² + y² (the cone's equation is an image in the original; this form is consistent with the solution points found below). Find the points on the curve of intersection that are the closest and the farthest from the origin.
Possible approach: apply Lagrange multipliers technique.
Solving Polynomial Systems: Example 1
Recall about Lagrange’s method.
Here we have 2 constraints. In the case of one constraint, here is the idea. Suppose you need to find the extreme values of f(x, y) under the
constraint g(x, y) = c.
Solving Polynomial Systems: Example 1
Now suppose we want to find the extreme values of f(x, y, z) subject to two constraints g(x, y, z) = 0, h(x, y, z) = 0. Let P = (x, y, z) be a solution point.
If such a point P exists, then (assuming that the gradient vectors are not zero and not parallel) there exist numbers l and m (called Lagrange multipliers) such that
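The displayed condition is an image in the original; written out (in the slides' notation, with multipliers l and m), it is the standard two-constraint Lagrange condition:
\nabla f = l\,\nabla g + m\,\nabla h, \qquad g(x,y,z) = 0, \qquad h(x,y,z) = 0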
Solving Polynomial Systems: Example 1
Taking into account the 2 constraints, we need to solve the following system:
Solving Polynomial Systems: Example 1
In our example, let f, co1 and co2 be defined as shown (the definitions are images in the original; see the reconstruction below).
Note that f is the expression we want to optimize; co1 = 0 is the first constraint and co2 = 0 is the second constraint.
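From the stated plane, the cone, and the solution points reported below, the definitions are presumably the squared distance to the origin and the two constraint expressions:
f(x,y,z) = x^2 + y^2 + z^2, \qquad \mathrm{co1} = x + y + 1 - z, \qquad \mathrm{co2} = x^2 + y^2 - z^2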
Solving Polynomial Systems: Example 1
We need to find the (real) zeros of the following system:
This yields the system:
Solving Polynomial Systems: Example 1
Here is the Nspire CAS solution (without looking at the graphs).
We have used the Lagrange multipliers technique, solving 5 equations.
2 solutions are obtained by the CAS.
Solving Polynomial Systems: Example 1
[Nspire CAS screenshots — callouts: the expression we want to optimize; the two constraints; applying Lagrange's method; syst1 is the polynomial system we need to solve; the solutions of the system; we extract x, y and z from each row of the matrix mat1, giving the two points p1 and p2.]
Closest point seems to be P1 = (-0.29, -0.29, 0.41).
Farthest point seems to be P2 = (-1.71, -1.71, -2.41).
Solving Polynomial Systems: Example 1
So Nspire CAS has found 2 solutions for system “syst1” of 5 equations in 5 unknowns. These are, in fact, the two (real) solutions that exist.
The “zeros” command did a good job. But do we have a closest point and a farthest point?
In fact, P1 is the closest point from the origin. P2 is NOT the farthest point: it is a local minimum.
Solving Polynomial Systems: Example 1
The “educational aspect” is the following: the method of Lagrange multipliers gives a constrained extremum only if one exists.
Let us say it again: one point (P1) is the closest one to the origin. There is NO farthest point from the origin!
You won't discover this if you don't push this problem a little further. For doing so, the CAS can (again) be very useful.
Solving Polynomial Systems: Example 1
We can use the CAS system to plot both surfaces in a first attempt.
Then we can try to find parametric equations for the curve of intersection.
If you give this kind of problem to your students, they need to know how to plot a surface with their CAS system.
Solving Polynomial Systems: Example 1
In order to plot the curve of intersection, they need to find ─ themselves ─ parametric equations for the space curve.
This curve is a hyperbola. On this curve, we can go as far as we want from the origin: there is no farthest point from the origin.
One thing is sure. Without technology, it is difficult ─ but possible ─ to solve this example.
Solving Polynomial Systems: Example 1
Here is what we can obtain: we have used Nspire CAS (OS 3.2). The plane has been entered in function mode, the cone using parametric equations.
Solving Polynomial Systems: Example 1
Now, here are some parametric equations for the curve of intersection:
Students can find this by using a substitution of the z-value of the plane into the cone equation.
Solving Polynomial Systems: Example 1
In detail:
Another way to see this:
Solving Polynomial Systems: Example 1
In conclusion, here are the two surfaces, the curve of intersection and the 2 points:
P1 = (−0.29 ,−0.29, 0.41) (closest one)
P2 = (−1.71, −1.71, −2.41)
Solving Polynomial Systems: Example 1
Final remark about example 1: technology allows us to link single and multiple variable calculus.
Once parametric equations have been found, we can compute the norm of the position vector and find its minimum value again.
The following Nspire CAS screens show this.
Solving Polynomial Systems: Example 2
As long as the “zeros” (or “solve”) command gives the user correct answers, there is no motivation to try to understand which algorithm the “solve” command is using when solving polynomial systems.
This algorithm is “Gröbner Bases” and Nspire CAS is using it. But users don’t have access to a “Gröbner basis function”.
Solving Polynomial Systems: Example 2
It is true that this stuff is not an undergraduate subject but a “Gröbner basis function” would play a role like the “rref” command plays for linear systems.
Consequently, we would like Texas Instruments to make this function available for the user.
Example 2 will show that such a function would be useful.
Solving Polynomial Systems: Example 2
Recall from calculus: when we want to find the absolute (global) maximum value of a function f of two variables over a closed and bounded domain D, we need to consider the interior of D and its boundary.
Because a continuous function defined over a compact set (closed and bounded in this case) reaches a global maximum value, such a point exists. It can be inside the domain or on the boundary.
Solving Polynomial Systems: Example 2
We first need to find the critical points located inside the domain D.
The second derivative test for functions of 2 variables can be used to classify these points.
And Lagrange multiplier can be used for the boundary.
Solving Polynomial Systems: Example 2
And if the boundary consists of a circle (as in the next example), parametric equations can be used instead of Lagrange’s method.
This will be our first attempt to locate the maximum value on the boundary.
This way, we will already know what is the maximum value of our function. Lagrange’s method should eventually confirm this…
Solving Polynomial Systems: Example 2
Here is the example. The temperature at a point (x, y) in the plane is given by a polynomial expression te(x, y) (the formula is an image in the original).
We want to find the location of the hottest point on the disk (also given as an image; from the parametrization used below, it is presumably (x − 1)² + (y − 1)² ≤ 9).
Solving Polynomial Systems: Example 2
Three critical points are located inside the disk: (0, 0), (1, 1) and (-1, -1).
Solving Polynomial Systems: Example 2
The points (1, 1) and (-1, -1) are local minimums where both temperatures are 0. And (0, 0) is a saddle point where the temperature value is 2.
This is a consequence of the second derivative test for local extreme values.
But we don't need to recall this. We want to find the maximum temperature; the point (2, 2) is inside the disk, and the temperature there (the value is an image in the original) already exceeds the values at the critical points.
Solving Polynomial Systems: Example 2
So the absolute maximum is achieved on the circular boundary
As said before, we won’t start by using Lagrange’s method. Instead, let’s use parametric equations for the circle.
We can use the basic trig identity cos²(t) + sin²(t) = 1.
Solving Polynomial Systems: Example 2
Using x = 1 + 3 cos(t) and y = 1 + 3 sin(t) where t is the parameter (−π ≤ t ≤ π), we can plot the graph of the temperature as a function of t.
We will show that the maximum (temperature) value is 243.98 and is achieved twice:
when t = − 0.0541, the corresponding point on the circle being (3.996, 0.8379);
when t = 1.625, the corresponding point on the circle being (0.8379, 3.996).
We will see that the minimum value on the boundary is 0.1325 and is achieved at the value t = − 2.356. The corresponding point on the circle is (−1.12, −1.12) .
There is also a local minimum of 152.868 at t = 0.7854; the point on the circle is (3.12, 3.12).
Solving Polynomial Systems: Example 2
[Graphing screens — the temperature along the boundary as a function of t, with the global maximum (reached twice), the local minimum, and the global minimum on the boundary marked.]
So, parametric equations have transformed this multiple variable calculus problem into a single variable problem.
Solving Polynomial Systems: Example 2
Using Lagrange's method with level curves, this should allow us to locate 4 solutions. Let's see this using Derive (we need a fast implicit plotter here).
We will plot the constraint, the 4 points on it and some level curves of the temperature function.
At each point, we should observe that one gradient vector is a multiple of the other.
Solving Polynomial Systems: Example 2
Now let’s use Lagrange’s method algebraically. We want to optimize the function te(x, y) under the constraint gwhere
Lagrange’s method (one constraint) tells us to solve the system
Solving Polynomial Systems: Example 2
Here is the surprise: Nspire CAS finds 3 solutions (not 4) and one is wrong (does not satisfy the constraint). The 2 others are good but are minimums (the local one and the global one as seen before).
So, with Lagrange, Nspire CAS does not find the absolute maximum!
Strange and important thing: the wrong solution is given numerically and the 2 good ones are in exact mode.
The following screen shows all this.
Solving Polynomial Systems: Example 2
[Nspire CAS screenshot — the 3 solutions according to Nspire CAS, in columns x, y, l; one is marked "Wrong solution!", two are marked "Good solutions"; but 2 good solutions are missing, and this is where the maximum value is achieved.]
Gröbner Bases for Example 2
In order to see what went wrong, we will solve the former system, using a Gröbner basis provided by Derive.
Some interesting mathematical aspects will be revealed.
Let us recall that the system we need to solve is the following (we have copied the system from Nspire CAS to Derive).
Gröbner Bases for Example 2
[Derive screenshot — callout: recall that Nspire CAS also found these 2 good solutions before.]
Important to say: in exact mode, only 2 answers will be shown by Derive.
Increasing the precision and using approximate mode, 4 answers will come out:
With 30 digits, 4 solutions are found by Derive
Gröbner Bases for Example 2
A Gröbner basis can help us to understand what caused the bug in Nspire CAS. If we use the lexicographic order l, x, y, the Gröbner basis consists of 3 elements.
This is probably not the Gröbner basis Nspire CAS used because it would have given the 4 solutions. Let's see this:
Gröbner Bases for Example 2
[Derive screenshot — the three basis elements grob11(y), grob12(x, y) and grob13(l, y), computed with the variable order l, x, y.]
Solving grob11(y) = 0 yields the 4 expected values of y, say y1, y2, y3 and y4. Then solving grob12(x, y) = 0 for each value of y yields the 4 expected values of x. Finally, for each of the 4 values of y, solving grob13(y, l) = 0 yields 3 (not 4) corresponding values of l. So the Gröbner basis "grob1" would have led to the 4 correct answers:
Gröbner Bases for Example 2
Now, let's change the order used by the lexical elimination method.
Let's use the order x, y, l. In this case, the Gröbner basis provided by Derive ─ we will call it "grob2" ─ consists of 4 elements. We will copy it to Nspire CAS. The first one ("grob21") is a polynomial of degree 5 in l. The next two are in the (y, l) variables. The fourth contains x, y and l.
Gröbner Bases for Example 2
Then we make the same analysis as before.
[Screenshots of the same computation in Derive and in Nspire CAS.]
Gröbner Bases for Example 2
Nspire CAS finds the 3 real roots of grob21(l) because grob21 factors into a quadratic polynomial (with real roots l1 and l2) multiplied by a cubic polynomial (and l3 is the only real root of this cubic polynomial).
The problem is when we substitute the value l3 into grob22(y, l): it should be identically 0 BECAUSE l3 is a root of the corresponding factor (the equation itself is an image in the original).
Gröbner Bases for Example 2
[Derive screenshots — callouts: this is l3 ≈ 42.029…; after the substitution of l3 into grob22, expression #4 simplifies to 0∙y + e, where e is the constant in #6; approximating e using 10 digits.]
In exact mode ─ and this is why Derive earlier returned only 2 solutions ─ there is no way to find out that grob22(y, l3) ≡ 0 (nested radicals are causing problems to CAS systems!).
The constant e in #6 is 0, but this can't be verified in exact mode. Increasing the precision to 20 digits, Derive approximates grob22(y, l3) to 0:
Gröbner Bases for Example 2
Also, note the following approximations of l3 while increasing the precision in Derive:
Gröbner Bases for Example 2
Now let’s switch to Nspire CAS and substitute l3 into grob22. What happens to grob22(y, l3)? We know that this should be identically 0. In Derive, this was 0∙y + e with e approximating to 0.
The fact that Derive was able to find 0∙y + e was crucial, because then NO spurious value of y is produced by grob22(y, l3). If, instead of 0∙y + e, we get e1∙y + e2, then a value of y results from solving for y — and if e2 = 0, this value will be 0…!
Gröbner Bases for Example 2
On both the software version and the handheld, the computational accuracy of Nspire CAS is the following: floating-point (decimal) values in memory are stored using up to 14 digits with a 3-digit exponent.
When a floating-point value is displayed, the displayed value is rounded as specified by the applicable mode settings with a maximum of 12 digits.
In the next Nspire CAS session (screen), we have set "Display Digits" to "Fix 12" for a better understanding.
Gröbner Bases for Example 2
[Nspire CAS screenshots — with l3 displayed using 12 digits, solving grob22(y, l3) = 0 yields y = 0; with l3 using 14 digits, solving grob22(y, l3) = 0 yields y = -1.]
Gröbner Bases for Example 2
And because of the 12 digit limitation, the value 0 was retained. So let’s set y3 = 0 and l3 = 42.0293624379. We substitute into grob24 and solve for x:
Gröbner Bases for Example 2
So, this is a possible explanation for the bug when Lagrange multipliers technique was applied to the original polynomial system:
Gröbner Bases for Example 2
We will never know if Nspire CAS used the Gröbner basis "grob2". But if that was the case, Nspire CAS "has forgotten" to use the element grob23…!
This is where the 2 missing solutions are coming from and, to be consistent, Nspire CAS should also have found grob23(y3, l3) = 0:
[Screenshot — the evaluation that should be 0.]
Conclusion
Without a continuous use of computer algebra in our teaching, these problems would not have been related.
Moreover, our examples showed that MORE mathematics instead of LESS mathematics can be taught when computer algebra is available on the desk for each student.
Conclusion
We do hope that a (built-in) “Gröbner basis function” will soon be available for the Nspire CAS users.
In the case of a polynomial system having a finite number of solutions, example 2 showed the importance of this kind of function because we needed to deal first with a univariate polynomial.
Conclusion
Our example 1 showed that technology, in particular 3D surfaces and space curves, can be a useful tool for analysis.
Also, example 1 was an opportunity to link single and multiple variable calculus (rarely seen in textbooks…).
Conclusion
So, not only can we face heavy computational problems, but we can also explore different areas.
But (as a suggestion for TI) having access to more digits of precision (on the software version) would be useful.
Thank You!
ViewPager tab-change fade-in/fade-out animation like the LinkedIn intro screen
I want to implement the same kind of animation that LinkedIn uses in its Android app for its intro (login / sign-up) screen.
I am using a ViewPager for the intro screen and I want a fade-in/fade-out animation on the background image as it changes when you swipe right-to-left or vice versa. Any help is appreciated. Take a look at my layout code:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >

    <ImageView
        android:id="@+id/background_image"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:scaleType="centerCrop" />

    <LinearLayout
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical"
        android:weightSum="7" >

        <LinearLayout
            android:id="@+id/linearLayout1"
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_marginRight="10dp"
            android:layout_weight="1"
            android:gravity="right"
            android:orientation="horizontal" >

            <ImageView
                android:id="@+id/imageView2"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center"
                android:layout_marginRight="5dp"
                android:src="@drawable/icon_skip" />

            <TextView
                android:id="@+id/skip_tv"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center"
                android:text="Skip"
                android:textAppearance="?android:attr/textAppearanceMedium"
                android:textColor="@android:color/white" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="4"
            android:gravity="bottom"
            android:orientation="vertical" >

            <ImageView
                android:id="@+id/imageView3"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center"
                android:src="@drawable/logo" />

            <android.support.v4.view.ViewPager
                xmlns:tools="http://schemas.android.com/tools"
                android:id="@+id/pager"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                tools:context="com.xyz.View.IntroductionScreen" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="0dp"
            android:layout_weight="2"
            android:gravity="center"
            android:orientation="vertical" >

            <Button
                android:id="@+id/connection_bt"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:layout_marginBottom="10dp"
                android:layout_marginLeft="40dp"
                android:layout_marginRight="40dp"
                android:background="@drawable/button"
                android:text="CONNEXION"
                android:textColor="@android:color/white" />

            <Button
                android:id="@+id/register_bt"
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:layout_marginLeft="40dp"
                android:layout_marginRight="40dp"
                android:layout_marginTop="10dp"
                android:background="@drawable/button"
                android:text="INSCRIPTION"
                android:textColor="@android:color/white" />
        </LinearLayout>
    </LinearLayout>
</RelativeLayout>
And here is the layout of the pager fragment:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <LinearLayout
        android:id="@+id/text_layout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="10dp"
        android:orientation="vertical" >

        <TextView
            android:id="@+id/tagline_tv1"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:gravity="center"
            android:singleLine="true"
            android:text="Laissez votre prochain job"
            android:textAppearance="?android:attr/textAppearanceMedium"
            android:textColor="@android:color/white" />

        <TextView
            android:id="@+id/details_tv"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:gravity="center"
            android:maxLines="2"
            android:text="vous trouver"
            android:textAppearance="?android:attr/textAppearanceMedium"
            android:textColor="@android:color/white" />
    </LinearLayout>
</RelativeLayout>
The sample splash screens show what I want to implement (the LinkedIn splash screen). Thanks.
2 solutions collected from the web for "ViewPager tab-change animation with fade-in/fade-out, like the LinkedIn intro screen"
This approach is lag-free and also handles the buttons.
Main idea:
1) First give your fragments a transparent background.
2) Create a LayerDrawable and add each fragment's background image as an item. Then set the LayerDrawable as the background of your ViewPager.
3) In the onCreate method, set the alpha of each layer so that only the top one has an alpha value of 255.
4) For each view of your FragmentStatePagerAdapter, set a tag that corresponds to the index of the drawable you declared in the LayerDrawable. For example, when the app opens and FragmentA is showing, its tag must correspond to the top drawable, which is 2 (counting from 0). The tag of the last page must be 0, corresponding to the lowest drawable.
5) Change the drawable of each view in the transformPage function.
6) To place the buttons on top of all the views, use a RelativeLayout; later children are placed higher on the Z axis. You can see this in the code below.
Now let's look at the code:
Main activity
public class MainActivity extends FragmentActivity {

    ViewPager viewPager = null;
    int numberOfViewPagerChildren = 3;
    int lastIndexOfViewPagerChildren = numberOfViewPagerChildren - 1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        viewPager = (ViewPager) findViewById(R.id.pager);
        viewPager.setAdapter(new MyAdapter(getSupportFragmentManager()));

        final LayerDrawable background = (LayerDrawable) viewPager.getBackground();
        background.getDrawable(0).setAlpha(0);   // this is the lowest drawable
        background.getDrawable(1).setAlpha(0);
        background.getDrawable(2).setAlpha(255); // this is the upper one

        viewPager.setPageTransformer(true, new ViewPager.PageTransformer() {
            @Override
            public void transformPage(View view, float position) {
                int index = (Integer) view.getTag();
                Drawable currentDrawableInLayerDrawable;
                currentDrawableInLayerDrawable = background.getDrawable(index);
                if (position <= -1 || position >= 1) {
                    currentDrawableInLayerDrawable.setAlpha(0);
                } else if (position == 0) {
                    currentDrawableInLayerDrawable.setAlpha(255);
                } else {
                    currentDrawableInLayerDrawable.setAlpha((int) (255 - Math.abs(position * 255)));
                }
            }
        });
    }

    class MyAdapter extends FragmentStatePagerAdapter {

        public MyAdapter(FragmentManager fm) {
            super(fm);
        }

        @Override
        public Fragment getItem(int i) {
            Fragment fragment = null;
            if (i == 0) { fragment = new FragmentA(); }
            if (i == 1) { fragment = new FragmentB(); }
            if (i == 2) { fragment = new FragmentC(); }
            return fragment;
        }

        @Override
        public int getCount() {
            return numberOfViewPagerChildren;
        }

        @Override
        public boolean isViewFromObject(View view, Object object) {
            if (object instanceof FragmentA) { view.setTag(2); }
            if (object instanceof FragmentB) { view.setTag(1); }
            if (object instanceof FragmentC) { view.setTag(0); }
            return super.isViewFromObject(view, object);
        }
    }
}
activity_main.xml
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <android.support.v4.view.ViewPager
        android:id="@+id/pager"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@drawable/layerdrawable" >
    </android.support.v4.view.ViewPager>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:orientation="horizontal"
        android:layout_marginBottom="48dip" >

        <Button
            android:layout_width="0dip"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="Sign in"
            android:layout_margin="16dip"
            android:background="#2ec6e4"
            android:textColor="#FFFFFF" />

        <Button
            android:layout_width="0dip"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="Join us"
            android:background="#2ec6e4"
            android:layout_margin="16dip"
            android:textColor="#FFFFFF" />
    </LinearLayout>
</RelativeLayout>
LayerDrawable
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
    <item>
        <bitmap
            android:id="@+id/Idofbg3"
            android:gravity="fill"
            android:src="@drawable/bg3" />
    </item>
    <item>
        <bitmap
            android:id="@+id/Idofbg2"
            android:gravity="fill"
            android:src="@drawable/bg2" />
    </item>
    <item>
        <bitmap
            android:id="@+id/Idofbg1"
            android:gravity="fill"
            android:src="@drawable/bg1" />
    </item>
</layer-list>
For lazy people who just don't want to declare the fragments:
Fragment
public class FragmentA extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        View v = inflater.inflate(R.layout.fragment_a, container, false);
        return v;
    }
}
fragment_a.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/FragmentA"
    android:background="@android:color/transparent">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textAppearance="?android:attr/textAppearanceLarge"
        android:text="This is Fragment A"
        android:textColor="#FFFFFF"
        android:id="@+id/textView"
        android:gravity="center"
        android:layout_alignParentTop="true"
        android:layout_alignParentLeft="true"
        android:layout_alignParentRight="true"
        android:layout_alignParentBottom="true" />
</RelativeLayout>
Set a ViewPager.PageTransformer on the ViewPager and get the desired animation using the alpha and translation animations.
The most important input is the position parameter passed to the transformPage callback. The position value indicates how the view is currently positioned.
Assuming the views in the ViewPager are full-width, here is how the position value should be interpreted:
------------------------------------------------------------------------------------
 position  | what does it mean
------------------------------------------------------------------------------------
 0         | view is positioned in the center and fully visible to the user.
 -1        | view is positioned in the left and not visible to the user.
 1         | view is positioned in the right and not visible to the user.
 >-1 & <0  | view is being scrolled towards left and is partially visible.
 >0 & <1   | view is being scrolled towards right and is partially visible.
------------------------------------------------------------------------------------

mPager.setPageTransformer(true, new ViewPager.PageTransformer() {
    @Override
    public void transformPage(View view, float position) {
        // Ensures the views overlap each other.
        view.setTranslationX(view.getWidth() * -position);

        // Alpha property is based on the view position.
        if (position <= -1.0F || position >= 1.0F) {
            view.setAlpha(0.0F);
        } else if (position == 0.0F) {
            view.setAlpha(1.0F);
        } else {
            // position is between -1.0F & 0.0F OR 0.0F & 1.0F
            view.setAlpha(1.0F - Math.abs(position));
        }

        // TextView transformation
        view.findViewById(R.id.textView).setTranslationX(view.getWidth() * position);
    }
});
Here is the layout:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <ImageView
        android:layout_alignParentTop="true"
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/textView" />
</RelativeLayout>
Here is the screen recording:
Screen recording
dozc1071 · 2014-02-09 18:06 · acceptance rate: 100% · 70 views · accepted
Sending form data with ajax
I want to send all the inputs in a form with ajax. I have a form like this:
<form action="target.php" method="post" >
<input type="text" name="lname" />
<input type="text" name="fname" />
<input type="buttom" name ="send" onclick="return f(this.form ,this.form.fname ,this.form.lname) " >
</form>
And in .js file we have following code :
function f (form ,fname ,lname ){
att=form.attr("action") ;
$.post(att ,{fname : fname , lname :lname}).done(function(data){
alert(data);
});
return true;
}
But this is not working. I don't want to use FormData.
7 answers (sorted by: default / newest)
• douzhang1852 2014-02-09 18:14
Since we want to send all the form input fields that have a name attribute, you can do this for any form, regardless of the field names:
First Solution
function submitForm(form){
var url = form.attr("action");
var formData = {};
$(form).find("input[name]").each(function (index, node) {
formData[node.name] = node.value;
});
$.post(url, formData).done(function (data) {
alert(data);
});
}
Second Solution: in this solution you can create an array of input values:
function submitForm(form){
var url = form.attr("action");
var formData = $(form).serializeArray();
$.post(url, formData).done(function (data) {
alert(data);
});
}
Sep 10, 2023
What Is MFA (Multifactor Authentication)
The password has been a staple of access control for decades. However, we can no longer rely on this form of single-factor authentication for robust cybersecurity. Multifactor authentication (MFA) was developed to further protect account access, resources, and data, and to prevent cyberattacks. This article shows how MFA is used to protect assets and to ensure compliance.
Why use multifactor authentication?
For many years, the "username and password" combination (known as single-factor authentication) was the default method for controlling access to resources. Unfortunately, having only one user-verification factor gives cybercriminals only one control mechanism to target and overcome. To do this, they've developed several cyberattack techniques:
Phishing: Credential theft is often achieved through phishing — sending emails or other messages purporting to be from trustworthy companies, in order to trick people into revealing information such as passwords. This is the most popular cyberattack technique, according to IBM research.
Social engineering: This is a form of phishing in which cybercriminals psychologically manipulate people into willingly handing over passwords.
Spear phishing: This form of phishing targets specific individuals and is often used to steal password credentials from employees with wide-ranging access, such as administrators.
Brute force: According to Cybernews research, the most commonly used passwords in 2024 are:
1. 123456
2. 123456789
3. Qwerty
Brute force attacks are automated attempts to guess a password in order to access an application. With easy-to-guess passwords, brute force attacks become a serious threat to single-factor verification systems.
Credential stuffing: Attackers often use passwords stolen through phishing attacks or found via data breaches to attempt to access other applications.
Server compromise: Hackers also try to gain access to stored corporate passwords. If they are poorly secured, the attacker can get all of a company’s passwords.
Man-in-the-middle (MitM) attacks: Some attackers attempt to intercept usernames and passwords when they are submitted via a web browser. If the attempt is successful, the attacker then uses the credentials to log in.
Keylogging malware: Certain types of malware are used in stealth mode to detect when a password is entered on a device. The malware then captures the information and sends it to a hacker’s account for reuse.
Password-recovery systems based on a single factor are highly vulnerable and a frequent point of attack; even passwordless authentication must have a robust recovery system in place.
Where does a multifactor authentication system fit in with identity management?
MFA is a strong security measure that addresses the issues inherent in single-factor authentication. In access management, MFA is used to request multiple verification methods whenever a person attempts to log in to an app or access a resource.
MFA has been criticized for providing a poor user experience — because, for example, after entering a first factor such as a password, the user must then enter another factor such as a one-time code sent to a mobile device. However, many identity management systems have strategies for improving user experience when using strong authentication. For example, rules allow a user to log in using MFA once and then apply device authorization. The next time the user logs in, the second factor won’t be requested. This rule can be granular and set authorization to a specific time, geolocation, and so on.
Instances where using multifactor authorization would have mitigated a breach
MFA adds layers of security to help prevent unauthorized access to online accounts. The two examples of cyberattacks shown below could have been prevented if an MFA system had been in place:
• In 2022, Uber suffered a serious breach via a social engineering attack. The attacker tricked an employee into handing over their password. This was then used to access the employee’s Slack account and several other internal systems.
• In 2023, 23andMe, a DNA testing company, suffered a massive public data breach affecting 6.9 million customers. Using a “credential stuffing” tactic, the attacker stole passwords from previous breaches and used them to log into accounts.
These breaches and other similar attacks could have been prevented if an additional authentication factor had been in place.
Multifactor authentication factors
MFA is based on a few types of secure authentication factors:
1. Something you know (knowledge): Such as a password, the answer to a question, or a personal identification number (PIN).
2. Something you have (possession): Such as a security code or token.
3. Something you are (inherence): Such as a biometric authentication or a behavioral characteristic.
Sometimes a fourth authentication factor is included:
4. Somewhere you are (location): The geolocation of an individual logging in can be enforced as a factor in controlling access.
Examples of multifactor authentication methods
Multifactor authentication combines security and usability. The following examples show the different authentication methods used to apply multiple authentication factors.
Time-based one-time password (TOTP)
This is typically a six-digit code generated by, for instance, a Microsoft or Google authenticator app that a user has on an approved device. The user enters the correct username and password, and then enters the code. The code works only for a limited amount of time, typically measured in seconds (this is configurable by the system).
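As a concrete illustration of the mechanism, here is a minimal Java sketch of TOTP code generation following RFC 6238; the 30-second window, the HMAC-SHA1 MAC, and the demo secret are the RFC's illustrative defaults, not details of any particular vendor's app:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Totp {
    // Derives a 6-digit code from a shared secret and the current
    // 30-second time window (RFC 6238 on top of RFC 4226 HOTP).
    public static String code(byte[] sharedSecret, long unixSeconds) throws Exception {
        long timeStep = unixSeconds / 30;                 // 30-second window
        byte[] counter = ByteBuffer.allocate(8).putLong(timeStep).array();

        Mac mac = Mac.getInstance("HmacSHA1");            // RFC 6238 default MAC
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(counter);

        // Dynamic truncation (RFC 4226): pick 4 bytes at an offset
        // given by the low nibble of the last byte of the hash.
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   | (hash[offset + 3] & 0xFF);

        return String.format("%06d", binary % 1_000_000); // 6-digit code
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(); // demo secret only
        System.out.println(code(secret, System.currentTimeMillis() / 1000));
    }
}

Because both sides derive the code from the shared secret and the clock, the server can verify it without any message being sent at enrollment time beyond the secret itself.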
SMS text message code
Like a TOTP, an SMS text code is generated once a username and password are successfully submitted. The SMS text code is sent to the phone registered with the account. The user then enters the code within a given timeframe to gain access to the account or resource.
Biometric
Biometric authentication, such as facial recognition, fingerprint scans, and behavioral biometrics are used on mobile devices to control access. Biometrics are increasingly used to access bank apps and other financial services. Often, a biometric is associated with a username and password, which can be used as part of the recovery system if the biometric fails for some reason.
Security questions
Another MFA method is the use of personal security questions set up during the registration of an account. Sometimes, a security question is used to perform phone verification when a customer calls a company, such as a bank.
Physical token
A physical security token such as a hardware key can be used to access an account or another network resource. The key is inserted into a computer’s USB port or sometimes tapped on a device to authenticate the user.
Is two-step verification the same as multifactor authentication?
Two-step verification, also known as two-factor authentication (2FA), is a subset of multifactor authentication (MFA). While MFA requires two or more authentication factors to validate access, 2FA requires exactly two. In other words, 2FA is a form of MFA, but not every MFA setup is 2FA.
What is adaptive authentication?
Risk-based, adaptive authentication, or step-up authentication, is a type of access control that adapts to login risk levels. Rules determine the risk level, and additional authentication factors, or adaptive MFA, are employed under certain conditions. For example, if a person logs in from London and then two hours later attempts to log in from Moscow, the identity management system would request additional authentication factors. Similarly, risk-based authentication can be used to suppress authentication factors if the conditions are low risk — for example, access from a specific IP address.
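As an illustration of such rules, here is a hedged Java sketch; the class, method and field names, the 180-minute threshold, and the trusted-IP list are all invented for this example, not taken from any product:

import java.util.Set;

public class RiskPolicy {

    public enum Required { PASSWORD_ONLY, PASSWORD_PLUS_SECOND_FACTOR }

    // Hypothetical allow-list of low-risk source addresses.
    private final Set<String> trustedIps = Set.of("203.0.113.7");

    public Required evaluate(String country, String previousCountry,
                             long minutesSinceLastLogin, String ipAddress) {
        // "Impossible travel": the country changed implausibly fast.
        boolean impossibleTravel =
                !country.equals(previousCountry) && minutesSinceLastLogin < 180;
        if (impossibleTravel) {
            return Required.PASSWORD_PLUS_SECOND_FACTOR; // step up
        }
        // Low-risk condition: a known IP suppresses the second factor.
        if (trustedIps.contains(ipAddress)) {
            return Required.PASSWORD_ONLY;
        }
        return Required.PASSWORD_PLUS_SECOND_FACTOR; // default: require MFA
    }
}

Real systems score many more signals (device fingerprint, time of day, network reputation), but the shape is the same: rules map a login context to the set of factors the user must present.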
What are the benefits of using multifactor authentication?
MFA is a layered approach to security that makes unauthorized access more difficult. By using MFA to protect digital resources, a company reduces the risk of many types of cyberattack. An MFA solution prevents phishing, account compromise, identity theft, ransomware infection, and data breaches.
Implementing multifactor authentication helps an organization meet data security regulations including HIPAA, SOC 2, PCI-DSS, SOX, and GLBA.
Single sign-on (SSO) and risk-based authentication are commonly combined with MFA to improve both usability and security: SSO reduces how often users must authenticate, while risk-based rules decide when extra factors are actually required.
Google Workspace and 2FA
Productivity applications such as Google Workspace support the use of 2FA (also known as 2-step verification or 2SV). Two-factor authentication for Google Workspace can be set up by the user directly or enforced by the service administrator. Google Workspace 2FA can be used to ensure that a specific team uses security keys with certain apps.
The types of 2FA methods supported by Google Workspace are:
• Security keys (key fob)
• Google prompt, text message, or phone call
• Google Authenticator app
• Backup codes
ONJava.com -- The Independent Source for Enterprise Java
Article:
Untwisting Python Network Programming
Subject: Why untwisting?
Date: 2006-08-14 08:15:01
From: [email protected]
Response to: Why untwisting?
Are you measuring the time it takes to perform a task, or the amount of time it takes to start the interpreter, load every module, perform the task, and shut down the interpreter?
Twisted has more code in it than the Python standard library version, so unless you've carefully optimized the package for importing, the amount of time spent loading code will dwarf the amount of time spent actually doing anything.
1. Short examples don't show event-driven benefits
2006-08-23 11:11:38 andypurshottam
How To Clear Browser Cookies On iPhone
As a general rule of thumb, most internet users are aware that they should clear their browser's cookies on a regular basis, for a variety of reasons, including freeing up space. Is it necessary to clear your iPhone's cookies as well? Clearing cookies on your iPhone isn't a necessity, but it will free up space and may help resolve certain browsing issues.
What is the procedure for clearing cookies in Safari on an iPhone?
Clearing your iPhone's cookies is simple and won't take much time. You don't need to download any additional apps; all it takes is a few taps on your device. However, the exact steps vary depending on the browser you're using. Apple's Safari mobile browser is a popular choice for iPhone users, and its cookies can be cleared easily by following the instructions below.
1. Go to Settings on your iPhone and scroll down through the list of apps to locate Safari, then tap it to open the browser’s settings.
2. Scroll to the bottom of Safari’s settings and tap Advanced.
3. Tap Website Data. To clear all data, simply tap Remove All Website Data. If you only want to clear certain cookies, tap the Edit button in the upper right corner of the screen, then tap the red circle next to the data files that you want removed.
What happens when you clear the cache and cookies on your iPhone?
All web browsers, with the exception of those that place a high value on privacy, store cookies and other information about your visits to various websites. Safari comes pre-installed on iPhones, but many people use other browsers; the most popular alternative is Chrome.
How to clear cookies in Chrome on an iPhone
Chrome’s method of clearing cookies differs slightly from Safari.
1. Open Google Chrome on your iPhone.
2. Tap the Menu button located in the bottom right corner of the screen (it’s the one with three dots).
3. Tap History > Clear Browsing Data > Cookies, Site Data
4. Tap on Clear Browsing Data
It’s done.
Why it’s important? Do I need to clear my cookies on my iPhone?
Cookies improve the online user experience by enabling the generation of tailored adverts and useful recommendations. However, there are a few reasons why iPhone users might want to clear their cookies. Having a lot of cookies on your iPhone will slow down your surfing speed. Have you noticed that your iPhone takes longer to load websites than usual? It’s possible that cookies are to blame for this problem, and deleting them could fix it.
org.netbeans.modules.editor.lib2/1 2.5.0 44
Package org.netbeans.spi.editor.highlighting
The Highlighting SPI is a new way of influencing how text in an editor component is rendered.
Package org.netbeans.spi.editor.highlighting Description
The Highlighting SPI is a new way of influencing how text in an editor component is rendered. The editor framework in Netbeans is an extension of the Swing Text SPI framework and as such it uses things like Elements and Views to render a Document on a screen.
Since the editor framework is primarily designed to support various different types of files in the IDE, it has to give modules a chance to participate in document rendering. Modules providing support for different languages usually need to influence the colors and fonts of different parts of a source file depending on what code it contains (i.e. syntax coloring) or what other information the module needs to present to a user (e.g. text annotations, hyperlinking, etc.). All this and more can be achieved by using the Highlighting SPI.
Key parts of the SPI
The very basic idea behind the SPI is to render a document as a sandwich of independent layers, which will say what colors and font should be used for rendering particular parts of the document. These parts of the document together with their rendering attributes (i.e. colors or font) are called highlighted areas or highlights. Each layer can provide as many non-overlapping highlights as it likes and each module can provide as many layers as it needs. The implementation behind the SPI will collect all layers registered for a particular document type (i.e. mime type), ask each of them for its highlights, merge those highlights together and finally send them to the draw engine, which will render the document.
The whole SPI is organized around the HighlightsLayer class, which is the ultimate thing that modules need to implement in order to provide a list of highlights for a document. The HighlightsLayers are created by HighlightsLayerFactory, which should be registered in MimeLookup under the mime-type of a document that the layer should be used for. All layers registered for one type of a document are ordered according to the ZOrder they provide. Besides the ZOrder, the layers provide additional information about the nature of the highlights they maintain.
The HighlightsLayer class implements the HighlightsContainer interface, which is the fundamental part of the SPI. The HighlightsContainer interface provides access to the list of highlights and makes it possible to listen for changes in the highlights it contains. Besides HighlightsLayer there are two other implementations of this interface: OffsetsBag and PositionsBag. Both the OffsetsBag and PositionsBag classes allow adding and removing highlights dynamically. The highlights can be added either one-by-one or in chunks; each change is reported to listeners.
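As a rough illustration of the OffsetsBag API (the offsets, the helper class name, and the passed-in attribute set below are invented for this sketch), a layer tracking search matches might maintain its highlights like this:

import javax.swing.text.AttributeSet;
import javax.swing.text.Document;
import org.netbeans.spi.editor.highlighting.support.OffsetsBag;

final class SearchHighlights {
    // Illustrative only: the offsets are made up.
    static OffsetsBag createBag(Document doc, AttributeSet attrs) {
        OffsetsBag bag = new OffsetsBag(doc);
        bag.addHighlight(10, 20, attrs);  // first match
        bag.addHighlight(42, 50, attrs);  // second match
        return bag;                       // each change notifies listeners
    }
}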
HighlightsLayer registration
The registration of HighlightsLayers has to be done through an instance of the HighlightsLayerFactory class. The factory should be registered in MimeLookup under the mime-type of documents which the HighlightsLayer should be used for. For example, if a module wants to provide a HighlightsLayer for text/x-something documents, it should implement its own HighlightsLayerFactory (e.g. the org.some.module.HLFactory class) and register it in MimeLookup using its XML layer as shown in the example below.
<folder name="Editors">
<folder name="text">
<folder name="x-something">
<file name="org-some-module-HLFactory.instance" />
</folder>
</folder>
</folder>
The HLFactory class will simply return a new instance of the module's implementation of the HighlightsLayer class from its createLayers method. The parameter of the createLayers method provides access to a JTextComponent and its Document, which the layer is being created for. The method can create and return multiple HighlightsLayers.
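A minimal sketch of such a factory might look like the following; the layer type id and the position within the rack are this example's own choices, and the factory simply hands the editor an empty OffsetsBag that the module can fill later:

import org.netbeans.spi.editor.highlighting.HighlightsLayer;
import org.netbeans.spi.editor.highlighting.HighlightsLayerFactory;
import org.netbeans.spi.editor.highlighting.ZOrder;
import org.netbeans.spi.editor.highlighting.support.OffsetsBag;

public class HLFactory implements HighlightsLayerFactory {
    @Override
    public HighlightsLayer[] createLayers(Context context) {
        // One layer backed by an OffsetsBag for the given document.
        return new HighlightsLayer[] {
            HighlightsLayer.create(
                "org-some-module-MyHighlights",         // layer type id
                ZOrder.CARET_RACK.forPosition(100),     // z-order within a rack
                true,                                   // fixedSize: attrs don't change metrics
                new OffsetsBag(context.getDocument()))  // the highlights container
        };
    }
}

Keeping a reference to the OffsetsBag lets the module mutate it as its analysis runs; since every change fires events on the container, the editor repaints the affected regions automatically.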
HighlightsLayer lifecycle
The lifecycle of HighlightsLayers is tied to the lifecycle of Document. The infrastructure creates new instances of layers by calling registered HighlightsLayerFactory objects every time it needs to visualize a new Document. The layers created for one Document are not cached or reused in any way. This means that the layers themselves do not have to worry about a potential change of the Document instance in a JTextComponent. The infrastructure will always create a new set of layers if the Document instance changes. Therefore the layers can simply hold their instance of JTextComponent and/or Document and treat them as invariants.
Locking and Document changes
The basics of the locking and events model of Swing documents is described in the javadoc of the javax.swing.text.AbstractDocument class. Netbeans documents use the same patterns and so does the Highlighting SPI, because of its tight relation to documents. The fundamentals of the Swing documents locking model are that any changes to a document are done under the document's write lock, the document's listeners are notified synchronously on the mutating thread and have full read access to the document, but can't modify it.
The main functionality of the Highlighting SPI is to maintain highlights of certain areas of a document. These highlights are specified as a triple of starting offset, ending offset and a set of attributes. The offsets are usually passed in and out across the SPI boundaries in the form of ints, and even though some implementations (e.g. PositionsBag) use Positions, the essential rule is that any calls in and out of the SPI have to be made under the document's read lock.
The Highlighting SPI does not use any special threads and any processing it does is always done on the caller's thread. This means that the above described constraints hardly cause any limitation for practical use. The majority of things happening around a document are done from within DocumentListeners, which hold the document's read lock anyway.
The Highlighting SPI is generally thread-safe meaning that any implementation behind the SPI can be used simultaneously from multiple threads if not stated otherwise. This doesn't change in any way the rule of acquiring a read lock before calling the SPI. Swing documents generally allow access for multiple readers that can run concurrently.
Z-order
Since there can be multiple layers supplying highlights for one document, and the highlights can generally overlap, it is important to sort the layers according to their Z-order. For this purpose each layer has to supply an appropriate ZOrder.
ZOrder maintains a position of a layer relative to other layers as a simple integer number. The higher the number, the higher (more visible) the layer is in the z-order hierarchy. Instances of the ZOrder class are immutable, making it impossible to dynamically change a position of a layer in the z-order stack created for a document.
The ZOrder class contains several predefined constants, which can be used as well-known positions. These constants are called z-order racks and are meant to be used as a starting point for positioning a layer. An exact z-order can then be specified by choosing an integer position of the layer within a rack. The racks are listed below in their respective z-order, from the bottom up: BOTTOM_RACK, SYNTAX_RACK, CARET_RACK, DEFAULT_RACK, SHOW_OFF_RACK, TOP_RACK.
Using AttributeSet
The Highlighting SPI uses javax.swing.text.AttributeSet to define attributes for particular highlights. These attributes can be anything, which the editor's drawing engine understands and can render. Usually the attribute names are constants from javax.swing.StyleConstants or org.netbeans.api.editor.settings.EditorStyleConstants. The values depend on the meaning of each particular attribute, but they usually are instances of java.awt.Color, java.lang.Integer, Boolean.TRUE or Boolean.FALSE and similar.
Since there can be more highlighting layers participating on one document and they can provide highlights that overlap the infrastructure will merge attributes from all AttributeSets provided for areas with overlapping highlights. The merging is done in the order defined by ZOrders of the participating layers, which means that if two layers provide an attribute with the same name then the merged AttributeSet will contain the attribute from the layer, which is placed higher in the z-order hierarchy.
There are two important rules for using AttributeSets, which should be carefully followed by all highlighting layer implementations. Violating these rules may potentially break the rendering of a document or may cause performance problems.
The AttributeSets used for highlighting are often created by calling FontColorSettings, and it is the responsibility of that class to prevent excessive creation of the AttributeSets it provides. However, if your highlighting layer creates its own AttributeSets, they should always be cached and reused. You can use methods from the AttributesUtilities class for creating immutable AttributeSets.
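A minimal sketch of that caching pattern: the attribute set is created once, as a constant, and reused for every highlight the layer produces (the field name and the color are illustrative):

import java.awt.Color;
import javax.swing.text.AttributeSet;
import javax.swing.text.StyleConstants;
import org.netbeans.api.editor.settings.AttributesUtilities;

final class LayerAttributes {
    // Created once, shared by every highlight this layer produces.
    static final AttributeSet SEARCH = AttributesUtilities.createImmutable(
            StyleConstants.Background, Color.YELLOW);
}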
Use cases
Use case 1. - Caret selection
The Netbeans editor as any other modern editor allows selecting blocks of text and highlighting them to a user for easier identification. We call this functionality caret selection services and it includes things as simple as marking a block of text that the user selected for copy/paste operation or highlighting a line where the caret is placed to more complex ones such as highlighting occurences of a text that the user search for using the 'Find dialog', etc.
This functionality usually only needs to create one highlight and update it depending on the caret movements/selection notified from JTextComponent. The more complex cases may need to create several highlights (e.g. to show the text being searched for). Generally, the highlights are created independently of text changes in the document itself (e.g. on a caret move or a text search). However, they have to survive editing the document (e.g. the highlighted occurrences of the searched text have to remain highlighted when other parts of the document are edited).
The caret selection highlights are generally short-lived and have higher importance than other highlights (e.g. syntax or semantic coloring). They usually change the background color to highlight the selection, but also retain as much of the visual appearance of the highlighted text as possible.
Use case 2. - Syntax highlighting
This type of a document coloring shows 'words' or characters in different colors to indicate their meaning in the structure of the text document. This is very popular with highly structured documents such as source code files, scripts, SGML-like documents, etc. It's usually not used for plain text documents containing text in a human language.
Syntax highlighting in Netbeans editor is based on a lexical analysis done by lexer plug-ins registered for various types of documents. The lexers are written using the APIs provided by the Lexer module. During the lexical analysis text gets split into tokens of different types and categories. Each token type or category can have defined its own coloring information such as font and foreground and background colors, etc. Tokens know their position in text (i.e. offset and length), which information can then be used for creating highlights.
Decoupling the lexers and making them pluggable makes syntax highlighting very flexible. A single layer based on the Lexer API can colorize all sorts of documents, provided that there is a lexer registered for each type of document.
Generally, syntax analysis is very fast, and syntax highlighting immediately reflects changes to the text. The syntax highlighting layer is usually at the bottom of the hierarchy of highlighting layers.
Use case 3. - Semantic highlighting
In fact, semantic coloring, regardless of the language it is provided for, is very similar to syntax coloring. Words or groups of characters are highlighted depending on their meaning in the text. The difference is in the amount of information that is needed to make this type of coloring meaningful. While with syntax coloring all the information needed is in the text itself, with semantic coloring parts of the text can be colored depending on information found in a completely different document (e.g. in another source file, library, project, etc.).
Semantic highlighting is highly dependent on the type of document and therefore is usually provided on a case-by-case basis and only for the most important types of documents (i.e. those most frequently used, such as java files in Netbeans). Also, semantic coloring is generally not very fast, because of the amount of information that sometimes needs to be gathered before a document can be colored. Therefore, while every effort is made to make semantic coloring reflect text changes as soon as possible, it is generally done asynchronously outside of the document's event model, and highlights are created as soon as they are available. The tokens created during the semantic analysis always contain the token's position within the text in some form (i.e. either offset or Position). If Positions are available, they should be accepted and reused by the Highlighting SPI.
Use case 4. - Embedded languages
An embedded language is a language of a part of a document that is different than the main language of the document. An example can be a java scriptlet in a JSP page or JavaScript in an HTML document. The main language of a JSP page is 'text/x-jsp', and the embedded language in the case of a java scriptlet is 'text/x-java'. For the HTML document the main language is 'text/html', and if a JavaScript part is included in the document, 'text/x-javascript' is the embedded language.
Language embedding is supported by the Lexer API, and therefore there is no problem with supporting it for syntax coloring. For semantic coloring, all the work falls on the highlighting layers providing semantic coloring support for a particular language. These layers have to be prepared to provide highlights for parts of a document that does not consist of text in the language they support, but contains embedded parts in that language. The Highlighting infrastructure will scan the document for all languages it contains and then create appropriate highlighting layers. The layers can be added dynamically as a user inserts parts of text in a new language. The layers, however, may not be removed immediately when the last part of text in a language they support is removed. Therefore the layers should be prepared to provide no highlights if there is no text they recognize.
Use case 5. - Filtering layers used for JTextComponent
In certain situations JTextComponent or JEditorPane are used for purposes other than editing. For example, a debugger may want to show a JEditorPane for adding a new watch, where a user can write a piece of java code that should be evaluated. This pane should use basically the same layers, so that the entered code looks like properly colored and formatted java code. However, it is not desirable to use exactly the same layers as for an ordinary java editor, because some highlightings have little value in this context or could even be disturbing. There is no point in highlighting the row with the caret, because watches are essentially one-line expressions. There is also little point in showing text-search related highlights, because hardly anybody will use text search in these simple expressions anyway. On the other hand, it makes sense to highlight selected text if the user selects some.
There can be a whole range of usecases where modules need to show an editor pane, but do not want to use a particular set of highlighting layers, which are registered for the mime type of text that the module is trying to display and which would normally be used for an ordinary editor pane. These usecases are very specific for each module and its way of implementing some features.
The editor insfrastructure supports this usecase through allowing modules to set special properties on the editor pane that they want to use for displaying text. The properties are called HighlightsLayerIncludes and HighlightsLayerExcludes. The value of those properties can be String or String [] of regular expressions that will be used for finding the matching layers by evaluating each regular expression against the layer's type id. The exact interpretation of those two properties is described below.
The filters defined by those two properties are used in the same order as they were listed above. That is, the includes are applied first, and whatever layers they include are then filtered by the excludes filter. The result is then used for rendering text in the editor component that defined those properties.
The example below shows how to disable the caret row highlighting layer on JEditorPane.
JEditorPane pane = new JEditorPane();
pane.putClientProperty(
"HighlightsLayerExcludes",
"^org\\.netbeans\\.modules\\.editor\\.lib2\\.highlighting\\.CaretRowHighlighting$"
);
Other usecases
The main usecases described above are certainly not the only usecases of the Highlighting SPI. In general the SPI can be used for binding any type of information to parts of text in a document. While this information should have limited size to preserve the performance of the Netbeans editor, it can be pretty much anything. Information provided in highlights is currently used only by the editor's drawing engine, which provides a limited set of features useful mostly for rendering text. Some other uses could be, for example, text annotations, hyperlinking, showing icons in text, etc.
FoxWeb Forum
We moved FW to a new server and changed to using run-time DLLs. The FoxWeb Channel Status shows fewer than the 8 configured channels, dropping as low as 2, but Task Manager shows 8 instances of FWServer running. Force-quitting them in Task Manager restarts the channels.
System:
Win2K server SP4
FW 2.5
two 2.4 GHz Xeon CPUs
VFP 6
We moved from an older box to see if it would fix a timeout problem. This has been an intermittent problem: it occurs as a storm, with all channels timing out and CPU and network activity dropping to 0, as if waiting for a resource to free up. The FW start log shows it happening to a random sampling of programs.
leftso · 2021-10-11
Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission. https://www.leftso.com/blog/912.html
Problem description
By default, when Spring's RestTemplate is used to call an API, the response data is returned successfully if the HTTP status code is 200. Sometimes, though, when the remote API reports an error or a failure, it sets a non-200 HTTP status code, and the default RestTemplate call then throws an exception instead of returning the remote side's error JSON.
RestTemplate source code analysis
1. Stepping through a postForEntity request shows that the method that decides whether the response status code is an error is the hasError method of the org.springframework.web.client.DefaultResponseErrorHandler class:
@Override
public boolean hasError(ClientHttpResponse response) throws IOException {
int rawStatusCode = response.getRawStatusCode();
HttpStatus statusCode = HttpStatus.resolve(rawStatusCode);
return (statusCode != null ? hasError(statusCode) : hasError(rawStatusCode));
}
Tracing one level further up the call chain:
protected void handleResponse(URI url, HttpMethod method, ClientHttpResponse response) throws IOException {
ResponseErrorHandler errorHandler = getErrorHandler();
boolean hasError = errorHandler.hasError(response);
if (logger.isDebugEnabled()) {
try {
int code = response.getRawStatusCode();
HttpStatus status = HttpStatus.resolve(code);
logger.debug("Response " + (status != null ? status : code));
}
catch (IOException ex) {
// ignore
}
}
if (hasError) {
errorHandler.handleError(url, method, response);
}
}
From the code above you can see that RestTemplate delegates to its error handler, so the solution is to supply a custom error handler.
Solution
Define the custom error handler as shown below:
@Bean
public RestTemplate restTemplate(ClientHttpRequestFactory factory){
RestTemplate restTemplate = new RestTemplate(factory);
ResponseErrorHandler responseErrorHandler = new ResponseErrorHandler() {
@Override
public boolean hasError(ClientHttpResponse response) throws IOException {
return true;
}
@Override
public void handleError(ClientHttpResponse response) throws IOException {
}
};
restTemplate.setErrorHandler(responseErrorHandler);
return restTemplate;
}
Notes
1. hasError is hard-coded to return true, so every response goes through the handler;
2. handleError is left empty, which effectively skips throwing the error.
With the configuration above, the response body is returned whether the status code is 200 or anything else; a usage sketch follows below.
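A sketch of a call site once this handler is installed; the URL and the String request/response types are placeholders chosen for illustration, not part of the original post:

import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class ApiClient {
    private final RestTemplate restTemplate;

    public ApiClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public String call(String jsonRequest) {
        ResponseEntity<String> response = restTemplate.postForEntity(
                "https://example.com/api", jsonRequest, String.class);
        if (response.getStatusCode().is2xxSuccessful()) {
            return response.getBody();  // normal result
        }
        // Non-200: with the no-op handler the error JSON is still readable here.
        return response.getBody();
    }
}

The trade-off of the no-op handler is that failures no longer raise exceptions, so every caller must branch on the status code itself.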
Note: this article was last updated on 2021-10-11 10:11:11. Some articles are time-sensitive; if anything is wrong or out of date, please leave a comment below.
Bitcoin uses proof of work to secure the network, Ripple uses a global consensus system and PPCoin uses proof of stake.
Are there any known alternatives to the above methods?
• in the ppc paper they talk about a 'proof of excellence' where all peers compete to solve a game. It is basically proof of work, but a better algorithm could score higher. – placeybordeaux Mar 20 '13 at 19:33
• @David Perry. This is a subtly different question than what you've suggested as a duplicate. This person is asking if there exist alternative distributed synchronization algorithms. The other question asks if the proof-of-work system can prove secondarily-useful work. There is a clear difference. – Rooke Mar 21 '13 at 3:50
• @Rooke fair enough, I'll re-open it. – David Perry Mar 21 '13 at 5:08
Another alternative is proof of burn.
A really quick answer is this: The proof of work system is a solution to the distributed synchronization issue; in another guise it is called the Byzantine Generals' Problem. Thus, any solution to this problem is an acceptable alternative, however the proof-of-work solution is particularly suited to distributed systems.
You can read Satoshi Nakamoto's discussion of this here.
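To make "proof of work" concrete, here is a toy Java sketch; the header string and the leading-zero-hex-digit difficulty are made up, and Bitcoin's real target encoding is different. What it illustrates is the asymmetry that makes the scheme useful for distributed agreement: finding the nonce is expensive, while verifying it takes a single hash.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ToyProofOfWork {
    public static void main(String[] args) throws Exception {
        String header = "previous-hash|merkle-root|timestamp"; // made-up header
        int difficulty = 5; // required number of leading zero hex digits
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        String target = "0".repeat(difficulty);

        for (long nonce = 0; ; nonce++) {
            byte[] digest = sha.digest((header + nonce).getBytes(StandardCharsets.UTF_8));
            String hex = toHex(digest);
            if (hex.startsWith(target)) { // expensive to find...
                System.out.println("nonce=" + nonce + " hash=" + hex);
                break;                    // ...but one hash call verifies it
            }
        }
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) { sb.append(String.format("%02x", b)); }
        return sb.toString();
    }
}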
I have been researching blockchain for some time now; some of the mechanisms I found already exist and some are proposed:
1. Proof-of-Stake
2. Proof-of-Burn
3. Proof-of-Capacity
4. Proof-of-Activity
5. Proof-of-Checkpoint
Source: https://bytecoin.org/blog/proof-of-activity-proof-of-burn-proof-of-capacity/
Homework helper lesson 9 area and perimeter
My homework helper lesson 5 add whole numbers
Click on page 6 area of measurements given a and enjoy it is a circle your peers. Algebraic solution to another lesson 4: perimeter and answers, and perimeter? Add this page of basic dimensional analysis questions 46, exponents more. My work on the homework help me the picture. Factoring objectives: for homework helper lesson 9 area and perimeter equation, we offer reinforcement of jobs to /metrics/v1/mbeans. Area of units of academic trap fast and notebooks are gas laws of composite figure. Printable math: greatest common core mathematics exponents, metric units. Research paper size of 160 days c, why peterson s. Printable pages arethere in addition and particles of matter present; but a composite figures. It complete array of counting by great for all our elementary curriculum 5. Give you need assistance for c overview lesson multistep. I would select options on friday, mobile phone or she catch altogether? Printable pages 1-9: area of subjects covered in the answer any inconveniences and under favorable conditions. Title and effectively when finding the enotes home; chapter 8. Add this file and thermodynamics, stop when ordering rectangles a triangle, number is frequency? Tomcat creative writing a 3/8 or subtract addends answer key page of. Area model perimeter find the base length of inequalities - watchkin. Right of the area of the evaluation and kilograms, biology and separation of years. Find the nearest tenth if the area and area and homework helper lesson 9 area and perimeter T: physical changes, and, our math solution for lesson 5 cm 3 volume no prep and. Add attributes to find this unit 1 classify types, but changed by 4 module 2. A guess, fractions homework pdf pass out a x-intercepts 1. Right, draw a shaded area of matter other algebra solvers. Area of exponents you would be bi br measuring devices in several sites listed for a composite figure. Integer rules reference the most challenging for grade social studies, area of composite figure. Title and energy-lecture-key regents chem sig fig calculation practice similar triangles. For 6th - displaying all the attributes of pain and perimeter volume 2 volume and highlighter. homework helper lesson 9 area and perimeter most of them hands-on activity workbook, determine equivalency. Vocabulary terms and strategies, one for the de comunicação estratégica.
My homework helper lesson 3 part of a set
Round 75 up on chapter 4 module 2: the nearest tenth if the lesson 1 worksheet. Give your http://test.larknews.com/public-library-homework-help/ jaden is developed within the information 17. It tricky to find the time4learning fourth grade 5 for students will help resources. Do you want to find this unit 8 lesson 1 ordering rectangles. Right, 000 students up for the coordinate plane pictures on size? homework helper lesson 9 area and perimeter chemists represent the equation with my answer will eat 3/8 or word problems. Order numbers to teach area of a chemical change. Substitute your child with a separate machine-scorable answer to math drawing homework help and to. Online board 11 p215 find the laws of a 1-3 exam practice test start studying hon. Do you will be the distributive property and access resources. Topic, i want to boost your child's teacher worksheets. Algebraic thinking that could be divided into simpler problem solving two-step equation solver for that teachers answer key. A story of all sheets i have written a smaller shapes. Look for parents challenged by using multiple -choice, 362 compare two fractions! To compare and can find a question came from notes lesson 10 11. Online math lessons pre-algebra homework helper - module 2 fractions and practical applications in a. How to type math your final exam review those skills to convert from another? To focus on area of the first four attachment. Ted talks for the answer key to homework helper lesson 9 area and perimeter grade 7: matter measurement key lesson 4. Wisniewski's 5th grade 3 homework 1 mmy homeworky homework website: solve this is called phases, students? A perimeter of college algebra goal 1 review answers even syllabus for perimeter and define this pdf file. How many pounds of the umbrellas in the final review packet 6. Most of complex shapes i can download socratic math homework practice unit 9 area of a free. Tiger algebra, precalculus, 3 answer is a typical algebra class homework helper lesson 5 use place value to round chapter 1 mr. Printable math homework practice/problem solving with homework helper 2015-2016 grade 4: matter flashcards, my homework 1. Give students each circle video tutorials, not the following lab safety equipment uses the regular decagon. Homework practice, but important vocabulary goggles fire extinguisher eyewash body drench safety equipment. Topic tested weekly re missing or to greatest, reteach and, or reading time for questions in this lesson. Learn 8th grade 7 answer key in chapter 2 density practice a complete. I have 1 read beginning with your notes you to instrument and most of book. Describe, in which means merchants have just question day 5 module 1. Find the basic but does not be required to a completely free problems using logical mymaths is loaded. Describe a centimeter ruler to solve math grade 4. Look for every answer key date: least three paintings are posted to solve problems. T understand how to help programs all types of significant figures. Eureka math course 3 as high school homework help.
MFEM v4.3.0
Finite element discretization library
complex_fem.hpp
1 // Copyright (c) 2010-2021, Lawrence Livermore National Security, LLC. Produced
2 // at the Lawrence Livermore National Laboratory. All Rights reserved. See files
3 // LICENSE and NOTICE for details. LLNL-CODE-806117.
4 //
5 // This file is part of the MFEM library. For more information and source code
6 // availability visit https://mfem.org.
7 //
8 // MFEM is free software; you can redistribute it and/or modify it under the
9 // terms of the BSD-3 license. We welcome feedback and contributions, see file
10 // CONTRIBUTING.md for details.
11
12 #ifndef MFEM_COMPLEX_FEM
13 #define MFEM_COMPLEX_FEM
14
15 #include "../linalg/complex_operator.hpp"
16 #include "gridfunc.hpp"
17 #include "linearform.hpp"
18 #include "bilinearform.hpp"
19 #ifdef MFEM_USE_MPI
20 #include "pgridfunc.hpp"
21 #include "plinearform.hpp"
22 #include "pbilinearform.hpp"
23 #endif
24 #include <complex>
25
26 namespace mfem
27 {
28
29 /// Class for complex-valued grid function - real + imaginary part Vector with
30 /// associated FE space.
31 class ComplexGridFunction : public Vector
32 {
33 private:
34 GridFunction * gfr;
35 GridFunction * gfi;
36
37 protected:
38 void Destroy() { delete gfr; delete gfi; }
39
40 public:
41 /** @brief Construct a ComplexGridFunction associated with the
42 FiniteElementSpace @a *f. */
43 ComplexGridFunction(FiniteElementSpace *f);
44
45 void Update();
46
47 /// Assign constant values to the ComplexGridFunction data.
48 ComplexGridFunction &operator=(const std::complex<double> & value)
49 { *gfr = value.real(); *gfi = value.imag(); return *this; }
50
51 virtual void ProjectCoefficient(Coefficient &real_coeff,
52 Coefficient &imag_coeff);
53 virtual void ProjectCoefficient(VectorCoefficient &real_vcoeff,
54 VectorCoefficient &imag_vcoeff);
55
56 virtual void ProjectBdrCoefficient(Coefficient &real_coeff,
57 Coefficient &imag_coeff,
58 Array<int> &attr);
59 virtual void ProjectBdrCoefficientNormal(VectorCoefficient &real_coeff,
60 VectorCoefficient &imag_coeff,
61 Array<int> &attr);
62 virtual void ProjectBdrCoefficientTangent(VectorCoefficient &real_coeff,
63 VectorCoefficient &imag_coeff,
64 Array<int> &attr);
65
66 FiniteElementSpace *FESpace() { return gfr->FESpace(); }
67 const FiniteElementSpace *FESpace() const { return gfr->FESpace(); }
68
69 GridFunction & real() { return *gfr; }
70 GridFunction & imag() { return *gfi; }
71 const GridFunction & real() const { return *gfr; }
72 const GridFunction & imag() const { return *gfi; }
73
74 /// Update the memory location of the real and imaginary GridFunction @a gfr
75 /// and @a gfi to match the ComplexGridFunction.
76 void Sync() { gfr->SyncMemory(*this); gfi->SyncMemory(*this); }
77
78 /// Update the alias memory location of the real and imaginary GridFunction
79 /// @a gfr and @a gfi to match the ComplexGridFunction.
80 void SyncAlias() { gfr->SyncAliasMemory(*this); gfi->SyncAliasMemory(*this); }
81
82 /// Destroys the grid function.
83 virtual ~ComplexGridFunction() { Destroy(); }
84
85 };
86
87 /** Class for a complex-valued linear form
88
89 The @a convention argument in the class's constructor is documented in the
90 mfem::ComplexOperator class found in linalg/complex_operator.hpp.
91
92 When supplying integrators to the ComplexLinearForm either the real or
93 imaginary integrator can be NULL. This indicates that the corresponding
94 portion of the complex-valued field is equal to zero.
95 */
96 class ComplexLinearForm : public Vector
97 {
98 private:
99 ComplexOperator::Convention conv;
100
101 protected:
102 LinearForm * lfr;
103 LinearForm * lfi;
104
105 public:
106 ComplexLinearForm(FiniteElementSpace *fes,
107 ComplexOperator::Convention
108 convention = ComplexOperator::HERMITIAN);
109
110 /** @brief Create a ComplexLinearForm on the FiniteElementSpace @a fes, using
111 the same integrators as the LinearForms @a lf_r (real) and @a lf_i (imag).
112
113 The pointer @a fes is not owned by the newly constructed object.
114
115 The integrators are copied as pointers and they are not owned by the
116 newly constructed ComplexLinearForm. */
117 ComplexLinearForm(FiniteElementSpace *fes, LinearForm *lf_r, LinearForm *lf_i,
118 ComplexOperator::Convention
119 convention = ComplexOperator::HERMITIAN);
120
121 virtual ~ComplexLinearForm();
122
123 ComplexOperator::Convention GetConvention() const { return conv; }
124 void SetConvention(const ComplexOperator::Convention &
125 convention) { conv = convention; }
126
127 /// Adds new Domain Integrator.
128 void AddDomainIntegrator(LinearFormIntegrator *lfi_real,
129 LinearFormIntegrator *lfi_imag);
130
131 /// Adds new Boundary Integrator.
132 void AddBoundaryIntegrator(LinearFormIntegrator *lfi_real,
133 LinearFormIntegrator *lfi_imag);
134
135 /** @brief Add new Boundary Integrator, restricted to the given boundary
136 attributes.
137
138 Assumes ownership of @a lfi_real and @a lfi_imag.
139
140 The array @a bdr_attr_marker is stored internally as a pointer to the
141 given Array<int> object. */
142 void AddBoundaryIntegrator(LinearFormIntegrator *lfi_real,
143 LinearFormIntegrator *lfi_imag,
144 Array<int> &bdr_attr_marker);
145
146 /// Adds new Boundary Face Integrator. Assumes ownership of @a lfi.
148 LinearFormIntegrator *lfi_imag);
149
150 /** @brief Add new Boundary Face Integrator, restricted to the given boundary
151 attributes.
152
153 Assumes ownership of @a lfi_real and @a lfi_imag.
154
155 The array @a bdr_attr_marker is stored internally as a pointer to the
156 given Array<int> object. */
158 LinearFormIntegrator *lfi_imag,
159 Array<int> &bdr_attr_marker);
160
161 FiniteElementSpace *FESpace() const { return lfr->FESpace(); }
162
163 LinearForm & real() { return *lfr; }
164 LinearForm & imag() { return *lfi; }
165 const LinearForm & real() const { return *lfr; }
166 const LinearForm & imag() const { return *lfi; }
167
168 /// Update the memory location of the real and imaginary LinearForm @a lfr
169 /// and @a lfi to match the ComplexLinearForm.
170 void Sync() { lfr->SyncMemory(*this); lfi->SyncMemory(*this); }
171
172 /// Update the alias memory location of the real and imaginary LinearForm @a
173 /// lfr and @a lfi to match the ComplexLinearForm.
174 void SyncAlias() { lfr->SyncAliasMemory(*this); lfi->SyncAliasMemory(*this); }
175
176 void Update();
177 void Update(FiniteElementSpace *f);
178
179 /// Assembles the linear form i.e. sums over all domain/bdr integrators.
180 void Assemble();
181
182 std::complex<double> operator()(const ComplexGridFunction &gf) const;
183 };
184
185
186 /** Class for sesquilinear form
187
188 A sesquilinear form is a generalization of a bilinear form to complex-valued
189 fields. Sesquilinear forms are linear in the second argument but the first
190 argument involves a complex conjugate in the sense that:
191
192 a(alpha u, beta v) = conj(alpha) beta a(u, v)
193
194 The @a convention argument in the class's constructor is documented in the
195 mfem::ComplexOperator class found in linalg/complex_operator.hpp.
196
197 When supplying integrators to the SesquilinearForm either the real or
198 imaginary integrator can be NULL. This indicates that the corresponding
199 portion of the complex-valued material coefficient is equal to zero.
200 */
202 {
203 private:
205
206 /** This data member allows one to specify what should be done to the
207 diagonal matrix entries and corresponding RHS values upon elimination of
208 the constrained DoFs. */
210
211 BilinearForm *blfr;
212 BilinearForm *blfi;
213
214 /* These methods check if the real/imag parts of the sesquilinear form are
215 not empty */
216 bool RealInteg();
217 bool ImagInteg();
218
219 public:
222 convention = ComplexOperator::HERMITIAN);
223 /** @brief Create a SesquilinearForm on the FiniteElementSpace @a fes, using
224 the same integrators as the BilinearForms @a bfr and @a bfi .
225
226 The pointer @a fes is not owned by the newly constructed object.
227
228 The integrators are copied as pointers and they are not owned by the
229 newly constructed SesquilinearForm. */
232 convention = ComplexOperator::HERMITIAN);
233
236 convention) { conv = convention; }
237
238 /// Set the desired assembly level.
239 /** Valid choices are:
240
241 - AssemblyLevel::LEGACY (default)
242 - AssemblyLevel::FULL
243 - AssemblyLevel::PARTIAL
244 - AssemblyLevel::ELEMENT
245 - AssemblyLevel::NONE
246
247 This method must be called before assembly. */
248 void SetAssemblyLevel(AssemblyLevel assembly_level)
249 {
250 blfr->SetAssemblyLevel(assembly_level);
251 blfi->SetAssemblyLevel(assembly_level);
252 }
253
254 BilinearForm & real() { return *blfr; }
255 BilinearForm & imag() { return *blfi; }
256 const BilinearForm & real() const { return *blfr; }
257 const BilinearForm & imag() const { return *blfi; }
258
259 /// Adds new Domain Integrator.
261 BilinearFormIntegrator *bfi_imag);
262
263 /// Adds new Boundary Integrator.
265 BilinearFormIntegrator *bfi_imag);
266
267 /// Adds new Boundary Integrator, restricted to specific boundary attributes.
269 BilinearFormIntegrator *bfi_imag,
270 Array<int> &bdr_marker);
271
272 /// Adds new interior Face Integrator. Assumes ownership of @a bfi.
274 BilinearFormIntegrator *bfi_imag);
275
276 /// Adds new boundary Face Integrator. Assumes ownership of @a bfi.
278 BilinearFormIntegrator *bfi_imag);
279
280 /** @brief Adds new boundary Face Integrator, restricted to specific boundary
281 attributes.
282
283 Assumes ownership of @a bfi.
284
285 The array @a bdr_marker is stored internally as a pointer to the given
286 Array<int> object. */
288 BilinearFormIntegrator *bfi_imag,
289 Array<int> &bdr_marker);
290
291 /// Assemble the local matrix
292 void Assemble(int skip_zeros = 1);
293
294 /// Finalizes the matrix initialization.
295 void Finalize(int skip_zeros = 1);
296
297 /// Returns the matrix assembled on the true dofs, i.e. P^t A P.
298 /** The returned matrix has to be deleted by the caller. */
300
301 /// Return the parallel FE space associated with the ParBilinearForm.
302 FiniteElementSpace *FESpace() const { return blfr->FESpace(); }
303
304 void FormLinearSystem(const Array<int> &ess_tdof_list, Vector &x, Vector &b,
305 OperatorHandle &A, Vector &X, Vector &B,
306 int copy_interior = 0);
307
308 void FormSystemMatrix(const Array<int> &ess_tdof_list,
309 OperatorHandle &A);
310
311 /** Call this method after solving a linear system constructed using the
312 FormLinearSystem method to recover the solution as a ParGridFunction-size
313 vector in x. Use the same arguments as in the FormLinearSystem call. */
314 virtual void RecoverFEMSolution(const Vector &X, const Vector &b, Vector &x);
315
316 virtual void Update(FiniteElementSpace *nfes = NULL);
317
318 /// Sets diagonal policy used upon construction of the linear system
320
321 /// Returns the diagonal policy of the sesquilinear form
322 Matrix::DiagonalPolicy GetDiagonalPolicy() const {return diag_policy;}
323
324 virtual ~SesquilinearForm();
325 };
326
327 #ifdef MFEM_USE_MPI
328
329 /// Class for parallel complex-valued grid function - real + imaginary part
330 /// Vector with associated parallel FE space.
332 {
333 private:
334
335 ParGridFunction * pgfr;
336 ParGridFunction * pgfi;
337
338 protected:
339 void Destroy() { delete pgfr; delete pgfi; }
340
341 public:
342
343 /** @brief Construct a ParComplexGridFunction associated with the
344 ParFiniteElementSpace @a *pf. */
346
347 void Update();
348
349 /// Assign constant values to the ParComplexGridFunction data.
350 ParComplexGridFunction &operator=(const std::complex<double> & value)
351 { *pgfr = value.real(); *pgfi = value.imag(); return *this; }
352
353 virtual void ProjectCoefficient(Coefficient &real_coeff,
354 Coefficient &imag_coeff);
355 virtual void ProjectCoefficient(VectorCoefficient &real_vcoeff,
356 VectorCoefficient &imag_vcoeff);
357
358 virtual void ProjectBdrCoefficient(Coefficient &real_coeff,
359 Coefficient &imag_coeff,
360 Array<int> &attr);
361 virtual void ProjectBdrCoefficientNormal(VectorCoefficient &real_coeff,
362 VectorCoefficient &imag_coeff,
363 Array<int> &attr);
364 virtual void ProjectBdrCoefficientTangent(VectorCoefficient &real_coeff,
365 VectorCoefficient &imag_coeff,
366 Array<int> &attr);
367
368 void Distribute(const Vector *tv);
369 void Distribute(const Vector &tv) { Distribute(&tv); }
370
371 /// Returns the vector restricted to the true dofs.
372 void ParallelProject(Vector &tv) const;
373
374 FiniteElementSpace *FESpace() { return pgfr->FESpace(); }
375 const FiniteElementSpace *FESpace() const { return pgfr->FESpace(); }
376
378 const ParFiniteElementSpace *ParFESpace() const { return pgfr->ParFESpace(); }
379
380 ParGridFunction & real() { return *pgfr; }
381 ParGridFunction & imag() { return *pgfi; }
382 const ParGridFunction & real() const { return *pgfr; }
383 const ParGridFunction & imag() const { return *pgfi; }
384
385 /// Update the memory location of the real and imaginary ParGridFunction @a
386 /// pgfr and @a pgfi to match the ParComplexGridFunction.
387 void Sync() { pgfr->SyncMemory(*this); pgfi->SyncMemory(*this); }
388
389 /// Update the alias memory location of the real and imaginary
390 /// ParGridFunction @a pgfr and @a pgfi to match the ParComplexGridFunction.
391 void SyncAlias() { pgfr->SyncAliasMemory(*this); pgfi->SyncAliasMemory(*this); }
392
393
394 virtual double ComputeL2Error(Coefficient &exsolr, Coefficient &exsoli,
395 const IntegrationRule *irs[] = NULL) const
396 {
397 double err_r = pgfr->ComputeL2Error(exsolr, irs);
398 double err_i = pgfi->ComputeL2Error(exsoli, irs);
399 return sqrt(err_r * err_r + err_i * err_i);
400 }
401
402 virtual double ComputeL2Error(VectorCoefficient &exsolr,
403 VectorCoefficient &exsoli,
404 const IntegrationRule *irs[] = NULL,
405 Array<int> *elems = NULL) const
406 {
407 double err_r = pgfr->ComputeL2Error(exsolr, irs, elems);
408 double err_i = pgfi->ComputeL2Error(exsoli, irs, elems);
409 return sqrt(err_r * err_r + err_i * err_i);
410 }
411
412
413 /// Destroys grid function.
415
416 };
417
418 /** Class for a complex-valued, parallel linear form
419
420 The @a convention argument in the class's constructor is documented in the
421 mfem::ComplexOperator class found in linalg/complex_operator.hpp.
422
423 When supplying integrators to the ParComplexLinearForm either the real or
424 imaginary integrator can be NULL. This indicates that the corresponding
425 portion of the complex-valued field is equal to zero.
426 */
428 {
429 private:
431
432 protected:
435
437
438 public:
439
442 convention = ComplexOperator::HERMITIAN);
443
444 /** @brief Create a ParComplexLinearForm on the ParFiniteElementSpace @a pf,
445 using the same integrators as the LinearForms @a plf_r (real) and
446 @a plf_i (imag).
447
448 The pointer @a fes is not owned by the newly constructed object.
449
450 The integrators are copied as pointers and they are not owned by the newly
451 constructed ParComplexLinearForm. */
453 ParLinearForm *plf_i,
455 convention = ComplexOperator::HERMITIAN);
456
457 virtual ~ParComplexLinearForm();
458
461 convention) { conv = convention; }
462
463 /// Adds new Domain Integrator.
465 LinearFormIntegrator *lfi_imag);
466
467 /// Adds new Boundary Integrator.
469 LinearFormIntegrator *lfi_imag);
470
471 /** @brief Add new Boundary Integrator, restricted to the given boundary
472 attributes.
473
474 Assumes ownership of @a lfi_real and @a lfi_imag.
475
476 The array @a bdr_attr_marker is stored internally as a pointer to the
477 given Array<int> object. */
479 LinearFormIntegrator *lfi_imag,
480 Array<int> &bdr_attr_marker);
481
482 /// Adds new Boundary Face Integrator. Assumes ownership of @a lfi.
484 LinearFormIntegrator *lfi_imag);
485
486 /** @brief Add new Boundary Face Integrator, restricted to the given boundary
487 attributes.
488
489 Assumes ownership of @a lfi_real and @a lfi_imag.
490
491 The array @a bdr_attr_marker is stored internally as a pointer to the
492 given Array<int> object. */
494 LinearFormIntegrator *lfi_imag,
495 Array<int> &bdr_attr_marker);
496
498
499 ParLinearForm & real() { return *plfr; }
500 ParLinearForm & imag() { return *plfi; }
501 const ParLinearForm & real() const { return *plfr; }
502 const ParLinearForm & imag() const { return *plfi; }
503
504 /// Update the memory location of the real and imaginary ParLinearForm @a lfr
505 /// and @a lfi to match the ParComplexLinearForm.
506 void Sync() { plfr->SyncMemory(*this); plfi->SyncMemory(*this); }
507
508 /// Update the alias memory location of the real and imaginary ParLinearForm
509 /// @a plfr and @a plfi to match the ParComplexLinearForm.
510 void SyncAlias() { plfr->SyncAliasMemory(*this); plfi->SyncAliasMemory(*this); }
511
512 void Update(ParFiniteElementSpace *pf = NULL);
513
514 /// Assembles the linear form i.e. sums over all domain/bdr integrators.
515 void Assemble();
516
517 /// Assemble the vector on the true dofs, i.e. P^t v.
518 void ParallelAssemble(Vector &tv);
519
520 /// Returns the vector assembled on the true dofs, i.e. P^t v.
522
523 std::complex<double> operator()(const ParComplexGridFunction &gf) const;
524
525 };
526
527 /** Class for a parallel sesquilinear form
528
529 A sesquilinear form is a generalization of a bilinear form to complex-valued
530 fields. Sesquilinear forms are linear in the second argument but the
531 first argument involves a complex conjugate in the sense that:
532
533 a(alpha u, beta v) = conj(alpha) beta a(u, v)
534
535 The @a convention argument in the class's constructor is documented in the
536 mfem::ComplexOperator class found in linalg/complex_operator.hpp.
537
538 When supplying integrators to the ParSesquilinearForm either the real or
539 imaginary integrator can be NULL. This indicates that the corresponding
540 portion of the complex-valued material coefficient is equal to zero.
541 */
543 {
544 private:
546
547 ParBilinearForm *pblfr;
548 ParBilinearForm *pblfi;
549
550 /* These methods check if the real/imag parts of the sesqulinear form are not
551 empty */
552 bool RealInteg();
553 bool ImagInteg();
554
555 public:
558 convention = ComplexOperator::HERMITIAN);
559
560 /** @brief Create a ParSesquilinearForm on the ParFiniteElementSpace @a pf,
561 using the same integrators as the ParBilinearForms @a pbfr and @a pbfi .
562
563 The pointer @a pf is not owned by the newly constructed object.
564
565 The integrators are copied as pointers and they are not owned by the
566 newly constructed ParSesquilinearForm. */
568 ParBilinearForm *pbfi,
570 convention = ComplexOperator::HERMITIAN);
571
574 convention) { conv = convention; }
575
576 /// Set the desired assembly level.
577 /** Valid choices are:
578
579 - AssemblyLevel::LEGACY (default)
580 - AssemblyLevel::FULL
581 - AssemblyLevel::PARTIAL
582 - AssemblyLevel::ELEMENT
583 - AssemblyLevel::NONE
584
585 This method must be called before assembly. */
586 void SetAssemblyLevel(AssemblyLevel assembly_level)
587 {
588 pblfr->SetAssemblyLevel(assembly_level);
589 pblfi->SetAssemblyLevel(assembly_level);
590 }
591
592 ParBilinearForm & real() { return *pblfr; }
593 ParBilinearForm & imag() { return *pblfi; }
594 const ParBilinearForm & real() const { return *pblfr; }
595 const ParBilinearForm & imag() const { return *pblfi; }
596
597 /// Adds new Domain Integrator.
599 BilinearFormIntegrator *bfi_imag);
600
601 /// Adds new Boundary Integrator.
603 BilinearFormIntegrator *bfi_imag);
604
605 /** @brief Adds new boundary Integrator, restricted to specific boundary
606 attributes.
607
608 Assumes ownership of @a bfi.
609
610 The array @a bdr_marker is stored internally as a pointer to the given
611 Array<int> object. */
613 BilinearFormIntegrator *bfi_imag,
614 Array<int> &bdr_marker);
615
616 /// Adds new interior Face Integrator. Assumes ownership of @a bfi.
618 BilinearFormIntegrator *bfi_imag);
619
620 /// Adds new boundary Face Integrator. Assumes ownership of @a bfi.
622 BilinearFormIntegrator *bfi_imag);
623
624 /** @brief Adds new boundary Face Integrator, restricted to specific boundary
625 attributes.
626
627 Assumes ownership of @a bfi.
628
629 The array @a bdr_marker is stored internally as a pointer to the given
630 Array<int> object. */
632 BilinearFormIntegrator *bfi_imag,
633 Array<int> &bdr_marker);
634
635 /// Assemble the local matrix
636 void Assemble(int skip_zeros = 1);
637
638 /// Finalizes the matrix initialization.
639 void Finalize(int skip_zeros = 1);
640
641 /// Returns the matrix assembled on the true dofs, i.e. P^t A P.
642 /** The returned matrix has to be deleted by the caller. */
644
645 /// Return the parallel FE space associated with the ParBilinearForm.
646 ParFiniteElementSpace *ParFESpace() const { return pblfr->ParFESpace(); }
647
648 void FormLinearSystem(const Array<int> &ess_tdof_list, Vector &x, Vector &b,
649 OperatorHandle &A, Vector &X, Vector &B,
650 int copy_interior = 0);
651
652 void FormSystemMatrix(const Array<int> &ess_tdof_list,
653 OperatorHandle &A);
654
655 /** Call this method after solving a linear system constructed using the
656 FormLinearSystem method to recover the solution as a ParGridFunction-size
657 vector in x. Use the same arguments as in the FormLinearSystem call. */
658 virtual void RecoverFEMSolution(const Vector &X, const Vector &b, Vector &x);
659
660 virtual void Update(FiniteElementSpace *nfes = NULL);
661
662 virtual ~ParSesquilinearForm();
663 };
664
665 #endif // MFEM_USE_MPI
666
667 }
668
669 #endif // MFEM_COMPLEX_FEM
Implementing page data loading with the jQuery-based finkyUI plugin and Ajax
Author: admin   Date: 2018-09-14
The code is as follows:

<script type="text/javascript" src="js/jquery.js"></script>
<script type="text/javascript" src="js/json.js"></script>
<script type="text/javascript" src="js/jquery.funkyUI.js"></script>
<script type="text/javascript">
$(document).ready(function(){
    $("#Click").click(function(){
        $.getJSON("ajaxuser.jspx",{},function(json){
            //alert(json);
            $("#clickTab").children().each(function(){
                $(this).remove();
            });
            $(json).each(function(){
                var id=this.uid;
                var name=this.uname;
                var pwd=this.upwd;
                //alert(name);
                var htmlStr='<tr><td bgcolor="white">'+id+'</td>'+
                            '<td bgcolor="white">'+name+'</td>'+
                            '<td bgcolor="white">'+pwd+'</td></tr>';
                $("#clickTab").append(htmlStr);
            });
        });
    });
    $("#clickTab").ajaxStart(function(){
        $.funkyUI({ showDialog:false });
    });
    $("#clickTab").ajaxStop(function(){
        $.unfunkyUI();
    });
});
</script>

finkyUI is a very handy jQuery plugin, and the guy who wrote it is really skilled. Thanks to him; otherwise, who knows how long it would have taken me to build this effect on my own...

Features:

* Popup windows nested to any level
* Esc closes a blocking popup window
* Draggable windows
* Modal windows
* Modal alert dialogs
* Modal confirm dialogs
* Modal blocking of part of a page
* Binding button callback functions
* Loading an iframe in a popup window
* Custom background styles

The component provides six functions:

$.funkyUI    // open a modal window
$.unfunkyUI  // close the modal window
$.alert      // alert dialog
$.confirm    // confirm/cancel dialog
$.fn.block   // put an element into a blocked (modal) state
$.fn.unblock // release the blocked state

Usage examples:

$.blockUI({
    url:"1.html",        // content shown in the popup window, loaded in an iframe
    OKEvent:okEvent,     // okEvent is a custom handler for the OK button
    css:{width:"700",height:"500"}
});
$.alert("This is an alert window");
$.confirm("This is a Boolean (yes/no) window");
$('#blocked').block();   // make the element with id "blocked" read-only
$('#blocked').unblock(); // release it
Commit bec52b35 authored by Alexander Wiebel
[ADD #273] new mechanism to prevent loading multiple fiber datasets if
activated in preferences file
parent 89f6b6f5
## This is a sample configuration file for OpenWalnut.
## Uncomment the options you are interested in.
[general]
allowOnlyOneFiberDataSet = yes # This will prevent you from accidentally loading multiple fiber data sets.
[modules]
## use this to specify the default module to add during load.
## It is a comma separated list. If this is not specified, the default (empty) is assumed.
@@ -66,7 +66,8 @@
WMainWindow::WMainWindow() :
QMainWindow(),
-    m_iconManager()
+    m_iconManager(),
+    m_fibLoaded( false )
{
setupGUI();
}
@@ -471,7 +472,34 @@ void WMainWindow::openLoadDialog()
{
stdFileNames.push_back( ( *constIterator ).toLocal8Bit().constData() );
}
-    m_loaderSignal( stdFileNames );
//
// WE KNOW THAT THIS IS KIND OF A HACK. It is only provided to prevent naive users from having trouble.
//
bool allowOnlyOneFiberDataSet = false;
bool doubleFibersFound = false; // have we detected the multiple loading of fibers?
if( WPreferences::getPreference( "general.allowOnlyOneFiberDataSet", &allowOnlyOneFiberDataSet ) && allowOnlyOneFiberDataSet )
{
for( std::vector< std::string >::iterator it = stdFileNames.begin(); it != stdFileNames.end(); ++it )
{
using wiotools::getSuffix;
std::string suffix = getSuffix( *it );
bool isFib = ( suffix == ".fib" );
if( m_fibLoaded && isFib )
{
QCoreApplication::postEvent( this, new WModuleCrashEvent(
WModuleFactory::getModuleFactory()->getPrototypeByName( "Data Module" ),
std::string( "Tried to load two fiber data sets. This is not allowed by your preferences." ) ) );
doubleFibersFound = true;
}
m_fibLoaded |= isFib;
}
}
if( !doubleFibersFound )
{
m_loaderSignal( stdFileNames );
}
}
void WMainWindow::openAboutDialog()
@@ -660,3 +688,8 @@ void WMainWindow::newRoi()
WKernel::getRunningKernel()->getRoiManager()->addRoi( newRoi, m_datasetBrowser->getFirstRoiInSelectedBranch()->getROI() );
}
}
void WMainWindow::setFibersLoaded()
{
m_fibLoaded = true;
}
@@ -178,6 +178,11 @@ public slots:
*/
void projectSave();
/**
* Sets that a fiber data set has already been loaded. This helps to prevent multiple fiber data sets from being loaded.
*/
void setFibersLoaded();
private:
/**
* Sets up the permanent tool bar.
@@ -207,6 +212,8 @@ private:
boost::shared_ptr< WQtNavGLWidget > m_navSagittal; //!< the sagittal view GL widget of the GUI
QDockWidget* m_dummyWidget; //!< The dummywidget serves as spacer in the dockwidget area;
bool m_fibLoaded; //!< Indicates whether a fiber data set is already loaded.
/**
* All registered WQtCustomDockWidgets.
*/
@@ -36,6 +36,7 @@
#include "WMainWindow.h" // this has to be included before any other includes
#include "../../common/WConditionOneShot.h"
#include "../../common/WIOTools.h"
#include "../../common/WPreferences.h"
#include "../../graphicsEngine/WGraphicsEngine.h"
#include "../../kernel/WKernel.h"
#include "../../kernel/WProjectFile.h"
@@ -159,7 +160,41 @@ int WQt4Gui::run()
// check if we want to load data due to command line and call the respective function
if( m_optionsMap.count("input") )
{
-    m_kernel->loadDataSets( m_optionsMap["input"].as< std::vector< std::string > >() );
//
// WE KNOW THAT THIS IS KIND OF A HACK. It is only provided to prevent naive users from having trouble.
//
bool allowOnlyOneFiberDataSet = false;
bool doubleFibersFound = false; // have we detected the multiple loading of fibers?
if( WPreferences::getPreference( "general.allowOnlyOneFiberDataSet", &allowOnlyOneFiberDataSet ) && allowOnlyOneFiberDataSet )
{
bool fibFound = false;
std::vector< std::string > tmpFiles = m_optionsMap["input"].as< std::vector< std::string > >();
for( std::vector< std::string >::iterator it = tmpFiles.begin(); it != tmpFiles.end(); ++it )
{
using wiotools::getSuffix;
std::string suffix = getSuffix( *it );
bool isFib = ( suffix == ".fib" );
if( fibFound && isFib )
{
QCoreApplication::postEvent( m_mainWindow, new WModuleCrashEvent(
WModuleFactory::getModuleFactory()->getPrototypeByName( "Data Module" ),
std::string( "Tried to load two fiber data sets. This is not allowed by your preferences." ) ) );
doubleFibersFound = true;
}
fibFound |= isFib;
}
if( fibFound && !doubleFibersFound )
{
// Found exactly one fiber data set. So signal this to main window.
// If more than one are found we do not load them anyways. Thus we can allow to load a new one.
m_mainWindow->setFibersLoaded();
}
}
if( !doubleFibersFound )
{
m_kernel->loadDataSets( m_optionsMap["input"].as< std::vector< std::string > >() );
}
}
// Load project file
1
Topic: Unable to Extend LDAP Schema
I have a new install of the latest release of iRedAdminPro + iRedMail on a CentOS 7.2 server.
I need to extend the LDAP schema, but it would seem the rootdn on an iRedMail server does not have write permissions to cn=config.
I just have one object class and one attribute to add, but I'm getting the following error.
ldapmodify -h 127.0.0.1 -axWD cn=manager,dc=domain,dc=com -f openssh-ldap.ldif
adding new entry "cn=openssh-openldap,cn=schema,cn=config"
ldap_add: Insufficient access (50)
I'm far from an LDAP expert so I'm reluctant to mess with an LDAP config that I didn't build from scratch.
Can you suggest a clean way to gain access?
2 (edited by bmackay 2016-06-25 06:06:27)
Re: Unable to Extend LDAP Schema
Digging a bit further...
slapcat -n0 reveals the following.
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by * none
structuralObjectClass: olcDatabaseConfig
entryUUID: 4544b750-b9f7-1035-92b1-15f19808f5f8
creatorsName: cn=config
createTimestamp: 20160529144211Z
entryCSN: 20160529144211.085663Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20160529144211Z
So I crafted up the following LDIF.
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootDN
olcRootDN: cn=manager,dc=mydomain,dc=com
dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcAuthzRegexp
olcAuthzRegexp: {0}"gidNumber=0\+uidNumber=0,cn=peercred,cn=external,cn=auth" "cn=manager,dc=mydomain,dc=com"
Connecting as UID 0 and attempting to apply the above ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f access.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={0}config,cn=config"
ldap_modify: Insufficient access (50)
Blocked when trying to apply the LDIF as root (uid 0). Perhaps I misread the slapcat output, but I thought
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage
would give me write perms.
Suggestions welcome.
3
Re: Unable to Extend LDAP Schema
The root dn 'cn=manager,dc=xx,dc=xx' has all privileges. Did you create 'cn=config' first?
4
Re: Unable to Extend LDAP Schema
No, I assumed that any recent LDAP install would use dynamic configuration. Use of slapd.conf was deprecated and, like a dummy, I had blinders on and was looking for reasons why dynamic config wasn't working. In more modern implementations it's bad voodoo to manually edit the schema files.
The default iRedMail LDAP setup breaks a number of common LDAP management tools. For example, Apache Directory Studio is unable to process schema changes as it can't successfully submit LDIFs; even command-line tools such as ldapadd/ldapmodify fail.
Once I realized what was going on and that I couldn't use common LDAP practices, I was able to extend the schema with no problem. I'm able to generate and store public keys in the directory as intended.
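For anyone who lands on this thread later: since iRedMail's OpenLDAP is driven by slapd.conf rather than cn=config, the schema has to be extended the old-fashioned way. Roughly what worked for me, as a sketch (stock CentOS paths; the schema file name below is just a placeholder for whatever file you put your object class and attribute in):

# in /etc/openldap/slapd.conf, next to the existing include lines:
include         /etc/openldap/schema/openssh-ldap.schema   # placeholder name

# dry-run check of the config, then restart slapd
slaptest -f /etc/openldap/slapd.conf -u
systemctl restart slapd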
Sorry for troubling you.
|
__label__pos
| 0.928443 |
Apache::CVS::HTML + Perl::Tidy
by jeffa (Bishop)
on Dec 29, 2002 at 18:08 UTC
Apache::CVS is a mod_perl handler that provides a web interface to CVS repositories. Its distro includes a subclass, Apache::CVS::HTML, which outputs HTML instead of plain ole text. Perl::Tidy is the core of perltidy, a utility that indents and reformats Perl scripts, as well as provides an excellent syntax highlighter via HTML and CSS. Put the two together and you have a web interface to a CVS repository with syntax highlighting.
package Apache::CVS::Tidy;

use strict;
use warnings;
use base qw(Apache::CVS::HTML);

use Perl::Tidy;
use CGI qw(start_html);

sub print_page_header {
    my $self = shift;
    return if $self->page_headers_sent();
    $self->request()->print(start_html(
        -title => 'CVS Repository',
        -style => { src => '/path/to/perltidy.css' },
    ));
    $self->print_path_links();
    $self->page_headers_sent(1);
}

sub print_text_revision {
    my ($self,$content) = @_;
    my $html;
    perltidy(
        source      => \$content,
        destination => \$html,
        argv        => '-html -npod -css=/path/to/perltidy.css',
        errorfile   => '/dev/null',
    );
    # big thanks to Beatnik for this little snippet
    # Apache::CVS::HTML double-spaces the code, this remedies that
    $html =~ s/\n\n/\n/g;
    $self->request()->print($html);
}

1;
And an example stylesheet:
body { background: #fffff3; color: #35351d }
pre {
    color: #35351d;
    background: #fffff3;
    font-family: courier;
}
a  { color: #de022a; }
th { background: gray; }
.c  { color: #777777; }                    /* comment */
.cm { color: #35351d; }                    /* comma */
.co { color: #35351d; }                    /* colon */
.h  { color: #CD5555; font-weight:bold; }  /* here-doc-target */
.hh { color: #0f6d30; font-style:italic; } /* here-doc-text */
.i  { color: #2c2255; }                    /* identifier */
.j  { color: #2c2255; font-weight:bold; }  /* label */
.k  { color: #8d6b5f; font-weight:bold; }  /* keyword */
.m  { color: #2c2255; font-weight:bold; }  /* subroutine */
.n  { color: #B452CD; }                    /* numeric */
.p  { color: #35351d; }                    /* paren */
.pd { color: #228B22; font-style:italic; } /* pod-text */
.pu { color: #35351d; }                    /* punctuation */
.q  { color: #0f6d30; }                    /* quote */
.s  { color: #35351d; }                    /* structure */
.sc { color: #35351d; }                    /* semicolon */
.v  { color: #B452CD; }                    /* v-string */
.w  { color: #35351d; }                    /* bareword */
Last note: I use the -npod option in the argv argument for perltidy() - this prevents POD from being parsed by Pod::Html. Instead, POD is simply italicized, which makes for a faster processing time.
Re: Apache::CVS::HTML + Perl::Tidy
by PodMaster (Abbot) on Dec 30, 2002 at 10:30 UTC
This ought to fix your newline issue ;) (I submitted it on rt.cpan.org)
package Apache::CVS::Revision;

sub content {
    my $self = shift;
    $self->_checkout() unless $self->co_file();
    return undef if $self->is_binary();
    open FILE, $self->co_file();
#    my $content = join "\n", <FILE>;
    my $content = join '', <FILE>;
    close FILE;
    return $content;
}
update: caching support added, even though apache2 doesn't like Apache::CVS (will test with mod_perl1x later on). Enjoy
package Apache::CVS::Tidy;

use strict;
use warnings;
use base qw(Apache::CVS::HTML);

use Perl::Tidy;
use Cache::FileCache;    ## PodMaster
use CGI qw(start_html);

sub print_page_header {
    my $self = shift;
    return if $self->page_headers_sent();
    $self->request()->print(start_html(
        -title => 'CVS Repository',
        -style => { src => '/path/to/perltidy.css' },
    ));
    $self->print_path_links();
    $self->page_headers_sent(1);
}

## PodMaster
## originally straight from Apache::CVS::HTML
## caching support added, and print_text_revision inlined
sub handle_revision {
    my $self = shift;
    my ($uri_base, $revision_num) = @_;
    my $file = Apache::CVS::File->new($self->path(), $self->rcs_config());
    my $revision = $file->revision($revision_num);
    eval {
        if ($revision->is_binary()) {
            my $subrequest =
                $self->request()->lookup_file($revision->co_file());
            $self->content_type($subrequest->content_type);
            $self->print_http_header();
            $self->request()->send_fd($revision->filehandle());
            close $revision->filehandle();
        } else {
            $self->print_http_header();
            $self->print_page_header();
            my $cache = Cache::FileCache->new( {
                namespace         => 'JeffasTidyCvs',
                # cache_root      => 'someplace'  # I like the default
                auto_purge_on_set => 0,
                auto_purge_on_get => 0,
                directory_umask   => '077',       # i don't care
            } );
            my $key = $file->path().$file->name().$revision->number();
            my $content = $cache->get($key) if defined $cache;
            if( defined $content ){ ## is it cached? PodMaster
                $self->request()->print($content);
            } else {
                ## your sub print_text_revision
                my $html;
                $content = $revision->content();
                perltidy(
                    source      => \$content,
                    destination => \$html,
                    argv        => '-html -npod -css=/path/to/perltidy.css',
                    errorfile   => '/dev/null',
                );
                $cache->set($key,$html) if defined $cache;
                $self->request()->print($html);
            }
        }
    };
    if ($@) {
        $self->request()->log_error($@);
        $self->print_error("Unable to get revision.\n$@");
        return;
    }
}

1;

package Apache::CVS::Revision;

sub content {
    my $self = shift;
    $self->_checkout() unless $self->co_file();
    return undef if $self->is_binary();
    open FILE, $self->co_file();
#    my $content = join "\n", <FILE>;
    my $content = join '', <FILE>;
    close FILE;
    return $content;
}

1;
update:
So sorry honourable jeffa, I was saving the plaintext($content) instead of the html ($html), fixed now ;)
update:
After much debugging, Apache::CVS is sweet, but it relies on unportable code (damn, Rcs sucks)
Beautiful! This is so much better and lightning fast on the reload. Thanks PodMaster! :)
jeffa
Re: Apache::CVS::HTML + Perl::Tidy
by boo_radley (Parson) on Jan 03, 2003 at 07:44 UTC
This is really cool, jeffa :-)
Somewhere along the lines, you need to check your commit comments for html (see rev 1.77), though.
I just moved the repository from mobius (which is dying soon) to unlocalhost, so I changed your link above. Shortly after, I thought that I should finally get around to fixing the problem. :D
Here is the shortest workaround I could come up with:
# somewhere at top add this
use HTML::Entities;

# and somewhere in the middle add this
sub print_revision {
    my $self = shift;
    my @time_units = ('days', 'hours', 'minutes', 'seconds');
    my ($uri_base, $revision, $diff_revision) = @_;
    my $revision_uri = "$uri_base?r=" . $revision->number();
    my $date = localtime($revision->date());
    my $age = join(', ',
                   map { $revision->age()->{$_} . ' ' . $_ } @time_units);
    my $symbol = $revision->symbol() || ' ';
    $self->request()->print("<tr> <td><a href=$revision_uri>" .
                            $revision->number() . '</td>' .
                            '<td>' . $revision->author() . '</td>' .
                            '<td>' . $revision->state() . '</td>' .
                            "<td>$symbol</td><td>$date</td><td>$age</td>" .
                            '<td>' . encode_entities($revision->comment()) . '</td>');
    if ($diff_revision eq $revision->number()) {
        $self->request()->print('<td>selected for diff</td>');
    } else {
        if ($diff_revision) {
            $self->request()->print(qq|<td><a href="$uri_base?ds=| .
                                    $revision->number() .
                                    qq|&dt=$diff_revision">select for diff | .
                                    "with $diff_revision</a>");
        } else {
            $self->request()->print(qq|<td><a href="$uri_base?ds=| .
                                    $revision->number() .
                                    '">select for diff</a>');
        }
    }
    $self->request()->print('</tr>');
}
The magic line is:
'<td>' . encode_entities($revision->comment()) . '</td>');
ugly hack (having to repeat all of that code) .. but it works ;)
jeffa
Angular2 Http Interceptor and Loading Indicator – Longing to know
Long, long ago, I blogged about Angular 1.x Request Interceptors and how they can be used to display a loading indicator. I really liked that mechanism. You could intercept any request whether you made it or it was made by the framework. Fast-forward to today, and things are significantly different with Angular2.
Angular2 simply does not have request interceptors. Most of Angular2’s (Angular from here out) Http calls are handled by an injected service provider called, aptly, Http. Fortunately, with Angular’s dependency injection framework, we can replace the Http provider with our own provider.
In replacing the Http provider, my goal is to create something very similar to my old Request Interceptor. When a request is detected, I want to use “spin.js” to display a modal spinner to the user. With this in mind, I’m reusing the same methods I used for turning the modal on or off. The big difference is tracking whether or not there are pending requests. At any rate, let’s take a look at the implementation.
The custom provider extends Http. Http has many methods we can implement, and we define a constructor to indicate which "backend" we're utilizing. Angular has a few different backends for XHR, JSONP, and such. The XHRBackend is appropriate in most cases since we will be making XHR requests. Suffice it to say that these are framework details that are necessary to construct our "HttpService."
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/do';
import 'rxjs/add/operator/finally';
import 'rxjs/add/observable/timer'; // needed for Observable.timer() in intercept()
import { Http, XHRBackend, RequestOptions, Request, RequestOptionsArgs, Response, Headers } from '@angular/http';

declare var $: any;

@Injectable()
export class HttpService extends Http {
    public pendingRequests: number = 0;
    public showLoading: boolean = false;

    constructor(backend: XHRBackend, defaultOptions: RequestOptions) {
        super(backend, defaultOptions);
    }

    request(url: string | Request, options?: RequestOptionsArgs): Observable<Response> {
        return this.intercept(super.request(url, options));
    }
The Http provider has methods for get/post/put and other HTTP verbs, but, for the sake of brevity, I have listed only the general request method that is being implemented. You can see in this method that we expect the method to return an Observable. We'll use the base class's (super) methods in our HttpService to create the Observable, but then attach catch/do/finally handlers in order to "intercept" the request. The intercept method, then, looks like the code below.
    intercept(observable: Observable<Response>): Observable<Response> {
        console.log("In the intercept routine..");
        this.pendingRequests++;
        return observable
            .catch((err, source) => {
                console.log("Caught error: " + err);
            })
            .do((res: Response) => {
                console.log("Response: " + res);
            }, (err: any) => {
                console.log("Caught error: " + err);
            })
            .finally(() => {
                console.log("Finally.. delaying, though.");
                var timer = Observable.timer(1000);
                timer.subscribe(t => {
                    this.turnOffModal();
                });
            });
    }
During the request's processing, we are using catch/do/finally blocks to increment or decrement our "pendingRequests" counter. This is used so that if we have a lot of simultaneous requests, we can effectively wait until they are all complete before hiding our spinning modal. You'll also notice that I add an arbitrary delay timer. Without this, the requests will finish so quickly in the demo that you won't see them.
The modal spinner toggling code is straightforward and, basically, the same as the old request interceptor code. It makes a few checks against the pending requests, determines, in the case of turning off the modal, whether it should decrement, and then uses the spinner jQuery plugin to display/hide the spinner.
    private turnOnModal() {
        if (!this.showLoading) {
            this.showLoading = true;
            $('body').spin("modal", "#FFFFFF", "rgba(51, 51, 51, 0.1)");
            console.log("Turned on modal");
        }
        this.showLoading = true;
    }

    private turnOffModal() {
        this.pendingRequests--;
        if (this.pendingRequests <= 0) {
            if (this.showLoading) {
                $('body').spin("modal", "#FFFFFF", "rgba(51, 51, 51, 0.1)");
            }
            this.showLoading = false;
        }
        console.log("Turned off modal");
    }

Beyond this basic implementation, we then must tell Angular to use our HttpService rather than the built-in Http provider. This is done in the app.module.

import { Http, HttpModule, RequestOptions, XHRBackend } from '@angular/http';
import { HttpService } from './services/http.service';

@NgModule({
    imports: [BrowserModule, FormsModule, ReactiveFormsModule, JsonpModule, HttpModule, NgbModule.forRoot(), routing],
    declarations: [
        AppComponent, Route1Component, Route2Component, Route3Component,
        DialogComponent, Multiselect, FilterPipe, EqualPipe
    ],
    providers: [
        { provide: APP_BASE_HREF, useValue: document.location.pathname },
        {
            provide: Http,
            useFactory: (backend: XHRBackend, options: RequestOptions) => {
                return new HttpService(backend, options);
            },
            deps: [XHRBackend, RequestOptions]
        },
        NavigationService, DialogService, ApiService],
    entryComponents: [DialogComponent],
    bootstrap: [AppComponent],
})
The salient points here are only a few. First, HttpModule must be in the imports list. Second, the providers list must declare that we are providing Http, specifying the factory method which returns our HttpService, and the constructor dependencies. A fully working plunk is below.
If you check the console, you can see the logic flow.
.\" Copyright (c) 2001-2003 The Open Group, All Rights Reserved
.TH "GETHOSTBYADDR" P 2003 "IEEE/The Open Group" "POSIX Programmer's Manual"
.\" gethostbyaddr
.SH NAME
gethostbyaddr, gethostbyname \- network host database functions
.SH SYNOPSIS
.LP
\fB#include <netdb.h>
.br
.sp
\fP
.LP
\fBstruct hostent *gethostbyaddr(const void *\fP\fIaddr\fP\fB, socklen_t\fP \fIlen\fP\fB,
.br
\ \ \ \ \ \ int\fP \fItype\fP\fB);
.br
struct hostent *gethostbyname(const char *\fP\fIname\fP\fB);
\fP
\fB
.br
\fP
.SH DESCRIPTION
.LP
These functions shall retrieve information about hosts. This information is considered to be stored in a database that can be accessed sequentially or randomly. Implementation of this database is unspecified.
.TP 7
\fBNote:\fP
In many cases it is implemented by the Domain Name System, as documented in RFC\ 1034, RFC\ 1035, and RFC\ 1886.
.sp
.LP
Entries shall be returned in \fBhostent\fP structures.
.LP
The \fIgethostbyaddr\fP() function shall return an entry containing addresses of address family \fItype\fP for the host with address \fIaddr\fP. The \fIlen\fP argument contains the length of the address pointed to by \fIaddr\fP. The \fIgethostbyaddr\fP() function need not be reentrant. A function that is not required to be reentrant is not required to be thread-safe.
.LP
The \fIgethostbyname\fP() function shall return an entry containing addresses of address family AF_INET for the host with name \fIname\fP. The \fIgethostbyname\fP() function need not be reentrant. A function that is not required to be reentrant is not required to be thread-safe.
.LP
The \fIaddr\fP argument of \fIgethostbyaddr\fP() shall be an \fBin_addr\fP structure when \fItype\fP is AF_INET. It contains a binary format (that is, not null-terminated) address in network byte order. The \fIgethostbyaddr\fP() function is not guaranteed to return addresses of address families other than AF_INET, even when such addresses exist in the database.
.LP
If \fIgethostbyaddr\fP() returns successfully, then the \fIh_addrtype\fP field in the result shall be the same as the \fItype\fP argument that was passed to the function, and the \fIh_addr_list\fP field shall list a single address that is a copy of the \fIaddr\fP argument that was passed to the function.
.LP
The \fIname\fP argument of \fIgethostbyname\fP() shall be a node name; the behavior of \fIgethostbyname\fP() when passed a numeric address string is unspecified. For IPv4, a numeric address string shall be in the dotted-decimal notation described in \fIinet_addr\fP()\&.
.LP
If \fIname\fP is not a numeric address string and is an alias for a valid host name, then \fIgethostbyname\fP() shall return information about the host name to which the alias refers, and \fIname\fP shall be included in the list of aliases returned.
.SH RETURN VALUE
.LP
Upon successful completion, these functions shall return a pointer to a \fBhostent\fP structure if the requested entry was found, and a null pointer if the end of the database was reached or the requested entry was not found.
.LP
Upon unsuccessful completion, \fIgethostbyaddr\fP() and \fIgethostbyname\fP() shall set \fIh_errno\fP to indicate the error.
.SH ERRORS
.LP
These functions shall fail in the following cases. The \fIgethostbyaddr\fP() and \fIgethostbyname\fP() functions shall set \fIh_errno\fP to the value shown in the list below. Any changes to \fIerrno\fP are unspecified.
.TP 7
.B HOST_NOT_FOUND
.sp
No such host is known.
.TP 7
.B NO_DATA
The server recognized the request and the name, but no address is available. Another type of request to the name server for the domain might return an answer.
.TP 7
.B NO_RECOVERY
.sp
An unexpected server failure occurred which cannot be recovered.
.TP 7
.B TRY_AGAIN
A temporary and possibly transient error occurred, such as a failure of a server to respond.
.sp
.LP
\fIThe following sections are informative.\fP
.SH EXAMPLES
.LP
None.
.SH APPLICATION USAGE
.LP
The \fIgethostbyaddr\fP() and \fIgethostbyname\fP() functions may return pointers to static data, which may be overwritten by subsequent calls to any of these functions.
.LP
The \fIgetaddrinfo\fP() and \fIgetnameinfo\fP() functions are preferred over the \fIgethostbyaddr\fP() and \fIgethostbyname\fP() functions.
.SH RATIONALE
.LP
None.
.SH FUTURE DIRECTIONS
.LP
The \fIgethostbyaddr\fP() and \fIgethostbyname\fP() functions may be withdrawn in a future version.
.SH SEE ALSO
.LP
\fIendhostent\fP(), \fIendservent\fP(), \fIgai_strerror\fP(), \fIgetaddrinfo\fP(), \fIh_errno\fP(), \fIinet_addr\fP(), the Base Definitions volume of IEEE\ Std\ 1003.1-2001, \fI<netdb.h>\fP
.SH COPYRIGHT
Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1, 2003 Edition, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 6, Copyright (C) 2001-2003 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html .
Skip to main content
Architecture
General overview
The protocol provides a disruptive access control mechanism. Whereas a central gateway usually safekeeps a specific set of resources, OKP4 protocol acts as a global source of trust for any operation:
OKP4 Basic Architecture Overview
In a user interface, providers indicate the rules for consuming their resources. The interface interacts with the blockchain, which stores the resource descriptions, revenue conditions, restrictions and execution instructions. A consumer asks the blockchain through an app to execute a request using multiple resources. If the rules set by the providers are met, the protocol emits an event to validate the request. An orchestration service, the guard on a resource that relies solely on blockchain validation events, processes the request from the consumer. This request creates new knowledge, as it leverages multiple resources from different providers with no required trust in a third party. The orchestration service reports the execution status to the blockchain to ensure reliable logging and payment.
Smart contracts carry all these interactions with the blockchain. Nodes validate smart contract execution and persistent storage through a consensus. A distributed database, the so-called blockchain, commits to every change. The OKP4 blockchain is built with custom modules enabling the protocol governance and validation rules capabilities, among other functionalities.
The diagram above provides a simplified explanation of the system's overall operation. More precisely, the entire solution architecture is structured as follows:
OKP4 Architecture Schema
Blockchain
Distributed network
Multiple nodes execute and validate all actions submitted to the blockchain thanks to the Tendermint consensus. Tendermint is a partially synchronous Byzantine fault-tolerant (BFT) consensus protocol. The protocol requires a fixed, known set of validators, where their public key identifies each validator. Validators attempt to reach a consensus on one block at a time, where a block is a list of transactions. Voting for consensus on a block proceeds in rounds. Each round has a round leader, or proposer, who proposes a block. The validators then vote, in stages, on whether to accept the proposed block or move on to the next round. The proposer for a round is chosen deterministically from the ordered list of validators in proportion to their voting power. Tendermint's security is derived from optimal Byzantine fault tolerance through super-majority voting and a locking mechanism.
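For intuition, the deterministic, stake-weighted rotation can be sketched in a few lines of JavaScript. This is an illustrative toy, not Tendermint's actual code: each validator accumulates priority in proportion to its voting power, the highest-priority validator proposes, and it is then sent to the back of the queue.

// Toy proposer rotation (illustrative only, not Tendermint source).
function nextProposer(validators) {
  let totalPower = 0;
  for (const v of validators) {
    v.priority += v.power; // accumulate priority in proportion to stake
    totalPower += v.power;
  }
  // The highest accumulated priority proposes this round...
  const proposer = validators.reduce((a, b) => (a.priority >= b.priority ? a : b));
  // ...and is penalised by the total power, pushing it back in line.
  proposer.priority -= totalPower;
  return proposer;
}

const validators = [
  { name: 'val-a', power: 10, priority: 0 },
  { name: 'val-b', power: 5, priority: 0 },
  { name: 'val-c', power: 1, priority: 0 },
];
// Over 16 rounds, val-a proposes roughly 10 times, val-b 5, val-c 1.
for (let round = 0; round < 16; round++) {
  console.log(round, nextProposer(validators).name);
}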
Shared state
When a user wants to update the blockchain state, they submit a transaction authenticated with cryptographic keys. Users interacting with the network pay fees to these validators for every transaction. These fees are paid with the OKP4 blockchain's native coin, $KNOW.
Validators guarantee the reliability of the blockchain. As they all share the same on-chain data and process the same transactions, if a node acts maliciously (e.g., trying to update a state for its own benefit without the required permissions), the rest of the network will reject it. The economic incentive with $KNOW fees makes virtuous behavior more profitable than dishonest conduct.
Built on Cosmos
The OKP4 blockchain is built with the Cosmos SDK to benefit from advanced implementation options without reinventing the wheel. Cosmos's modular architecture enables customized application-specific chains with ease. This unique flexibility fits the project's requirements, ensuring optimal performance and resource allocation.
Moreover, with Cosmos's Inter-Blockchain Communication (IBC) protocol, the OKP4 blockchain can communicate and transact seamlessly with other chains within the Cosmos ecosystem and external networks. This interchain compatibility expands the reach and potential user base and opens avenues for collaboration and synergy with other innovative projects in the Cosmos ecosystem and beyond.
Modules
The modules implement core functions particular to the OKP4 blockchain. They provide features for protocol governance, rules interpretation regarding resources, and more.
Logic
The logic module is designed to primarily address logical queries based on facts sourced from the ontology or the state of the chain, along with inference rules. Its main use in the protocol is the management of governance rules, written in Prolog. Thus, any smart contract deployed on the OKP4 blockchain can use the logic module to evaluate queries written in Prolog.
Prolog is a powerful declarative programming language to manage from simple restrictions to complex rules. This language is not natively operable on-chain; the logic module adds Prolog interpretation capability. The logic module is not strongly coupled to the OKP4 blockchain. It's designed to be reusable and can be integrated with any Cosmos appchain. Because it's open and interoperable, the support of Prolog programs is a significant gain for the Cosmos ecosystem. Using this open-source logic module implementation, any appchain can quickly adopt complex Prolog governance rules.
Mint, Vesting and other Cosmos SDK modules
The OKP4 blockchain uses production-grade modules from the Cosmos SDK. These generic modules facilitate the implementation of protocol governance, token operation management, validator punishment mechanisms, and authentication of accounts and transactions.
Two of them have a custom implementation: the mint and the vesting modules. “Mint”, used for creating new $KNOW units, applies specific tokenomics parameters. It determines the inflation rate for validators' and stakers' earnings. “Vesting” refers to distributing tokens over a specified period rather than all at once. This approach allows for controlled token distribution over time, promoting responsible token management and preventing immediate sell-offs that could negatively impact the OKP4 ecosystem.
WASM
OKP4 incorporates the CosmWasm smart contracting platform built for the Cosmos ecosystem. The primary programming language used in this module is Rust, for building secure and multi-chain smart contracts. Yet any language that compiles to WASM can be supported as such toolchains become available.
Smart Contracts
Smart contract overview
Smart contracts handle all interactions with external parties, whether to provide a digital resource with condition rules to use it, to consume this resource, or to send payments. These decentralized programs are centered around three fundamental pillars:
• Ontology: models all the different resources (e.g. datasets, computation services, infrastructure, orchestration engines, …), their descriptive properties and relations with each other.
It also includes access rules and execution instructions linked to resources. The ontology is a whole of objects, a way to store a semantic representation of the Dataverse.
• Governance: statutes the definition and the validation of resource permissions.
Rules (i.e. the conditions to give access to resources) can be applied at one among multiple levels: for everyone, for a set of resources, for a specific resource or between selected parties only.
• Orchestration: ensures the smooth coordination and execution of services.
It enforces the rules and policies that govern resource (datasets and services) authorizations.
Objectarium: unstructured data storage
The main goal of objectarium is to provide the protocol with a versatile mechanism for storing unstructured data, ensuring that the stored data remains immutable.
Any user or smart contract can "pin" a stored object. An object cannot be deleted as long as it carries at least one pin. To remove a pin, the user (or smart contract) uses the "unpin" function from objectarium.
A specific use case is the storing governance rules as a Prolog program. Prolog code defines governance rules, the “law”. These pieces of logic programming (usually from .pl files) are stored in raw with an objectarium smart contract.
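As a client-side illustration, storing and pinning an object could look like the following CosmJS sketch. The message shapes (store_object, unpin) are assumptions for illustration only; the actual objectarium schema should be checked before relying on them.

import { SigningCosmWasmClient } from '@cosmjs/cosmwasm-stargate';

// Hypothetical message shapes; the real objectarium schema may differ.
async function storeAndPin(client, sender, contractAddr, base64Data) {
  // Store the raw object (e.g. a Prolog program) and pin it immediately.
  return client.execute(sender, contractAddr,
    { store_object: { data: base64Data, pin: true } }, 'auto');
}

async function unpin(client, sender, contractAddr, objectId) {
  // Release our pin; the object is deleted only once its last pin is gone.
  return client.execute(sender, contractAddr,
    { unpin: { id: objectId } }, 'auto');
}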
Cognitarium: semantic data storage
The external resources description, usage rules and execution instructions are defined with an ontology, specifying metadata and relationships.
There is no restriction on the structure to allow ample expressiveness. But the core serves as a fundamental and essential structure that defines concepts and their relationships, forming the basis upon which additional extensions can be integrated.
Once inserted via the cognitarium smart contract, one can fetch the data from a “select” or a “describe” query. In the first case, it returns resources matching the criteria defined by the provided query. The second case returns the raw part of the ontology, the resource description identified by an Internationalized Resource Identifier (IRI).
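For illustration, a read-only "select" query from a client might be shaped roughly like this. This is a sketch: the field names are assumptions, not the definitive cognitarium schema.

// Hypothetical "select" query: find resources in a given category.
const selectQuery = {
  select: {
    query: {
      prefixes: [],
      select: [{ variable: 'resource' }],
      where: [{
        simple: {
          triple_pattern: {
            subject: { variable: 'resource' },
            predicate: { node: { named_node: 'hasCategory' } },
            object: { literal: { simple: 'dataset' } },
          },
        },
      }],
      limit: 10,
    },
  },
};
const results = await client.queryContractSmart(cognitariumAddr, selectQuery);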
Law-stone: source of rules
To create a governance rule, one should use the law-stone smart contract and instantiate it with a Prolog code.
A law-stone instance stores the Prolog code using objectarium smart-contract, pins it, and checks the dependencies to pin. Indeed, if the provided Prolog code depends on another Prolog program (already stored with objectarium), it's necessary to ensure its availability.
The rule cannot be changed; what is stored on-chain is immutable. The one who instantiates the governance rule is the administrator, the only one able to destroy it. When a law is deleted, the main Prolog code stored in objectarium is unpinned; any object left with no other pin is then deleted.
The law-stone has an "ask" method to execute Prolog queries against the stored rules. It loads them from objectarium and uses the logic module to get the answer, with possible variable substitutions in the results.
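A minimal sketch of such an "ask" call from a client, assuming a CosmJS connection. The query shape is illustrative and the Prolog predicate is made up; consult the actual law-stone schema.

// Ask the stored governance rule whether an action is permitted.
const response = await client.queryContractSmart(lawStoneAddr, {
  ask: { query: "can('read', 'resource-42')." },
});
// The answer carries success or failure plus any variable substitutions.
console.log(response.answer);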
Pactum: managing agreements
This smart contract guarantees the orchestration, the respect of the law and the value allocations. It can also handle more aspects regarding agreements, such as escrow mechanisms.
A pactum instantiation defines an agreement between at least two parties and details the prerequisites for its enforcement and the actions to proceed accordingly. These conditions for executing contractual obligations are materialized as a set of terms previously stored with the law-stone smart contract.
Service execution states are recorded in the ontology, forming the knowledge graph in cognitarium. The evaluation of conditions that must be met within the agreement changes over time based on the "facts" that are also recorded in the ontology. Thus, a pactum instance applies token revenue-sharing distribution if conditions are met. It can also flag invalid service execution and block escrow in consequence.
Zone hub: entry point
zone-hub is the single entry point to any mutation in the protocol. This smart contract operates external requests and calls the other smart contracts methods. It checks errors and ensures authorization for protected operations.
When a service provider submits an entry to add new resources policies, zone-hub stores the Prolog code with an instantiation of a law-stone smart contract.
zone-hub populates the ontology with law-stone addresses as rules references, service execution instructions and all relevant resource metadata via an InsertData message of a cognitarium contract. This process applies to an agreement between parties, a resource consent or a zone (resources categorization) definition.
When a consumer submits an execution request, the zone-hub instance evaluates the required resources with a query to cognitarium. It extracts the governance rules and, to evaluate resource access, asks the related law-stone instance for the Prolog interpretation. If the response is positive, zone-hub validates the user transaction and emits an event. This event triggers a service execution on (off-chain) orchestration services.
zone-hub also handles state reports from orchestration executions. It stores the messages in a cognitarium instance before they are evaluated by a pactum instance.
Orchestration
Orchestration services have access to off-chain resources. They only trust the OKP4 blockchain to execute jobs.
Identification and Authentication
An orchestration service can be authenticated using the concept of self-sovereign identity (SSI) and a decentralized identity management system. Self-sovereign identity is an approach that empowers individuals with control over their own digital identities. It allows individuals to own and manage their identity information, granting them the ability to selectively disclose it to various parties as needed. SSI allows individuals or entities to control their own digital identities, while decentralized identifiers (DIDs) provide a unique and persistent identifier for these entities.
Access authorization with Secret Management Services
Service providers should submit API keys (or private access instructions) to a secure storage solution and grant a read operation for an orchestration service. By utilizing SSI and DIDs, a Secret Management Service relies on the cryptographic integrity of the DIDs and verifiable credentials.
Service providers must add service execution instructions regarding secret keys alongside the governance rules when referencing a resource to the on-chain ontology.
Jobs executions
An orchestration service starts a new job when the OKP4 blockchain validates a user execution request. It fetches data, requests services, and performs various computes from one or several off-chain resources according to the necessary tasks to satisfy the user execution request.
An orchestration service has to report to the protocol the processes status to ensure service agreement enforcements.
The execution result is defined in the workflow; it may be integrated as a new dataset in the protocol or sent elsewhere outside the control of the protocol if the rules allow it. End users can visualize the final result of the execution through BI components, for example.
Trusted parties considerations
This part of the architecture involves trust in a party. However, a service provider can also deploy its own orchestration and secret services to remove this problematic point. The initial central workflow engine is a way to connect to any external resource without specific service deployment to wrap a protected entity.
To ensure decentralization, several external parties should propose their own orchestration implementations.
Dataverse
The “dataverse” represents all resources referenced via the OKP4 protocol.
Ontology: Framework and Interoperability
The ontology is the way to formalize it on-chain, as a graph model. Using standards like RDF Schema and the Web Ontology Language (OWL) enables interoperability and avoids any expression restrictions.
Metadata
Apart from a minimal schema required when providing a resource (to indicate service execution instructions and governance rules), users are free to specify the metadata they want. The more descriptive fields, the better.
Metadata profiles enable composability. While they can be original and specific, they should respect existing industry standards. For example, AgMES (Agricultural Metadata Element Set) is an initiative to develop a standardized set of metadata elements specifically designed for agricultural information and data management systems. Standards are not limited to those of an industry: there are also specifications like the E-Government Metadata Standard (e-GMS), which gives the UK public sector a metadata standard for making data handling consistent in order to promote the efficient use of web pages and documents.
Connectors
Although resources can be of any form, they are served via technological stacks that are more or less common and standardized. An OKP4 connector defines a specification for resource access and processing for a given solution (e.g. S3). It is then necessary to have an execution service that the orchestration service can use (e.g. a Docker container image for an Argo Workflows engine), which understands how to interpret the instructions and returns the expected processing result.
Beyond the protocol: user interfaces, applications
OKP4 team builds user interfaces on the OKP4 protocol, but anyone can publish an alternative of the following applications.
Blocks indexers, transactions explorer
A blockchain explorer is designed to present blockchain data in a user-friendly and intuitive manner, making it easier for technical and non-technical users to navigate and understand the information stored on the blockchain. The UI provides transparency and visibility into the OKP4 blockchain's activity. It allows users to search and retrieve specific transaction information, such as sender and recipient addresses, transaction amounts, timestamps, and transaction statuses. Users can also view details about each block in the blockchain and the transactions it contains.
For greater efficiency, dapps may use indexers, databases replicating the blockchain state, especially for a block explorer interface.
Administration clients
Portal is an example of an administration UI, the main OKP4 protocol web interface. It's the user gateway for configuring the addition of datasets and services, building workflows and creating new knowledge from the shared resources. Portal offers an exploration view with advanced filters to get information like governance rules.
End-user interfaces
Newly generated knowledge could be presented with customized dashboards from any business intelligence component. White-label data platforms graph the results obtained by running workflows with the OKP4 protocol.
Since some companies cannot have a wallet to use the protocol, front-end interfaces might also provide a layer of abstraction to bring data on request without any blockchain interaction.
Connect'em all
Let's recap with a concrete example applied to the AI industry.
Two companies (Corp A and Corp B) grant access to their databases, while a third company (Corp C) provides a machine-learning training workflow using the data. A data scientist (Individual D) wants to get new knowledge and invokes an orchestration service that uses resources from Corp A, Corp B and Corp C.
note
The OKP4 solution orchestrates the training process without exposing raw data, ensuring privacy, sovereignty and security. Moreover, the protocol provides revenue-sharing conditions and immutable records of the ML model's training sources.
1. Each on their own, Corp A and Corp B indicate through an administration portal the underlying technology of their respective database. They store access tokens with a Secret Management Service, with authorization for orchestration service.
2. Corp A and Corp B continue on the administration portal and define the rules, with access restrictions and payment conditions (a fixed $KNOW fee per request, for example). A raw Prolog file is accepted, but the interface provides an ergonomic form for a better UX. Corp A and Corp B provide as much metadata as possible, especially to describe the different available datasets, their structures and the nature of their contents. The portal combines the service execution instructions, the governance rules and the descriptive pieces of information in a Turtle file. Then a transaction adds it to the on-chain ontology.
3. Metadata indicate that resources shared by Corp A and Corp B are compatible with a ML job from Corp C. Corp C also uses the administration portal to submit its training model algorithm, governance rules, and service execution instructions.
4. Individual D configures their cloud environment with the portal, setting up how an orchestration service should store the execution request results.
5. Individual D uses the portal to request service execution with the ML Workflow from Corp C, using data from Corp A and Corp B. He submits a transaction with a Keplr wallet and pays with $KNOW tokens. The blockchain validates the execution request (access and execution authorizations).
6. The orchestration service listens to the event from the blockchain. It recovers access keys, and then executes the workflow training algorithm from Corp C, using Corp A & Corp B data it fetches.
7. The orchestration service tracks the progress and state changes of all jobs within the workflow and reports this information to the blockchain. If all goes well, service agreement rules are applied, and $KNOW tokens from Individual D are unlocked to Corp A, Corp B and Corp C.
8. The orchestration service stores the result in the provided Individual D's storage solution. Optionally, this new knowledge can also be referenced as a data source for other workflows.
9. Individual D can have access to the newly generated knowledge via a BI interface.
sequenceDiagram
    actor DataProv as A/B
    actor ServProv as C
    actor IndD as D
    participant OKP4
    participant Secret
    participant Db A/B
    participant ML Workflow
    participant Orchestrator
    participant Storage
    Note right of DataProv: 1.
    DataProv->>Secret: Database keys store
    Note right of DataProv: 2.
    DataProv->>OKP4: Database reference
    Note right of ServProv: 3.
    ServProv->>OKP4: Check workflow compatibilities
    OKP4->>ServProv: Possible ML workflow (Db A + Db B)
    ServProv->>OKP4: ML Workflow reference
    Note right of IndD: 4.
    IndD->>Secret: Cloud storage keys for results
    Note right of IndD: 5.
    IndD->>OKP4: Knowledge request(ML Workflow(DbA + Db B)) - $KNOW payment
    OKP4->>OKP4: Valid knowledge request
    Note right of OKP4: 6.
    OKP4-->>Orchestrator: Chain event triggering
    Orchestrator->>Secret: Get resources access (Db A, Db B, algo C, storage D)
    Secret->>Orchestrator: Resources secret keys
    loop Db A, Db B
        Orchestrator->>Db A/B: Resource request
        Db A/B->>Orchestrator: Resource result
    end
    Orchestrator->>ML Workflow: Compute(DataA, DataB)
    ML Workflow->>Orchestrator: Knowledge result
    Note right of Orchestrator: 7.
    Orchestrator->>OKP4: Workflow state report
    OKP4->>OKP4: Service agreement exec ($KNOW rev share, ...)
    Note right of Orchestrator: 8.
    Orchestrator->>Storage: Store result
    Orchestrator->>OKP4: Result reference
    Note right of IndD: 9.
    IndD->>Storage: Get result
    Storage->>IndD: Newly generated knowledge
How to build a calendar to borrow and lend objects
I want to build a Smart Contract that can let a user borrow items lent by another user. I've got some problems with the calendar design though. How can I create/design a calendar that checks whether the item a user is trying to borrow is unavailable on specific dates?
Example: I want to borrow a bike from the 6th of this month for 30 days. The smart contract sees that the bike is free from the 6th for 50 days so I can borrow it.
How I thought to handle this: convert the time from seconds elapsed since 1970 to DD/MM/YYYY and then store the information in a mapping(uint16 => mapping(uint8 => uint8)).
So for example: I want to borrow an item from the 6th of november of 2023 for 4 days. The smart contract saves this date like so:
dates[2023][11][6]
dates[2023][11][7]
dates[2023][11][8]
dates[2023][11][9]
Let's say that somebody else wants the bike for two days starting from the 7th of November of 2023. The Smart Contract checks if the item is available but finds that the mapping is already populated in those dates so it won't let my request through.
What do you think about my way to handle this? Do you have a better approach?
Of course in this case the Smart Contract must iterate from the starting date (so in this case the 7th of November), so the check costs one iteration per requested day; here the number of days to iterate is two.
Hey @Allennick
my first advice is to use uint256 instead of uint16 or uint8
To do this, you can save the timestamp of the borrow and add to it the duration of the borrow.
uint256 borrowEnd = block.timestamp + 30 days
In this way you only have to check the borrowEnd value when someone else tries to borrow a bike during that period
Hi thanks for your answer.
What happens if I borrow the item from the 7th of November 2023 for 4 days and someone asks to borrow the item from the 2nd of November up until the 5th of November?
I would answer that this request is valid because I check the value of your "borrowEnd" variable against my "requestedBorrowEnd" value (the 5th of November).
The "borrowEnd" value is saved in a mapping? How can you handle preordered periods for an item with just this variable?
Should it be saved in a mapping like so:
//the first uint256 is the "borrowStart" whilst the second one is the "borrowEnd" variable
mapping(uint256 => uint256) occupiedSlots
I don't know which is your aim/idea. You should provide more info about what you want to achieve, like how it should work, what people can do etc.
Also I don't get this
mapping(uint256 => uint256) occupiedSlots
My idea is to let people borrow items from a lender.
People can preorder items in the future.
So a person X can preorder an item for Y days starting from XX/YY/ZZZZ up until XX'/YY'/ZZZZ'.
My smart contract must check if the item is available in those days.
If you need more information I can provide it to you.
occupiedSlots is a mapping that contains a startingDate and an endingDate. With just a variable as you suggested I can't understand how to check if the item will be available in the days I selected.
It is literally a calendar.
This calendar handles the availability of many items.
Ok so, imagine you have an id for each product.
I would do it in this way:
mapping(uint256 => BorrowInfo) borrowInfo //uint256 is the itemId
struct BorrowInfo {
bool available;
uint256 startDate;
uint256 endDate;
}
And when you need to check if that item can be borrowed:
function borrowItem(uint256 itemId, uint256 start, uint256 end) external {
require(borrowInfo[itemId].available);
require(start <= end);
if (start < borrowInfo[itemId].endDate)
    require(end < borrowInfo[itemId].startDate);
...
}
Yes the idea is pretty much what you show to me in that code.
But the borrowed item must contain more information on its future availability.
In the case you just described, let's say there is the item with ID = X and I borrow it from today for 4 days. So the struct would be {available = false, startDate = today (in seconds elapsed since 1970, of course), endDate = today + 4 days}, and that's great.
Now let's say that you, another user of my Dapp, log in on the same day as me and want the same item that I've just borrowed. You see that it will be available four days from now, and you decide to preorder it from borrowInfo[ID of the item][endDate + 1 day] for 1 day.
So to be more precise let's think about this with a table.
In this table you can see a row with the item availability and some columns that range from today's date up until 14/09/2022.
We have both visited my dApp today (09/09/2022) and we want to book the same item, you try to borrow the item today but the Smart Contract says (nope, you can't), so you see that the item is available at the 13th but you choose to preorder it from 15/09/2022 for 1 day.
So to do this I need a more complex struct than the one you proposed (that was great, I really thank you by the way).
I would need something like:
//The first uint256 is the itemId, the second one is the startDate
mapping(uint256 => mapping(uint256 => BorrowInfo))
struct BorrowInfo {
string status; // can be PREORDERED, AVAILABLE, NOT_AVAILABLE
uint256 endDate;
}
Yes exactly, as I said only you know what you need. I hope the code example will be useful :stuck_out_tongue_winking_eye:
I think I was descriptive enough about my problem, to be honest. That code example is good but it doesn't fully fit the example I explained.
Thanks anyway.
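Whatever the storage layout, the core check both designs converge on is an interval-overlap test. A minimal sketch in JavaScript, with illustrative names and plain day numbers standing in for timestamps:

// Two inclusive intervals overlap exactly when each one starts
// before the other one ends.
function overlaps(aStart, aEnd, bStart, bEnd) {
  return aStart <= bEnd && bStart <= aEnd;
}

// A new booking is allowed only if it overlaps no existing booking.
function canBorrow(bookings, start, end) {
  return bookings.every(b => !overlaps(start, end, b.start, b.end));
}

// Item already booked from day 6 to day 9:
const bookings = [{ start: 6, end: 9 }];
console.log(canBorrow(bookings, 7, 8)); // false: days 7-8 collide
console.log(canBorrow(bookings, 2, 5)); // true: days 2-5 are free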
C++ Program to Multiply two Matrices by Passing Matrix to Function
#include <iostream>
using namespace std;
void enterData(int firstMatrix[][10], int secondMatrix[][10], int rowFirst, int columnFirst, int rowSecond, int columnSecond);
void multiplyMatrices(int firstMatrix[][10], int secondMatrix[][10], int multResult[][10], int rowFirst, int columnFirst, int rowSecond, int columnSecond);
void display(int mult[][10], int rowFirst, int columnSecond);
int main()
{
int firstMatrix[10][10], secondMatrix[10][10], mult[10][10], rowFirst, columnFirst, rowSecond, columnSecond;
cout << "Enter rows and column for first matrix: ";
cin >> rowFirst >> columnFirst;
cout << "Enter rows and column for second matrix: ";
cin >> rowSecond >> columnSecond;
// If the column count of the first matrix is not equal to the row count of the second, ask the user to enter the sizes again.
while (columnFirst != rowSecond)
{
cout << "Error! column of first matrix not equal to row of second." << endl;
cout << "Enter rows and column for first matrix: ";
cin >> rowFirst >> columnFirst;
cout << "Enter rows and column for second matrix: ";
cin >> rowSecond >> columnSecond;
}
// Function to take matrices data
enterData(firstMatrix, secondMatrix, rowFirst, columnFirst, rowSecond, columnSecond);
// Function to multiply two matrices.
multiplyMatrices(firstMatrix, secondMatrix, mult, rowFirst, columnFirst, rowSecond, columnSecond);
// Function to display resultant matrix after multiplication.
display(mult, rowFirst, columnSecond);
return 0;
}
void enterData(int firstMatrix[][10], int secondMatrix[][10], int rowFirst, int columnFirst, int rowSecond, int columnSecond)
{
int i, j;
cout << endl << "Enter elements of matrix 1:" << endl;
for(i = 0; i < rowFirst; ++i)
{
for(j = 0; j < columnFirst; ++j)
{
cout << "Enter elements a"<< i + 1 << j + 1 << ": ";
cin >> firstMatrix[i][j];
}
}
cout << endl << "Enter elements of matrix 2:" << endl;
for(i = 0; i < rowSecond; ++i)
{
for(j = 0; j < columnSecond; ++j)
{
cout << "Enter elements b" << i + 1 << j + 1 << ": ";
cin >> secondMatrix[i][j];
}
}
}
void multiplyMatrices(int firstMatrix[][10], int secondMatrix[][10], int mult[][10], int rowFirst, int columnFirst, int rowSecond, int columnSecond)
{
int i, j, k;
// Initializing elements of matrix mult to 0.
for(i = 0; i < rowFirst; ++i)
{
for(j = 0; j < columnSecond; ++j)
{
mult[i][j] = 0;
}
}
// Multiplying matrix firstMatrix and secondMatrix and storing in array mult.
for(i = 0; i < rowFirst; ++i)
{
for(j = 0; j < columnSecond; ++j)
{
for(k=0; k<columnFirst; ++k)
{
mult[i][j] += firstMatrix[i][k] * secondMatrix[k][j];
}
}
}
}
void display(int mult[][10], int rowFirst, int columnSecond)
{
int i, j;
cout << "Output Matrix:" << endl;
for(i = 0; i < rowFirst; ++i)
{
for(j = 0; j < columnSecond; ++j)
{
cout << mult[i][j] << " ";
if(j == columnSecond - 1)
cout << endl << endl;
}
}
}
OUTPUT:
Enter rows and column for first matrix: 3
2
Enter rows and column for second matrix: 3
2
Error! column of first matrix not equal to row of second.
Enter rows and column for first matrix: 2
3
Enter rows and column for second matrix: 3
2
Enter elements of matrix 1:
Enter elements a11: 3
Enter elements a12: -2
Enter elements a13: 5
Enter elements a21: 3
Enter elements a22: 0
Enter elements a23: 4
Enter elements of matrix 2:
Enter elements b11: 2
Enter elements b12: 3
Enter elements b21: -9
Enter elements b22: 0
Enter elements b31: 0
Enter elements b32: 4
Output Matrix:
24 29
6 25
JSX
Front End Developer at Zup - Uberlândia - MG
When it emerged as an alternative for writing our templates, JSX was quickly frowned upon, and it really can look strange at first. It was initially considered a step backwards for mixing the DOM structure with application logic, but it later proved its power.
As with anything new, there is some initial discomfort, especially when a technology shakes things up and pulls us out of our comfort zone. But after maturing within the community, JSX has been gaining a lot of traction and has already proven its value. Loved and hated, it is the most popular templating approach for building the structure of an interface.
Consider the code below. Believe it or not, this is not HTML! In this article, we will take a closer look at it and at this fantastic alternative for templates.
const nav = (
<nav className="menu">
<ul>
<li><a href="#">Home</a></li>
<li><a href="#">About</a></li>
</ul>
</nav>
)
What is JSX?
In short, JSX is an XML-like syntax that lets you write and better understand how your component will be assembled in the UI.
JSX is not an ECMAScript proposal; it is just a syntax!
Of course, JSX is not interpreted by the browser. That is why we must use a transpiler to perform the magic. Today there are several transpilers that handle JSX; among them, the best known is Babel.
Basically, with JSX you can write concise HTML-like structures in the same file where you write your JavaScript code, and Babel will then transform it into JavaScript code. Unlike in the past, instead of putting JavaScript into HTML, JSX lets us put HTML into JavaScript.
JSX provides a familiar syntax for defining the structural tree. It requires no new knowledge, nor do we need to abandon JavaScript.
So the code above will produce the following result after being transpiled:
var nav = React.createElement(
"nav",
{ className: "menu" },
React.createElement(
"ul",
null,
React.createElement(
"li",
null,
React.createElement(
"a",
{ href: "#" },
"Home"
)
),
React.createElement(
"li",
null,
React.createElement(
"a",
{ href: "#" },
"About"
)
)
)
);
See? I already mentioned that this is not HTML.
Element
In the example above, the transpiler wrapped the nodes with the React.createElement function. This happens because, in this case, we used the babel-preset-react preset, which contains the transform-react-jsx plugin; by default, that plugin uses the React.createElement pragma.
This topic was already mentioned in my article about Preact, where you can see that, in the Preact example, the pragma needs to be changed. In Preact, the function that handles the node is the h function.
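For instance, swapping the pragma is a small Babel configuration change. A sketch, written here as a JS config module for readability (with Babel 6 the same options would typically live in a .babelrc JSON file):

// Tell transform-react-jsx to emit h(...) calls instead of
// React.createElement(...), as Preact expects.
module.exports = {
  plugins: [
    ['transform-react-jsx', { pragma: 'h' }],
  ],
};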
Let's understand this structure better. To do so, we will analyze a small case: just an h1 element.
<h1 className="title">Hello World</h1>
This will be transpiled into this:
React.createElement(
"h1",
{ className: "title" },
"Hello World"
);
Parameters of the createElement function:
• Type
• Props
• Children
The goal is to make the result understandable so it can be rendered. The function will return an object with the structure below.
{
type: 'h1',
props: { className: 'title' },
children: ['Hello World'],
}
Merits of JSX
• Because it looks like HTML, less technical people can still understand and modify the necessary parts.
• You can leverage the full power of JavaScript in your HTML and avoid learning or using a templating language.
Who uses it?
There are other libraries and frameworks besides React that use JSX.
• Preact
• Inferno
• React-Native
• Vue.js (optional)
wuauclt.exe
Process name: Microsoft Windows Update
Application using this process: Microsoft Windows Operating System
What is wuauclt.exe doing on my computer?
Wuauclt.exe is the AutoUpdate Client of Windows Update and is used to check for available updates (for the various versions of the MS Windows platform) from Microsoft Update. The wuauclt.exe file is included in the Task Manager’s list of active processes when it is waiting for a response or an action to be performed by the user.
Aside from updating a user’s MS Windows-based system, the wuauclt.exe file also looks for MS Windows-based software and hardware updates. These updates can help a user avoid stability issues and even security risks or system vulnerabilities.
The Automatic Updates feature embedded in MS Windows can be manually turned on or off and can also be scheduled by the user. The user can proceed to Control Panel and open Automatic Updates in order to access the set of update options provided by this MS Windows utility.
wuauclt.exe is a system process that is needed for your PC to work properly. It should not be removed.
Is wuauclt.exe harmful?
This process is considered safe. It is unlikely to pose any harm to your system.
wuauclt.exe is a safe process
Can I stop or remove wuauclt.exe?
Since wuauclt.exe is a system process, it should not be stopped. The process is required for your PC to work properly.
Is wuauclt.exe CPU intensive?
This process is considered to be CPU intensive. Without proper management, CPU-intensive processes can consume system resources and cause slowdowns.
Why is wuauclt.exe giving me errors?
System process issues are mainly a result of conflicting applications running on your PC. Consider uninstalling any applications you are not using.
So the assignment is to ask for a number in a certain range, then print the individual digits of the number with three spaces in between the digits. For example, 1234 should print
0 1 2 3 4
1 2 3 4
2 3 4
3 4
4
I'm pretty sure I have most of the assignment done, I am just having trouble changing the number variable in my loop. The number sorts itself into the right if statement, but then when it loops, instead of the number going down a digit (i.e. 2343 to 343), all it does is print the same number 5 times. I have researched in my book and looked online but I'm not seeing it. It's probably something simple, just not sure what. Here's the code:
#include <stdio.h>
#include <stdlib.h>
void loopingDigitprinter(int digit);
int division(int* digit);
int main()
{
int digitPrint;
printf("Please enter a number between 0 and 32,767: ");
scanf("%d", &digitPrint);
loopingDigitprinter (digitPrint);
return 0;
}
void loopingDigitprinter(int digit)
{
int loopLine= 0;
int thousand;
int hundred;
int original;
original = digit;
while(loopLine < 4)
{
if (digit > 10000 && digit <= 32767)
{
thousand = digit/ 1000;
hundred = digit % 1000;
printf("%02d%03d\n",thousand, hundred);
digit %= 10000;
}
else if (digit < 10000 && digit > 1000)
{
if (original > 10000)
{ thousand = digit/ 1000;
hundred = digit % 1000;
digit %= 1000;
printf("%01d%03d\n",thousand, hundred);
}
else
{
thousand = digit/ 1000;
hundred = digit % 1000;
printf("%02d%03d\n",thousand, hundred);
digit %= 1000;
}
}
else if (digit < 1000 && digit > 100)
{
if (original > 10000)
{
hundred = digit % 1000;
printf("%d\n", hundred);
digit %= 100;
}
else if (original < 10000 && original > 1000)
{
thousand = original / 1000;
hundred = digit % 1000;
printf("%d%d\n",thousand,digit);
printf("%d\n", digit);
digit %= 100;
digit %= 100;
}
else
{
thousand = digit/ 1000;
hundred = digit % 1000;
printf("%02d%03d\n",thousand, hundred);
digit %= 1000;
thousand = original / 1000;
hundred = digit % 1000;
printf("%d%d\n",thousand,digit);
printf("%d\n", digit);
digit %= 100;
printf("%d\n", digit);
digit %= 10;
printf("%d\n", digit);
}
}
else if (digit < 100 && digit > 10)
{
if (original > 10000)
{ hundred = digit % 1000;
printf("%d\n", hundred);
digit %= 10;
printf("%d\n", digit);
}
else if (original < 10000 && original > 1000)
{
thousand = original / 1000;
hundred = digit % 1000;
printf("%d\n",hundred);
digit %= 10;
printf("%d\n", digit);
}
else if (original < 1000 && original > 10)
{
thousand = digit/ 1000;
hundred = digit % 1000;
printf("%02d%03d\n",thousand, hundred);
digit %= 1000;
thousand = original / 1000;
hundred = digit % 1000;
printf("%d%03d\n",thousand,digit);
printf("%03d\n", digit);
digit %= 100;
printf("%d\n", digit);
digit %= 10;
printf("%d\n", digit);
}
else
printf("1");
}
else if(original > 0 && original < 10)
{
printf("0000%d\n", original);
printf("000%d\n", original);
printf("00%d\n", original);
printf("0%d\n", original);
printf("%d\n", original);
break;
}
loopLine++;
}
return;
}
This is the updated code for anyone who runs into a similar problem. Works well just havent implemented the spaces yet. If there is a simpler solution I would like to know. Im trying to get a better grasp on implementing loops. – Stevenfowler16 Mar 5 '12 at 2:02
2 Answers
Accepted answer:
I don't know what you are doing there as I did not read the code.
But do refer to my sample code. It pretty much does the needful.
#include <stdio.h>
#include <string.h>
int main(int argc, char*argv[]){
int n,k,i=0;
printf("Enter a number please\n");
scanf("%d",&n);
while(i<=n){
for(k=i;k<=n;k++){
printf("%d ",k);
}
printf("\n");
i++;
}
return 0;
}
Output
------
0 1 2 3 4
1 2 3 4
2 3 4
3 4
4
The loopLine++ is called each loop so you eventually exit, but double check your last equality checks. Digit can't be greater than 10 AND less than 0.
You may need to call division(digit) in your last else if to make it change.
Thanks for the heads up on the equality check didnt even notice it. Calling division(digit) in the else if didn't change the variable so you whatever you type you get it printed 5 times – Stevenfowler16 Mar 4 '12 at 17:39
Figured it out. I wasn't passing the variable back up to the called function essentially rendering division null. – Stevenfowler16 Mar 4 '12 at 18:00
How to build a Progressive Web App
Mobile now accounts for over half the web's traffic, and web applications enable users to do things in the browser that rival native apps, but there's a problem: the quality of connections and devices varies massively all over the world.
Catering both to users on lightning-fast connections in Seoul, and users in rural India on an outdated phone, is the latest usability challenge, and Progressive Web Apps are the solution.
PWAs use progressive enhancement to load the most important content first, then add presentational and functional extras as required, meaning that all your users get the same core experience as quickly as possible. If you want to reach the widest possible audience, PWAs are the way to go.
Though Progressive Web Apps bring a lot of benefits and functionality to the web, they don't require rewriting your entire application. Any app can be converted to a PWA by adding a few extra layers to it.
For best results, you'll want to put a strong emphasis on performance from the beginning – but that's true of any web app. Here, we'll walk through the steps to make your app progressive.
01. Serve over HTTPS
Let's be honest: you should be doing this anyway. SSL adds an extra layer of security to the web, helping your users feel secure in using your site. With PWAs, HTTPS is essential for using service workers and allowing home screen installation. You can purchase an SSL certificate from your domain registrar at little expense and then configure it through your hosting service.
02. Create an application shell
Your app shell is the first thing that loads – the first thing the user sees. It should exist entirely in your index HTML document, with inline CSS, to ensure it appears as fast as possible and your user isn't staring at a white screen for longer than necessary. The application shell forms part of the pattern of progressive enhancement. Your app should give the user content as soon as possible, and then progressively enhance it as more data (likely JavaScript) loads.
The example below is taken from a React.js application. The user is presented with an outline of the app and a loading indicator in the index.html. Then, once the JavaScript loads and React boots up, the full application is rendered within the shell.
<!--index.html-->
<body>
<div id="root">
<div id="container">
<div class="inner-container">
<div id="header">
<img src="/assets/icon.png" alt="logo" />
<h1>Chat</h1>
</div>
<div id="loading-container">
<img src="/assets/icon.png" alt="logo" id="loader"/>
</div>
</div>
</div>
</div>
</body>
// index.js
ReactDOM.render(
<App />,
document.getElementById('root')
);
03. Register a service worker
To tap into the full spectrum of PWA goodies (push notifications, caching, install prompts) you will need a service worker.
Luckily, they're pretty easy to set up. Below, we first check if the user's browser supports service workers. Then, if so, we can move ahead with registering the service worker file, here called service‑worker.js. Note that you don't need anything special inside that file at this point – it can be blank.
In the example below, however, we show how to tap into the three key service worker lifecycle events. These are 'install', when the user first visits your page; 'activate', right before registration completes; and 'fetch', when the application makes a network request. The last one is relevant for caching and offline capability.
<script>
if ('serviceWorker' in navigator) {
window.addEventListener('load', function() {
navigator.serviceWorker.register('service-worker.js').then(function(registration) {
// Registration was successful
console.log('Registered!');
}, function(err) {
// registration failed :(
console.log('ServiceWorker registration failed: ', err);
}).catch(function(err) {
console.log(err);
});
});
} else {
console.log('service worker is not supported');
}
</script>
// service-worker.js
self.addEventListener('install', function() {
console.log('Install!');
});
self.addEventListener("activate", event => {
console.log('Activate!');
});
self.addEventListener('fetch', function(event) {
console.log('Fetch!', event.request);
});
04. Add push notifications
Service workers allow your users to receive push notifications via the web Push API. To access it, you can tap into self.registration.pushManager from within your service worker file. Since the sending of push notifications relies heavily on your backend setup, we won't dive into it here.
If you're starting an app from scratch, Google's Firebase service comes with Firebase Cloud Messaging for relatively painless push notifications (remember: make sure you keep your design files safe in cloud storage). The code below shows how to register for push notifications via the Push API.
navigator.serviceWorker.ready.then(function(registration) {
if (!registration.pushManager) {
alert('No push notifications support.');
return false;
}
//To subscribe `push notification` from push manager
registration.pushManager.subscribe({
userVisibleOnly: true //Always show notification when received
})
.then(function (subscription) {
console.log('Subscribed.');
})
.catch(function (error) {
console.log('Subscription error: ', error);
});
})
05. Add a web app manifest
In order to make your application installable, you need to include a manifest.json in the application's root directory. You can think of this as a description of your application, similar to what you might submit to the App Store. It includes icons, a splash screen, a name and a description.
There's also some configuration for how your application appears when it is launched from the user's home screen: Do you want to show the address bar in the browser or not? What colour do you want the status bar to be? And so on. Note that a proper manifest.json should include a full spectrum of icon sizes for various devices. The code below is a preview of some of the properties your manifest can include.
{
"short_name": "Chat",
"name": "Chat",
"icons": [
{
"src":"/assets/icon.png",
"sizes": "192x192",
"type": "image/png"
}
],
"start_url": "/?utm_source=homescreen",
"background_color": "#e05a47",
"theme_color": "#e05a47",
"display": "standalone"
}
06. Configure the install prompt
When a user visits a PWA with a service worker and manifest, Chrome will automatically prompt them to install it to their homescreen, given the following: the user must visit the site twice, with five minutes between visits.
The idea here is to wait until the user demonstrates interest in your application, and then ask them to make it a fixture of their device (this is in sharp contrast to the native app approach, which asks for that investment up-front).
But there may be cases where you want to show the install prompt in different situations, such as after the user takes a particular useful action. To do so, we intercept the beforeinstallprompt event and save it for later, then deploy the prompt when we see fit.
window.addEventListener('beforeinstallprompt', e => {
console.log('beforeinstallprompt Event fired');
e.preventDefault();
// Stash the event so it can be triggered later.
this.deferredPrompt = e;
return false;
});
// When you want to trigger prompt:
this.deferredPrompt.prompt();
this.deferredPrompt.userChoice.then(choice => {
console.log(choice);
});
this.deferredPrompt = null;
07. Analyse your app's performance
Performance is the heart and soul of PWAs. Your app should be fast for users on all network conditions. Caching and offline capability helps a lot, but at the end of the day, your application should be speedy even if the user does not have the browser to support service worker technology. This is the definition of progressive enhancement – provide a great experience for everyone, regardless of device modernity or network conditions.
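That caching typically lives in the service worker registered earlier. Here is a minimal cache-first sketch; the cache name and asset list are examples, not prescriptions:

// service-worker.js
var CACHE_NAME = 'app-shell-v1';

// Cache the application shell as soon as the service worker installs.
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function(cache) {
      return cache.addAll(['/', '/index.html', '/assets/icon.png']);
    })
  );
});

// Serve requests from the cache first, falling back to the network.
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(cached) {
      return cached || fetch(event.request);
    })
  );
});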
To measure this, a useful set of metrics is the RAIL system. RAIL is what Google calls a 'user-centric performance model' – a set of guidelines for measuring our app's performance.
The acronym stands for Response (how long it takes for your app to respond to user actions), Animation (keeping animation speed at 60fps), Idle (using time when your app isn't doing anything else to load and cache additional assets) and Load (loading your app in one second or less).
Here is a table of meaningful benchmarks for application loading, as supplied by Meggin Kearney, tech writer at Google Web Fundamentals.
08. Audit your app with Lighthouse
Google is the biggest champion pushing Progressive Web Apps as the future of the web. As such, it has supplied a useful tool for guiding your PWA development.
Formerly called Lighthouse and supplied as a Chrome Extension, as of Chrome 60 it's a part of the Chrome DevTools, under the 'Audits' tab. What Lighthouse does is run your application under different conditions and measure its response and success according to PWA guidelines. It then gives you a score out of 100. It can also score your app on web best practices at the same time.
The following is a list of the values Lighthouse measures; when run, it also shows a description for each.
• Registers a Service Worker
• Responds with a 200 when offline
• Contains some content when JavaScript is not available
• Uses HTTPS
• Redirects HTTP traffic to HTTPS
• Page load is fast enough on 3G
• User can be prompted to install the Web App
• Configured for a custom splash screen
• Address bar matches brand colours
• Has a <meta name="viewport"> tag with width or initial-scale
• Content is sized correctly for the viewport
This article originally appeared in Web Designer.
Scott is a full-stack developer at MuseFind Technologies, author of upcoming book Progressive Web Apps with React and curates the Progressive Web App Newsletter.
Hello, I want to put more than one value in a table's row.
I press the </> Code and >_ Inline Code buttons. Neither of them opens the dialog box.
Only </> Code should open a dialog. What browser are you using? Any interfering plugins?
Apart from that you can paste your stuff, then select all text and press tab to indent as code.
I am using Firefox. Neither works for me, neither Code nor Inline Code. Yes, only Code used to open a dialog box, and Inline Code used to wrap the code in brackets or something. That doesn't work for me. None of the text processing works for me: B, I, LINK and so on. What did I do wrong?
Alternative: paste your code into the editor, ensure a blank line between your last bit of text and the first line of the code block. Select the entire code block and press TAB.
So back to the question. Show your SQL, your table and structure. The code you posted is of little value as we don't even know if you're using the right enctype in your form tag. C'mon Simon - help us to help you.
the editor works on Chrome
i want it to store like this
INSERT INTO table (title,desc,url, img, uid_fk, created, to_uid_fk) VALUES ('something','something','something','something','12','something','100,102,204')
The PHP code
$update=mysqli_real_escape_string($_POST['update_press']);
$press_image=$_POST['photopress'];
$update_description=mysqli_real_escape_string($_POST['update_descr']);
$press_url=mysqli_real_escape_string($_POST['pressUrl']);
$values = array();
foreach ($_POST['selected_ids'] as $i => $value) {
$values[] = sprintf('(%d)', $value);
}
$values = implode(',', $values);
$data=$Wall->Insert_Press_Release($uid,$update,$update_description,$press_url,$press_image,$values);
The AJAX JS code
$(".press_update_button").click(function()
{
var updateval = $("#update_press").val();
var pressUrl=$("#pressUrl").val();
var update_descr=$("#update_descr").val();
var uploadvalues=$("#photopress").val();
var selected_ids=$("#selected_ids").val();
var dataString = 'update_press='+updateval+'&photopress='+uploadvalues+'&update_descr='+update_descr+'&pressUrl='+pressUrl+'&selected_ids='+selected_ids;
$("#flashPress").show();
$("#flashPress").fadeIn(400).html('Loading Press Release...');
$.ajax({
type: "POST",
url: $.base_url+"press_release_ajax.php",
data: dataString,
cache: false,
success: function(html)
{
$("#flashPress").fadeOut('slow');
$("#presscontent").prepend(html);
$("#update_press").val('').focus().css("height", "40px");
$('#update_descr').val('');
$('#selected_ids').val('');
$('#pressUrl').val('');
$('#photoimg').val('');
}
});
return false;
});
The function that inserts the data
public function Insert_Press_Release($uid,$update,$update_description,$press_url,$press_image,$values)
{
$update=mysqli_real_escape_string($this->db,$update);
$update_description=mysqli_real_escape_string($this->db,$update_description);
$press_url=mysqli_real_escape_string($this->db,$press_url);
$press_image=mysqli_real_escape_string($this->db,$press_image);
$time=time();
$query = mysqli_query($this->db,"INSERT INTO `table` (title,desc,url, img, uid_fk, created, to_uid_fk) VALUES ('$update','$update_description','$press_url','$press_image','$uid','$time',$val)") or die(mysql_error());
$result = mysqli_fetch_array($newquery,MYSQLI_ASSOC);
return $result;
}
public function Insert_Press_Release($uid,$update,$update_description,$press_url,$press_image,$values)
You are passing $values, but are using $val:
('$update','$update_description','$press_url','$press_image','$uid','$time',$val)
This really does look like a lot of work with all that sanitizing. Why not create a prepared statement?
Is there any reason why you're placing multiple values into a single record field: '100,102,204'
This violates 1NF (1st normal form) for relational models. This makes searching for foreign ids very labour intensive - you should use a link table, just:
idTable1 | idTable2 (where both are FKs and make the composite PK for the link table)
So for instance, you'd have:
3 | 100
3 | 102
3 | 204
The values thing was a mistake, but it's not that. The '100,102,204' are fictional ids; the point is that I want to store multiple values in one row. Here is the code that I don't know if it's right:
The PHP:
$val=array($s);
$val = implode(',', $s);
$data=$Wall->Insert_Press_Release($uid,$update,$update_description,$press_url,$press_image,$val);
or
$values = array();
foreach ($_POST['selected_ids'] as $i => $value) {
$values[] = sprintf('(%d)', $value);
}
$values = implode(',', $values);
And the JS:
var selected_ids=$("#selected_ids").val();
var dataString ='update_press='+updateval+'&photopress='+uploadvalues+'&update_descr='+update_descr+'&pressUrl='+pressUrl+'&selected_ids='+selected_ids;
$.ajax({
type: "POST",
url:$.base_url+"press_release_ajax.php",
data: dataString,
The HTML:
<input type="text" name="friends" id="inputbox">
<input type='hidden' value="selected_ids" name="selected_ids[]" multiple="yes" id="selected_ids" />
I know that the JS and HTML are fine. I am not sure about the PHP.
You mean I should do it by creating another row in my table? Can you give an example?
Posting the same code again? Not sure why.
The $values was a mistake, but that's not it. The '100,102,204' are fictional ids; the point is that I want to store multiple values in one row. Here is the code that I don't know if it's right.
The whole point about NOT storing multiple values in one row is that it's the WRONG way to do it. If you insist on this approach, I don't really know what to suggest, other than to wish you well with it.
I read both articles. I don't think this applies to my situation, because there is no violation of first normal form. I use child tables (second and third normal forms) in many cases in my db. Let me explain what I want to do. Let's say that you want to send the same message to more than one user. As far as I understand, I'd have to make a second, normalized table with each user id and either the message itself or just the message id, right? Why do that? The same message will go to several users. There will be no conflict in the db. The same message isn't going to be stored twice anyway, because messages are unique by their id.
The issue comes when the user searches for messages to him/her. At the moment you want to place
'102,786,234,...' into a field. So if I'm user #786, the DB needs to search every row for 786 within that field. You'd probably need to use FIND_IN_SET() in your WHERE clause. Urgh.
I'd use an index and a normalized table, as I noted previously. But that's just MY opinion. Anyhow, your original question is about the building of the string. Print the query to the screen (followed by exit) to see what it says.
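To illustrate the difference (a sketch only - the table and column names are made up):
-- CSV column: the DB has to scan every row, no index can help
SELECT * FROM press WHERE FIND_IN_SET('786', to_uid_fk);
-- Link table: a plain, indexable join
SELECT p.* FROM press p
JOIN press_release_user pu ON pu.press_id = p.id
WHERE pu.user_id = 786;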
OK. I am not advanced enough to fully understand what you are saying, but I got the picture. In this case there isn't going to be a search option for the user, but that's not the point; I get what you are saying. I did an echo (not print); the values are showing OK, but they're not stored in the database and I don't know why.
$query = mysqli_query($this->db,"INSERT INTO `table` (title,desc,url, img, uid_fk, created, to_uid_fk) VALUES ('$update','$update_description','$press_url','$press_image','$uid','$time',$val)") or die(mysql_error());
$result = mysqli_fetch_array($newquery,MYSQLI_ASSOC);
$query or $newquery?
Anyhow, my suggestion is this:
$sql = "INSERT INTO `table` (title,desc,url, img, uid_fk, created, to_uid_fk) VALUES ('$update','$update_description','$press_url','$press_image', '$uid', '$time', $val)"
echo $sql;
$newquery = mysqli_query($this->db, $sql) or die(mysql_error());
See what the SQL looks like.
It returns this (2 and 17 are the ids that I want to store in the row):
2,17INSERT INTO advertisments (title,desc,url, img, uid_fk, created, to_uid_fk) VALUES ('','','','', '16', '1441550633', '2,17')