content (stringlengths 228 to 999k) | pred_label (stringclasses 1) | pred_score (float64 0.5 to 1)
---|---|---|
Implement callbacks
About this task
In the C Library integration, you will implement most callbacks based on a defined signature.
This includes:
• initialization_CB
• library_shutdown_CB
• register_pos_CB
• register_ped_CB
• create_new_tender_CB
• library_exception_CB
• cancel_or_refund_CB
• show_screen_CB
• tx_store_CB
We have defined a signature for each callback. The signature we provide looks like:
void initialization_CB(void * result, void * echo_pointer);
You will need to implement callback functions with the same signature.
The process differs for callbacks that are added to the register_device_request structure and passed with the register_ped call. For these callbacks, see Implement callbacks with register_device_request.
Steps
To use these callbacks:
1. Implement the callback with the same signature. In the following example we will implement the initialization_CB callback.
void initialization_CB(void * result, void * echo_struct){
    app_context_t * sPOS = (app_context_t *)echo_struct;
    sPOS->pending_callback = 0; // mark handled
}
2. Pass initialization_CB as a parameter of the init_library call.
ADYLibraryResult result = init_library(initReq, initialization_CB, sPOS);
|
__label__pos
| 0.980168 |
      SUBROUTINE CUPGTR( UPLO, N, AP, TAU, Q, LDQ, WORK, INFO )
*
*  -- LAPACK routine (version 3.1) --
*     Univ. of Tennessee, Univ. of California Berkeley and NAG Ltd..
*     November 2006
*
*     .. Scalar Arguments ..
      CHARACTER          UPLO
      INTEGER            INFO, LDQ, N
*     ..
*     .. Array Arguments ..
      COMPLEX            AP( * ), Q( LDQ, * ), TAU( * ), WORK( * )
*     ..
*
*  Purpose
*  =======
*
*  CUPGTR generates a complex unitary matrix Q which is defined as the
*  product of n-1 elementary reflectors H(i) of order n, as returned by
*  CHPTRD using packed storage:
*
*  if UPLO = 'U', Q = H(n-1) . . . H(2) H(1),
*
*  if UPLO = 'L', Q = H(1) H(2) . . . H(n-1).
*
*  Arguments
*  =========
*
*  UPLO    (input) CHARACTER*1
*          = 'U': Upper triangular packed storage used in previous
*                 call to CHPTRD;
*          = 'L': Lower triangular packed storage used in previous
*                 call to CHPTRD.
*
*  N       (input) INTEGER
*          The order of the matrix Q. N >= 0.
*
*  AP      (input) COMPLEX array, dimension (N*(N+1)/2)
*          The vectors which define the elementary reflectors, as
*          returned by CHPTRD.
*
*  TAU     (input) COMPLEX array, dimension (N-1)
*          TAU(i) must contain the scalar factor of the elementary
*          reflector H(i), as returned by CHPTRD.
*
*  Q       (output) COMPLEX array, dimension (LDQ,N)
*          The N-by-N unitary matrix Q.
*
*  LDQ     (input) INTEGER
*          The leading dimension of the array Q. LDQ >= max(1,N).
*
*  WORK    (workspace) COMPLEX array, dimension (N-1)
*
*  INFO    (output) INTEGER
*          = 0:  successful exit
*          < 0:  if INFO = -i, the i-th argument had an illegal value
*
*  =====================================================================
*
*     .. Parameters ..
      COMPLEX            CZERO, CONE
      PARAMETER          ( CZERO = ( 0.0E+0, 0.0E+0 ),
     $                   CONE = ( 1.0E+0, 0.0E+0 ) )
*     ..
*     .. Local Scalars ..
      LOGICAL            UPPER
      INTEGER            I, IINFO, IJ, J
*     ..
*     .. External Functions ..
      LOGICAL            LSAME
      EXTERNAL           LSAME
*     ..
*     .. External Subroutines ..
      EXTERNAL           CUNG2L, CUNG2R, XERBLA
*     ..
*     .. Intrinsic Functions ..
      INTRINSIC          MAX
*     ..
*     .. Executable Statements ..
*
*     Test the input arguments
*
      INFO = 0
      UPPER = LSAME( UPLO, 'U' )
      IF( .NOT.UPPER .AND. .NOT.LSAME( UPLO, 'L' ) ) THEN
         INFO = -1
      ELSE IF( N.LT.0 ) THEN
         INFO = -2
      ELSE IF( LDQ.LT.MAX( 1, N ) ) THEN
         INFO = -6
      END IF
      IF( INFO.NE.0 ) THEN
         CALL XERBLA( 'CUPGTR', -INFO )
         RETURN
      END IF
*
*     Quick return if possible
*
      IF( N.EQ.0 )
     $   RETURN
*
      IF( UPPER ) THEN
*
*        Q was determined by a call to CHPTRD with UPLO = 'U'
*
*        Unpack the vectors which define the elementary reflectors and
*        set the last row and column of Q equal to those of the unit
*        matrix
*
         IJ = 2
         DO 20 J = 1, N - 1
            DO 10 I = 1, J - 1
               Q( I, J ) = AP( IJ )
               IJ = IJ + 1
   10       CONTINUE
            IJ = IJ + 2
            Q( N, J ) = CZERO
   20    CONTINUE
         DO 30 I = 1, N - 1
            Q( I, N ) = CZERO
   30    CONTINUE
         Q( N, N ) = CONE
*
*        Generate Q(1:n-1,1:n-1)
*
         CALL CUNG2L( N-1, N-1, N-1, Q, LDQ, TAU, WORK, IINFO )
*
      ELSE
*
*        Q was determined by a call to CHPTRD with UPLO = 'L'.
*
*        Unpack the vectors which define the elementary reflectors and
*        set the first row and column of Q equal to those of the unit
*        matrix
*
         Q( 1, 1 ) = CONE
         DO 40 I = 2, N
            Q( I, 1 ) = CZERO
   40    CONTINUE
         IJ = 3
         DO 60 J = 2, N
            Q( 1, J ) = CZERO
            DO 50 I = J + 1, N
               Q( I, J ) = AP( IJ )
               IJ = IJ + 1
   50       CONTINUE
            IJ = IJ + 2
   60    CONTINUE
         IF( N.GT.1 ) THEN
*
*           Generate Q(2:n,2:n)
*
            CALL CUNG2R( N-1, N-1, N-1, Q( 2, 2 ), LDQ, TAU, WORK,
     $                   IINFO )
         END IF
      END IF
      RETURN
*
*     End of CUPGTR
*
      END
|
__label__pos
| 0.999991 |
Implementation of the hash code method
Hello sir/madam,
How do I implement the hashCode() and equals() methods on a class? Kindly elaborate...
I am having a problem in the implementation class of the ManagedConnectionFactory of the J2EE connector architecture specification.
Regards
Anuradha
anuchoudhry asked:
doronb commented:
Hi,
The hashCode() method must never be implemented by:
return super.hashCode();
because then you are simply calling hashCode() on the class that your class inherits from; the inherited super.hashCode method would be invoked anyway, so explicitly invoking the
super method is not going to change anything and supply a better hashCode than the super class would have.
The 2nd thing to remember is that most objects inherit their hashCode implementation from java.lang.Object, which simply returns the memory location of the java object in question. This simple implementation is
good enough if the state which the object is in when you're asking for the hashCode should NEVER determine the hashCode value itself!!!
Most objects that contain data relevant to the hashCode (for example, objects that are keys or ID's) should not allow the data to change once the hashCode has been calculated and used (to store the object as a key in
a HashMap for instance). Changing the data for the object would render the hashCode value incompatible with the data.
In some cases, objects can re-calculate their hashCode based on their current data but this scenario is too complicated to explain right now, so lets stick with objects that calculate their hashCode only once and do not
allow their data to change once the hashCode has been calculated.
The code in my next post shows how an object can hold data used when calculating the hashCode. Once the hashCode has been calculated, the data members can never be changed!! Since the hashCode is determined by the data members, the equals method is also added to verify that two instances are equal if they contain the same data in their data members.
continued --->
CEHJ commented:
The hashCode() method can be implemented quite simply by:
return super.hashCode();
As far as the equals() method is concerned, since I'm not familiar with this interface, I can't really help. But generally, what you need to do is decide which fields are important to the basic functionality. It is these fields that will distinguish one instance from another. You should compare these individually and make the equals() method reflect this. The same fields, of course, would be important in supplying a toString() method, which you should probably also implement.
doronb commented:
The code:
package <your-package>;
import java.io.Serializable;
public class RemoteClientId implements Serializable {
private String clientIpAddress;
private long timeStampKey;
private transient int hashCodeValue;
public RemoteClientId() {
clientIpAddress = null;
timeStampKey = 0;
hashCodeValue = 0;
}
public RemoteClientId(String ip, long timeStamp) {
setClientIpAddress(ip);
setTimeStampKey(timeStamp);
hashCodeValue = calcHashCode();
}
private int calcHashCode() {
int result = hashCodeValue;
try {
byte[] ba1 = clientIpAddress.getBytes();
byte[] ba2 = Long.toBinaryString(timeStampKey).getBytes();
int i;
int k = 1;
for (i = 0; i < ba1.length; i++) {
result += (int)ba1[i] * k;
k++;
}
for (i = 0; i < ba2.length; i++) {
result += (int)ba2[i] * k;
k++;
}
} catch (NullPointerException ex) {
if (clientIpAddress != null) {
// This address is invalid so create an illegal hashCode..
result = -1;
}
} catch (Exception ex) {
ex.printStackTrace();
}
return result;
}
public int hashCode() {
if (hashCodeValue == 0) {
hashCodeValue = calcHashCode();
}
return hashCodeValue;
}
public boolean equals(Object that) {
    if (that instanceof RemoteClientId) {
        RemoteClientId rci = (RemoteClientId)that;
        return (rci.timeStampKey != timeStampKey) ? false : ((clientIpAddress == null) ? (rci.clientIpAddress == null) : clientIpAddress.equals(rci.clientIpAddress));
    }
    return false;
}
public void setClientIpAddress(String ip) {
if (hashCodeValue == 0) {
clientIpAddress = ip;
}
}
public void setTimeStampKey(long timeStamp) {
if (hashCodeValue == 0) {
timeStampKey = timeStamp;
}
}
public String getClientIpAddress() {
return clientIpAddress;
}
public long getTimeStampKey() {
return timeStampKey;
}
}
Hope this helps,
Doron
yasser_helmy commented:
I think doron gave some very valuable info. I want to add something. If the equals() method returns true for any two objects, the hashCode() method of the two objects would logically return equal values.
The equals() method should compare the fields of the two objects and return true if and only if the fields are equal.
CEHJ commented:
>>If the equals() method returns true for any two objects, the hashCode() method of the two objects would logically return equal values.
That should only be the case if (o1 == o2) == true
otherwise you would be contradicting doronb's observation
>>The 2nd thing to remember is that most object inherit their hashCode implementation from the java.lang.Object which simply returns the memory location of the java object in question
since two different [but equal()] objects can't have the same reference
>>The equals() method should compare the fields of the two objects and return true if and only if the fields are equal.
Not necessarily *all* the fields. There may be some that are unequal that are of no importance at all to the client.
doronb commented:
Hi again,
A great source for discussion and examples about equals and hashCode is the book "EFFECTIVE JAVA Programming Language Guide by Joshua Bloch" displayed at http://java.sun.com/docs/books/effective/
Chapter 3, Item #8 talks EXACTLY about overriding the hashCode and equals methods.
Good luck,
Doron
doronb commented:
One concept that you must INSIST on is that if you're overriding hashCode, you should override equals and vice versa, NOT doing so is at the programmer's responsibility >;)
CEHJ commented:
>>
The hashCode() method must never be implemented by:
return super.hashCode();
>>
This is a little harsh doronb! Especially as the book you recommended by Joshua Bloch has the following code as an example for Chapter 5:
public final int hashCode() {
return super.hashCode();
}
However I agree that:
>>
super method is not going to change anything and supply a better hashCode than the super class would have.
>>
So you may as well allow the superinterface's hashCode() method to be called.
Since this has got a bit theoretical in parts, perhaps anuchoudhry would like clarification?
doronb commented:
Hi CEHJ,
Please note the "final" keyword in your example. I will explain its presence with an example:
class A {
public int hashCode() {
// Do some complex stuff, return value..
}
}
class B extends A {
public final int hashCode() {
// Stop anyone from overriding
// the hashCode method if
// inherited from class B!
return super.hashCode();
}
}
There is one more case where I would use super.hashCode and that is if my hashCode relies on my super-class hashCode. I can't imagine right now some problem that would make me do that, but I'm sure one can be found. :)
Doron
CEHJ commented:
Yes, I see. Of course! Thanks doronb.
Igor Bazarny commented:
Hi,
> This is a little harsh doronb! Especially as the book you recommended by Joshua Bloch has the following code as an example for Chapter 5:
Note that Chapter 5 is titled 'Substitutes for C Constructs'. I guess you took your sample from Item 21: Replace enum constructs with classes. In that specific context, Object equals() and hashCode() behavior is correct, because each 'enum' value has its own identity, so == comparison of such objects is OK (Object.equals() implementation is almost equivalent to pointer equality, it only doesn't work for null).
anuchoudhry,
Note that you need equals() and hashCode() implementation only if you need to compare your objects or use them as HashMap keys (or store in HashSet, it's almost the same). java.lang.Object provides default implementation of equals() which simply compares references, and hashCode() returns some kind of address value (you can't turn it into reference though).
Read Object.equals() and Object.hashCode() for the rules:
The equals method implements an equivalence relation:
equals() rules:
- It is reflexive: for any reference value x, x.equals(x) should return true.
- It is symmetric: for any reference values x and y, x.equals(y) should return true if and only if y.equals(x) returns true.
- It is transitive: for any reference values x, y, and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true.
- It is consistent: for any reference values x and y, multiple invocations of x.equals(y) consistently return true or consistently return false, provided no information used in equals comparisons on the object is modified.
- For any non-null reference value x, x.equals(null) should return false.
The general contract of hashCode is:
- Whenever it is invoked on the same object more than once during an execution of a Java application, the hashCode method must consistently return the same integer, provided no information used in equals comparisons on the object is modified. This integer need not remain consistent from one execution of an application to another execution of the same application.
- If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
- It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hashtables.
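For illustration only, a minimal fields-based class that satisfies both contracts could look like the sketch below (the Point class and its fields are just made-up examples, and java.util.Objects.hash is only available in newer Java versions):
import java.util.Objects;

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;               // reflexive
        if (!(o instanceof Point)) return false;  // null or wrong type -> false
        Point other = (Point) o;
        return x == other.x && y == other.y;      // compare the significant fields
    }

    @Override
    public int hashCode() {
        // equal objects must produce the same hash, so derive it from the same fields
        return Objects.hash(x, y);
    }
}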
Regards,
Igor Bazarny,
Brainbench MVP for Java 1
girionis commented:
No comment has been added lately, so it's time to clean up this TA.
I will leave a recommendation in the Cleanup topic area that this question is:
- points to doronb
Please leave any comments here within the
next seven days.
PLEASE DO NOT ACCEPT THIS COMMENT AS AN ANSWER !
girionis
Cleanup Volunteer
|
__label__pos
| 0.967337 |
Branch: jb (543 lines, 467 sloc, 13.782 kB)
/*
* QEMU Executable loader
*
* Copyright (c) 2006 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*
* Gunzip functionality in this file is derived from u-boot:
*
* (C) Copyright 2008 Semihalf
*
* (C) Copyright 2000-2005
* Wolfgang Denk, DENX Software Engineering, [email protected].
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of
* the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include "qemu-common.h"
#include "disas.h"
#include "sysemu.h"
#include "uboot_image.h"
#include <zlib.h>
/* return the size or -1 if error */
int get_image_size(const char *filename)
{
int fd, size;
fd = open(filename, O_RDONLY | O_BINARY);
if (fd < 0)
return -1;
size = lseek(fd, 0, SEEK_END);
close(fd);
return size;
}
/* return the size or -1 if error */
/* deprecated, because caller does not specify buffer size! */
int load_image(const char *filename, uint8_t *addr)
{
int fd, size;
fd = open(filename, O_RDONLY | O_BINARY);
if (fd < 0)
return -1;
size = lseek(fd, 0, SEEK_END);
lseek(fd, 0, SEEK_SET);
if (read(fd, addr, size) != size) {
close(fd);
return -1;
}
close(fd);
return size;
}
/* return the amount read, just like fread. 0 may mean error or eof */
int fread_targphys(target_phys_addr_t dst_addr, size_t nbytes, FILE *f)
{
uint8_t buf[4096];
target_phys_addr_t dst_begin = dst_addr;
size_t want, did;
while (nbytes) {
want = nbytes > sizeof(buf) ? sizeof(buf) : nbytes;
did = fread(buf, 1, want, f);
cpu_physical_memory_write_rom(dst_addr, buf, did);
dst_addr += did;
nbytes -= did;
if (did != want)
break;
}
return dst_addr - dst_begin;
}
/* returns 0 on error, 1 if ok */
int fread_targphys_ok(target_phys_addr_t dst_addr, size_t nbytes, FILE *f)
{
return fread_targphys(dst_addr, nbytes, f) == nbytes;
}
/* read()-like version */
int read_targphys(int fd, target_phys_addr_t dst_addr, size_t nbytes)
{
uint8_t buf[4096];
target_phys_addr_t dst_begin = dst_addr;
size_t want, did;
while (nbytes) {
want = nbytes > sizeof(buf) ? sizeof(buf) : nbytes;
did = read(fd, buf, want);
if (did != want) break;
cpu_physical_memory_write_rom(dst_addr, buf, did);
dst_addr += did;
nbytes -= did;
}
return dst_addr - dst_begin;
}
/* return the size or -1 if error */
int load_image_targphys(const char *filename,
target_phys_addr_t addr, int max_sz)
{
FILE *f;
size_t got;
f = fopen(filename, "rb");
if (!f) return -1;
got = fread_targphys(addr, max_sz, f);
if (ferror(f)) { fclose(f); return -1; }
fclose(f);
return got;
}
void pstrcpy_targphys(target_phys_addr_t dest, int buf_size,
const char *source)
{
static const uint8_t nul_byte = 0;
const char *nulp;
if (buf_size <= 0) return;
nulp = memchr(source, 0, buf_size);
if (nulp) {
cpu_physical_memory_write_rom(dest, (uint8_t *)source,
(nulp - source) + 1);
} else {
cpu_physical_memory_write_rom(dest, (uint8_t *)source, buf_size - 1);
cpu_physical_memory_write_rom(dest, &nul_byte, 1);
}
}
/* A.OUT loader */
struct exec
{
uint32_t a_info; /* Use macros N_MAGIC, etc for access */
uint32_t a_text; /* length of text, in bytes */
uint32_t a_data; /* length of data, in bytes */
uint32_t a_bss; /* length of uninitialized data area, in bytes */
uint32_t a_syms; /* length of symbol table data in file, in bytes */
uint32_t a_entry; /* start address */
uint32_t a_trsize; /* length of relocation info for text, in bytes */
uint32_t a_drsize; /* length of relocation info for data, in bytes */
};
#ifdef BSWAP_NEEDED
static void bswap_ahdr(struct exec *e)
{
bswap32s(&e->a_info);
bswap32s(&e->a_text);
bswap32s(&e->a_data);
bswap32s(&e->a_bss);
bswap32s(&e->a_syms);
bswap32s(&e->a_entry);
bswap32s(&e->a_trsize);
bswap32s(&e->a_drsize);
}
#else
#define bswap_ahdr(x) do { } while (0)
#endif
#define N_MAGIC(exec) ((exec).a_info & 0xffff)
#define OMAGIC 0407
#define NMAGIC 0410
#define ZMAGIC 0413
#define QMAGIC 0314
#define _N_HDROFF(x) (1024 - sizeof (struct exec))
#define N_TXTOFF(x) \
(N_MAGIC(x) == ZMAGIC ? _N_HDROFF((x)) + sizeof (struct exec) : \
(N_MAGIC(x) == QMAGIC ? 0 : sizeof (struct exec)))
#define N_TXTADDR(x) (N_MAGIC(x) == QMAGIC ? TARGET_PAGE_SIZE : 0)
#define _N_SEGMENT_ROUND(x) (((x) + TARGET_PAGE_SIZE - 1) & ~(TARGET_PAGE_SIZE - 1))
#define _N_TXTENDADDR(x) (N_TXTADDR(x)+(x).a_text)
#define N_DATADDR(x) \
(N_MAGIC(x)==OMAGIC? (_N_TXTENDADDR(x)) \
: (_N_SEGMENT_ROUND (_N_TXTENDADDR(x))))
int load_aout(const char *filename, target_phys_addr_t addr, int max_sz)
{
int fd, size, ret;
struct exec e;
uint32_t magic;
fd = open(filename, O_RDONLY | O_BINARY);
if (fd < 0)
return -1;
size = read(fd, &e, sizeof(e));
if (size < 0)
goto fail;
bswap_ahdr(&e);
magic = N_MAGIC(e);
switch (magic) {
case ZMAGIC:
case QMAGIC:
case OMAGIC:
if (e.a_text + e.a_data > max_sz)
goto fail;
lseek(fd, N_TXTOFF(e), SEEK_SET);
size = read_targphys(fd, addr, e.a_text + e.a_data);
if (size < 0)
goto fail;
break;
case NMAGIC:
if (N_DATADDR(e) + e.a_data > max_sz)
goto fail;
lseek(fd, N_TXTOFF(e), SEEK_SET);
size = read_targphys(fd, addr, e.a_text);
if (size < 0)
goto fail;
ret = read_targphys(fd, addr + N_DATADDR(e), e.a_data);
if (ret < 0)
goto fail;
size += ret;
break;
default:
goto fail;
}
close(fd);
return size;
fail:
close(fd);
return -1;
}
/* ELF loader */
static void *load_at(int fd, int offset, int size)
{
void *ptr;
if (lseek(fd, offset, SEEK_SET) < 0)
return NULL;
ptr = qemu_malloc(size);
if (read(fd, ptr, size) != size) {
qemu_free(ptr);
return NULL;
}
return ptr;
}
#define ELF_CLASS ELFCLASS32
#include "elf.h"
#define SZ 32
#define elf_word uint32_t
#define elf_sword int32_t
#define bswapSZs bswap32s
#include "elf_ops.h"
#undef elfhdr
#undef elf_phdr
#undef elf_shdr
#undef elf_sym
#undef elf_note
#undef elf_word
#undef elf_sword
#undef bswapSZs
#undef SZ
#define elfhdr elf64_hdr
#define elf_phdr elf64_phdr
#define elf_note elf64_note
#define elf_shdr elf64_shdr
#define elf_sym elf64_sym
#define elf_word uint64_t
#define elf_sword int64_t
#define bswapSZs bswap64s
#define SZ 64
#include "elf_ops.h"
/* return < 0 if error, otherwise the number of bytes loaded in memory */
int load_elf(const char *filename, int64_t address_offset,
uint64_t *pentry, uint64_t *lowaddr, uint64_t *highaddr)
{
int fd, data_order, host_data_order, must_swab, ret;
uint8_t e_ident[EI_NIDENT];
fd = open(filename, O_RDONLY | O_BINARY);
if (fd < 0) {
perror(filename);
return -1;
}
if (read(fd, e_ident, sizeof(e_ident)) != sizeof(e_ident))
goto fail;
if (e_ident[0] != ELFMAG0 ||
e_ident[1] != ELFMAG1 ||
e_ident[2] != ELFMAG2 ||
e_ident[3] != ELFMAG3)
goto fail;
#ifdef HOST_WORDS_BIGENDIAN
data_order = ELFDATA2MSB;
#else
data_order = ELFDATA2LSB;
#endif
must_swab = data_order != e_ident[EI_DATA];
#ifdef TARGET_WORDS_BIGENDIAN
host_data_order = ELFDATA2MSB;
#else
host_data_order = ELFDATA2LSB;
#endif
if (host_data_order != e_ident[EI_DATA])
return -1;
lseek(fd, 0, SEEK_SET);
if (e_ident[EI_CLASS] == ELFCLASS64) {
ret = load_elf64(fd, address_offset, must_swab, pentry,
lowaddr, highaddr);
} else {
ret = load_elf32(fd, address_offset, must_swab, pentry,
lowaddr, highaddr);
}
close(fd);
return ret;
fail:
close(fd);
return -1;
}
static void bswap_uboot_header(uboot_image_header_t *hdr)
{
#ifndef HOST_WORDS_BIGENDIAN
bswap32s(&hdr->ih_magic);
bswap32s(&hdr->ih_hcrc);
bswap32s(&hdr->ih_time);
bswap32s(&hdr->ih_size);
bswap32s(&hdr->ih_load);
bswap32s(&hdr->ih_ep);
bswap32s(&hdr->ih_dcrc);
#endif
}
#define ZALLOC_ALIGNMENT 16
static void *zalloc(void *x, unsigned items, unsigned size)
{
void *p;
size *= items;
size = (size + ZALLOC_ALIGNMENT - 1) & ~(ZALLOC_ALIGNMENT - 1);
p = qemu_malloc(size);
return (p);
}
static void zfree(void *x, void *addr)
{
qemu_free(addr);
}
#define HEAD_CRC 2
#define EXTRA_FIELD 4
#define ORIG_NAME 8
#define COMMENT 0x10
#define RESERVED 0xe0
#define DEFLATED 8
/* This is the maximum in uboot, so if a uImage overflows this, it would
* overflow on real hardware too. */
#define UBOOT_MAX_GUNZIP_BYTES 0x800000
static ssize_t gunzip(void *dst, size_t dstlen, uint8_t *src,
size_t srclen)
{
z_stream s;
ssize_t dstbytes;
int r, i, flags;
/* skip header */
i = 10;
flags = src[3];
if (src[2] != DEFLATED || (flags & RESERVED) != 0) {
puts ("Error: Bad gzipped data\n");
return -1;
}
if ((flags & EXTRA_FIELD) != 0)
i = 12 + src[10] + (src[11] << 8);
if ((flags & ORIG_NAME) != 0)
while (src[i++] != 0)
;
if ((flags & COMMENT) != 0)
while (src[i++] != 0)
;
if ((flags & HEAD_CRC) != 0)
i += 2;
if (i >= srclen) {
puts ("Error: gunzip out of data in header\n");
return -1;
}
s.zalloc = zalloc;
s.zfree = zfree;
r = inflateInit2(&s, -MAX_WBITS);
if (r != Z_OK) {
printf ("Error: inflateInit2() returned %d\n", r);
return (-1);
}
s.next_in = src + i;
s.avail_in = srclen - i;
s.next_out = dst;
s.avail_out = dstlen;
r = inflate(&s, Z_FINISH);
if (r != Z_OK && r != Z_STREAM_END) {
printf ("Error: inflate() returned %d\n", r);
return -1;
}
dstbytes = s.next_out - (unsigned char *) dst;
inflateEnd(&s);
return dstbytes;
}
/* Load a U-Boot image. */
int load_uimage(const char *filename, target_ulong *ep, target_ulong *loadaddr,
int *is_linux)
{
int fd;
int size;
uboot_image_header_t h;
uboot_image_header_t *hdr = &h;
uint8_t *data = NULL;
int ret = -1;
fd = open(filename, O_RDONLY | O_BINARY);
if (fd < 0)
return -1;
size = read(fd, hdr, sizeof(uboot_image_header_t));
if (size < 0)
goto out;
bswap_uboot_header(hdr);
if (hdr->ih_magic != IH_MAGIC)
goto out;
/* TODO: Implement other image types. */
if (hdr->ih_type != IH_TYPE_KERNEL) {
fprintf(stderr, "Can only load u-boot image type \"kernel\"\n");
goto out;
}
switch (hdr->ih_comp) {
case IH_COMP_NONE:
case IH_COMP_GZIP:
break;
default:
fprintf(stderr,
"Unable to load u-boot images with compression type %d\n",
hdr->ih_comp);
goto out;
}
/* TODO: Check CPU type. */
if (is_linux) {
if (hdr->ih_os == IH_OS_LINUX)
*is_linux = 1;
else
*is_linux = 0;
}
*ep = hdr->ih_ep;
data = qemu_malloc(hdr->ih_size);
if (read(fd, data, hdr->ih_size) != hdr->ih_size) {
fprintf(stderr, "Error reading file\n");
goto out;
}
if (hdr->ih_comp == IH_COMP_GZIP) {
uint8_t *compressed_data;
size_t max_bytes;
ssize_t bytes;
compressed_data = data;
max_bytes = UBOOT_MAX_GUNZIP_BYTES;
data = qemu_malloc(max_bytes);
bytes = gunzip(data, max_bytes, compressed_data, hdr->ih_size);
qemu_free(compressed_data);
if (bytes < 0) {
fprintf(stderr, "Unable to decompress gzipped image!\n");
goto out;
}
hdr->ih_size = bytes;
}
cpu_physical_memory_write_rom(hdr->ih_load, data, hdr->ih_size);
if (loadaddr)
*loadaddr = hdr->ih_load;
ret = hdr->ih_size;
out:
if (data)
qemu_free(data);
close(fd);
return ret;
}
|
__label__pos
| 0.99388 |
How do you guys handle package management?
asked 2020-02-18 06:31:45 -0500 by rezenders
When building a robot that uses several ROS packages and regular packages, how do you guys usually manage all dependencies, building, branch control, running tests, and deploy?
Docker? Spack? Which tools do you guys find interesting?
Comments
Docker for all of DevOps
viktor.holova (2020-02-18 07:00:43 -0500)
You may want to restrict the scope here. Releases, deployment, CI, CM, etc are all topics on their own. We could write novels on this topic so you should narrow into one thing to ask. Please edit your question.
stevemacenski (2020-02-18 10:48:13 -0500)
|
__label__pos
| 0.984548 |
score:2
Why does it currently behave as it does?
<Rand /> doesn't call your component function. It just calls React.createElement to create the React element (not an instance of it). Your component function is used to render an element instance, if and when you use it. In your "working" example you're doing:
<>{Math.random()}</>
...which calls Math.random and uses its result as text (not a component) within the fragment.
But your "not working" example just does:
<Rand />
The element is created, but not used, and your function isn't called. The "your function isn't called" part may be surprising — it was to me when I started using React — but it's true:
const { Fragment } = React;
function Rand() {
console.log("Rand called");
return <Fragment>{Math.random()}</Fragment>;
}
console.log("before");
const element = <Rand />;
console.log("after");
// Wait a moment before *using* it
setTimeout(() => {
ReactDOM.render(
element,
document.getElementById("root")
);
}, 1000);
<div id="root"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/17.0.2/umd/react.development.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/17.0.2/umd/react-dom.development.js"></script>
How can I prevent in both cases to re-render/re-initialize the component?
If you do what you've done in your example, which is to take the mounted component out of the tree entirely, you're unmounting the component instance; when you put it back, your function will get called again, so you'll get a new value. (This is also why the version with the counter doesn't exhibit this behavior: the component instance remained mounted.)
If you want to memoize what it shows, one approach is to pass that to it as a prop, and memoize what you pass it:
const { useState, useMemo, Fragment } = React;
function Rand({text}) {
return <Fragment>{text}</Fragment>;
}
const App = () => {
const [show, setShow] = useState(true);
const working = useMemo(() => <Fragment>{Math.random()}</Fragment>, []);
const randText = useMemo(() => String(Math.random()), []);
return(
<Fragment>
<button
onClick={() => {
setShow(!show);
}}>
{show?"Hide":"Show"}
</button>
<br />
Working:
{show && working}
<br />
Also Working Now:
{show && <Rand text={randText} />}
</Fragment>
);
}
ReactDOM.render(
<App />,
document.getElementById("root")
);
<div id="root"></div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/17.0.2/umd/react.development.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/17.0.2/umd/react-dom.development.js"></script>
|
__label__pos
| 0.999403 |
Crate spake2
An implementation of the SPAKE2 password-authenticated key-exchange algorithm
This library implements the SPAKE2 password-authenticated key exchange ("PAKE") algorithm. This allows two parties, who share a weak password, to safely derive a strong shared secret (and therefore build an encrypted+authenticated channel).
A passive attacker who eavesdrops on the connection learns no information about the password or the generated secret. An active attacker (man-in-the-middle) gets exactly one guess at the password, and unless they get it right, they learn no information about the password or the generated secret. Each execution of the protocol enables one guess. The use of a weak password is made safer by the rate-limiting of guesses: no off-line dictionary attack is available to the network-level attacker, and the protocol does not depend upon having previously-established confidentiality of the network (unlike e.g. sending a plaintext password over TLS).
The protocol requires the exchange of one pair of messages, so only one round trip is necessary to establish the session key. If key-confirmation is necessary, that will require a second round trip.
All messages are bytestrings. For the default security level (using the Ed25519 elliptic curve, roughly equivalent to an 128-bit symmetric key), the message is 33 bytes long.
This implementation is generic over a Group, which defines the cyclic group to use, the functions which convert group elements and scalars to and from bytestrings, and the three distinctive group elements used in the blinding process. Only one such Group is implemented, named Ed25519Group, which provides fast operations and high security, and is compatible with my python implementation.
What Is It Good For?
PAKE can be used in a pairing protocol, like the initial version of Firefox Sync (the one with J-PAKE), to introduce one device to another and help them share secrets. In this mode, one device creates a random code, the user copies that code to the second device, then both devices use the code as a one-time password and run the PAKE protocol. Once both devices have a shared strong key, they can exchange other secrets safely.
PAKE can also be used (carefully) in a login protocol, where SRP is perhaps the best-known approach. Traditional non-PAKE login consists of sending a plaintext password through a TLS-encrypted channel, to a server which then checks it (by hashing/stretching and comparing against a stored verifier). In a PAKE login, both sides put the password into their PAKE protocol, and then confirm that their generated key is the same. This nominally does not require the initial TLS-protected channel. However note that it requires other, deeper design considerations (the PAKE protocol must be bound to whatever protected channel you end up using, else the attacker can wait for PAKE to complete normally and then steal the channel), and is not simply a drop-in replacement. In addition, the server cannot hash/stretch the password very much (see the note on "Augmented PAKE" below), so unless the client is willing to perform key-stretching before running PAKE, the server's stored verifier will be vulnerable to a low-cost dictionary attack.
Usage
Add the spake2 dependency to your Cargo.toml:
[dependencies]
spake2 = "0.1"
and this to your crate root:
extern crate spake2;
Alice and Bob both initialize their SPAKE2 instances with the same (weak) password. They will exchange messages to (hopefully) derive a shared secret key. The protocol is symmetric: for each operation that Alice does, Bob will do the same.
However, there are two roles in the SPAKE2 protocol, "A" and "B". The two sides must agree ahead of time which one will play which role (the messages they generate depend upon which side they play). There are two separate constructor functions, start_a() and start_b(), and a complete interaction will use one of each (one start_a on one computer, and one start_b on the other computer).
Each instance of a SPAKE2 protocol uses a set of shared parameters. These include a group, a generator, and a pair of arbitrary group elements. This library comes with a single pre-generated parameter set, but could be extended with others.
You start by calling start_a() (or _b) with the password and identity strings for both sides. This gives you back a state object and the first message, which you must send to your partner. Once you receive the corresponding inbound message, you pass it into the state object (consuming both in the process) by calling s.finish(), and you get back the shared key as a bytestring.
The password and identity strings must each be wrapped in a "newtype", which is a simple struct that protects against swapping the different types of bytestrings.
Thus a client-side program start with:
use spake2::{Ed25519Group, Identity, Password, SPAKE2};
let (s1, outbound_msg) = SPAKE2::<Ed25519Group>::start_a(
    &Password::new(b"password"),
    &Identity::new(b"client id string"),
    &Identity::new(b"server id string"));
send(&outbound_msg);
let inbound_msg = receive();
let key1 = s1.finish(&inbound_msg).unwrap();
while the server-side might do:
use spake2::{Ed25519Group, Identity, Password, SPAKE2};
let (s1, outbound_msg) = SPAKE2::<Ed25519Group>::start_b(
    &Password::new(b"password"),
    &Identity::new(b"client id string"),
    &Identity::new(b"server id string"));
send(&outbound_msg);
let inbound_msg = receive();
let key2 = s1.finish(&inbound_msg).unwrap();
If both sides used the same password, and there is no man-in-the-middle, then key1 and key2 will be identical. If not, the two sides will get different keys. When one side encrypts with key1, and the other side attempts to decrypt with key2, they'll get nothing but garbled noise.
The shared key can be used as an HMAC key to provide data integrity on subsequent messages, or as an authenticated-encryption key (e.g. nacl.secretbox). It can also be fed into HKDF to derive other session keys as necessary.
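As an illustration only (this is not part of the spake2 crate's API), deriving a sub-key with the separate hkdf and sha2 crates might look roughly like the following sketch; crate versions and exact signatures may differ, and the info label is an arbitrary, application-chosen string:
use hkdf::Hkdf;
use sha2::Sha256;

fn derive_subkey(session_key: &[u8]) -> [u8; 32] {
    // No salt: the SPAKE2 output is already a uniformly random secret.
    let hk = Hkdf::<Sha256>::new(None, session_key);
    let mut okm = [0u8; 32];
    hk.expand(b"my-app encryption key v1", &mut okm)
        .expect("32 bytes is a valid HKDF-SHA256 output length");
    okm
}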
The SPAKE2 instances, and the messages they create, are single-use. Create a new one for each new session. finish consumes the instance.
Symmetric Usage
A single SPAKE2 instance must be used asymmetrically: the two sides must somehow decide (ahead of time) which role they will each play. The implementation includes the side identifier in the exchanged message to guard against a start_a talking to another start_a. Typically a "client" will take on the A role, and the "server" will be B.
This is a nuisance for more egalitarian protocols, where there's no clear way to assign these roles ahead of time. In this case, use start_symmetric() on both sides. This uses a different set of parameters (so it is not interoperable with start_a or start_b), but should otherwise behave the same way. The symmetric mode uses only one identity string, not two.
Carol does:
use spake2::{Ed25519Group, Identity, Password, SPAKE2};
let (s1, outbound_msg) = SPAKE2::<Ed25519Group>::start_symmetric(
    &Password::new(b"password"),
    &Identity::new(b"shared id string"));
send(&outbound_msg);
let inbound_msg = receive();
let key1 = s1.finish(&inbound_msg).unwrap();
Dave does exactly the same:
use spake2::{Ed25519Group, Identity, Password, SPAKE2};
let (s1, outbound_msg) = SPAKE2::<Ed25519Group>::start_symmetric(
    &Password::new(b"password"),
    &Identity::new(b"shared id string"));
send(&outbound_msg);
let inbound_msg = receive();
let key1 = s1.finish(&inbound_msg).unwrap();
Identifier Strings
The SPAKE2 protocol includes a pair of "identity strings" idA and idB that are included in the final key-derivation hash. This binds the key to a single pair of parties, or for some specific purpose.
For example, when user "alice" logs into "example.com", both sides should set idA = b"alice" and idB = b"example.com". This prevents an attacker from substituting messages from unrelated login sessions (other users on the same server, or other servers for the same user).
This also makes sure the session is established with the correct service. If Alice has one password for "example.com" but uses it for both login and file-transfer services, idB should be different for the two services. Otherwise if Alice is simultaneously connecting to both services, and attacker could rearrange the messages and cause her login client to connect to the file-transfer server, and vice versa.
idA and idB must be bytestrings (slices of <u8>).
start_symmetric uses a single idSymmetric string, instead of idA and idB. Both sides must provide the same idSymmetric, or leave it empty.
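Putting the login example above into code (the password and identity values here are purely illustrative):
use spake2::{Ed25519Group, Identity, Password, SPAKE2};

// Client "alice" logging in to "example.com"; the server does the same
// with start_b and the same two identity strings.
let (state, outbound_msg) = SPAKE2::<Ed25519Group>::start_a(
    &Password::new(b"alice's password"),
    &Identity::new(b"alice"),         // idA: the user
    &Identity::new(b"example.com"));  // idB: the service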
Serialization
Sometimes, you can't hold the SPAKE2 instance in memory for the whole negotiation: perhaps all your program state is stored in a database, and nothing lives in RAM for more than a few moments.
Unfortunately the Rust implementation does not yet provide serialization of the state object. A future version should correct this.
Security
This library is probably not constant-time, and does not protect against timing attacks. Do not allow attackers to measure how long it takes you to create or respond to a message. This matters somewhat less for pairing protocols, because their passwords are single-use randomly-generated keys, so an attacker has much less to work with.
This library depends upon a strong source of random numbers. Do not use it on a system where os.urandom() is weak.
Speed
To run the built-in speed tests, just run cargo bench.
SPAKE2 consists of two phases, separated by a single message exchange. The time these phases take is split roughly 50/50. On my 2.8GHz Core-i7 (i7-7600U) cpu, the built-in Ed25519Group parameters take about 112 microseconds for each phase, and the message exchanged is 33 bytes long.
Testing
Run cargo test to run the built-in test suite.
History
The protocol was described as "PAKE2" in "cryptobook" from Dan Boneh and Victor Shoup. This is a form of "SPAKE2", defined by Abdalla and Pointcheval at RSA 2005. Additional recommendations for groups and distinguished elements were published in Ladd's IETF draft.
The Ed25519 implementation uses code adapted from Daniel Bernstein (djb), Matthew Dempsky, Daniel Holth, Ron Garret, with further optimizations by Brian Warner. The "arbitrary element" computation, which must be the same for both participants, is from python-pure25519 version 0.5.
The Boneh/Shoup chapter that defines PAKE2 also defines an augmented variant named "PAKE2+", which changes one side (typically a server) to record a derivative of the password instead of the actual password. In PAKE2+, a server compromise does not immediately give access to the passwords: instead, the attacker must perform an offline dictionary attack against the stolen data before they can learn the passwords. PAKE2+ support is planned, but not yet implemented.
The security of the symmetric case was proved by Kobara/Imai in 2003, and uses different (slightly weaker?) reductions than that of the asymmetric form. See also Mike Hamburg's analysis from 2015.
Brian Warner first wrote the Python version in July 2010. He wrote this Rust version in in May 2017.
Structs
Ed25519Group
Identity
Password
SPAKE2
SPAKEErr
Enums
ErrorType
Traits
Group
|
__label__pos
| 0.698057 |
Codable Tuples
Swift doesn't let you conform to Codable when you have a variable that's a tuple, and from what I looked up, JSON on its own doesn't support tuples.
So something like the below won't work:
struct Todo : Codable {
    var preview : (title: String, subject: String?)
    init(preview: (title: String, subject: String?)) {
        self.preview = preview
    }
}
// ERROR: Type 'Todo' does not conform to protocol 'Decodable'
// ERROR: Type 'Todo' does not conform to protocol 'Encodable'
Would the swift community discuss ways to support this via a standard delimiter? I suggest using the pipe operator ' | ' to accurately depict the notion of tuples.
No, arrays are not an acceptable way of encoding into JSON because it's not reflective of the structure we are trying to represent. Encoding the variables into separate JSON values is also not reflective of how tuple values are coupled with each other, and besides, I see this being a disaster for developers.
If you are for this, along these same lines, certain things that may come up in the discussion are below.
1. How to support optional variables that are in codable tuples. The mechanism must account for nil tuple values to exist in the edges and of course anywhere in the middle. This I answer in the next point.
2. Should there be naming? Tuples are anonymous types in Swift, so the default naming is the order they are initialized in, with names given if needed. Personally, I think there should be no naming in the JSON encoded value to keep it lightweight, but there of course needs to be a way to track the order. Originally, I mentioned the use of the pipe operator, so it makes sense to use that operator followed by the order the variable is in. So the Todo in JSON would look like below (ignore the implicit unwrapping).
var todoOne = Todo.init(preview: (title: "Hello World", subject: "Intro"))
var encodedTodoOne = try? JSONEncoder().encode(todoOne)
let todoOneString = String.init(data: encodedTodoOne!, encoding: .utf8)
// result of TodoOne
// { preview: |0 Hello World |1 Intro }
var todoTwo = Todo.init(preview: (title: "Foo Bar", subject: nil))
var encodedTodoTwo = try? JSONEncoder().encode(todoTwo)
let todoTwoString = String.init(data: encodedTodoTwo!, encoding: .utf8)
// result TodoTwo :
// { preview : |0 Foo Bar }
// ALT: { preview : |0 Foo Bar |1 }
3. Theres more stuff like custom types in Tuples which I don't think would cause any issues.
If there is support, I can write a full draft that highlights the complete motivation and reasoning beyond what I simply said above.
This touches on a much broader topic than just Codable: allowing tuples to conform to protocols. We should focus on that front rather than trying to hack something into the compiler to allow for just Codable synthesis on tuples.
Correct me if I'm wrong, but I believe the core team wanted tuples to be anonymous types.
Can they remain anonymous types while still conforming to protocols? My assumption is no because Tuples are generated with gybs. Sorry I'm not too familiar with how gybs work under the hood and need to read more on how LLVM parses swift. Would really love if you have a detailed reference (that goes over compiler theory as well) I can look at from beginner since the LLVM project is so large to start from just reading the documentation.
Yes, the hope is that tuples could conform to protocols in future. This is covered in the generics manifesto as "Extensions of Structural Types".
Tuples are not generated by gyb. They are a part of the language. You may be thinking of the implementation of == for tuples, which does use gyb currently but would not necessarily in some future version of the language.
On a related note, does the Generics Manifesto assume that tuples can have 0, 1, 2 or more elements?
If not, that is if it assumes that single-element tuples continue to be prevented, how would eg the following example from the Generics Manifesto work for functions taking a single argument?
func apply<... Args, Result>(fn: (Args...) -> Result, // function taking some number of arguments and producing Result
args: (Args...)) -> Result { // tuple of arguments
return fn(args...) // expand the arguments in the tuple "args" into separate arguments
}
Seems to me that this code example, and especially the comment on the second line, assumes that tuples can have 0, 1, 2 or more elements.
EDIT: Two days later, I've re-asked this question in a separate thread.
I see this discussion is a year old, but I recently ran into this problem myself so hopefully it's OK to revive this. :slight_smile:
Instead of diving into the more open-ended question of whether tuples can/should conform to protocols, can this instead be solved by code generation? The quick solution to the example in the OP is to convert the tuple to a type, which seems to be something that the synthesizer could easily handle. Eg:
struct Todo : Codable {
    var preview : (title: String, subject: String?)
    init(preview: (title: String, subject: String?)) {
        self.preview = preview
    }
}
the synthesizer could generate:
struct Todo : Codable {
    var preview : (title: String, subject: String?)
    init(preview: (title: String, subject: String?)) {
        self.preview = preview
    }
    private struct _Preview: Codable {
        let title: String
        let subject: String?
    }
}
and then also generate the code to translate to/from the tuple to the type for decoding and encoding.
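Until something like that exists, the same bridging can be written by hand today. A rough, untested sketch of what the generated (or hand-written) conformance could look like:
struct Todo: Codable {
    var preview: (title: String, subject: String?)

    init(preview: (title: String, subject: String?)) {
        self.preview = preview
    }

    // Private mirror type that Codable synthesis already understands.
    private struct Preview: Codable {
        let title: String
        let subject: String?
    }

    private enum CodingKeys: String, CodingKey {
        case preview
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let boxed = try container.decode(Preview.self, forKey: .preview)
        self.preview = (title: boxed.title, subject: boxed.subject)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(Preview(title: preview.title, subject: preview.subject), forKey: .preview)
    }
}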
As I see it the difficulty with having any standardized way of Tuples implementing Codable is that there isn't an existing standard nor is there a clear (traditional) option to use.
Tuples are not a standard structure so many serializing formats will not have a structure that fits them. Even if you're talking about only JSON we run into problems. A Tuple could be serialized into an object, an array, or a String with the contents. Yes ,we could just pick one, but that decision would be arbitrary and not necessarily translate well to other formats.
Instead, I think the route to go is making it easy to do manually. If/when variadic generics drops, that should enable some structures that simplify this.
Also, one current option would be using a propertyWrapper
@propertyWrapper
struct MyTupleCoding: Codable {
    var wrappedValue: (String, String)

    init(wrappedValue: (String, String)) {
        self.wrappedValue = wrappedValue
    }

    init(from decoder: Decoder) throws {
        let serializedTuple = try String(from: decoder)
        let items = serializedTuple.split(separator: "|")
        guard items.count == 2 else {
            throw DecodingError.valueNotFound(Self.self, DecodingError.Context(codingPath: decoder.codingPath,
                debugDescription: "Expected \((String, String).self) but could not convert \(serializedTuple) to (String, String)"))
        }
        self.wrappedValue = (String(items[0]), String(items[1]))
    }

    func encode(to encoder: Encoder) throws {
        try "\(wrappedValue.0)|\(wrappedValue.1)".encode(to: encoder)
    }
}

struct MyStruct: Codable {
    @MyTupleCoding var preview: (String, String)
}

let serialized = try! JSONEncoder().encode(MyStruct(preview: ("hi", "there")))
let json = String(data: serialized, encoding: .utf8)!
print(json) // prints {"preview":"hi|there"}
let reversed = try! JSONDecoder().decode(MyStruct.self, from: serialized)
// it works!
That example has a logic error:
What if the data contains “|”?
It throws an error on decode.
At least it doesn’t unknowingly misparse the data, but the data is still ambiguous.
Just using the OP suggestion as the encoding. That's an example, not production code.
Sorry if I'm missing something obvious, but I thought tuples are equivalent to an anonymous record and can always be represented by a struct. Codable synthesis knows how to handle structs, therefore every tuple can also be handled.
A Tuple could be serialized into an object, an array, or a String with the contents.
With what I said above, I'm not sure I follow what you mean. Wouldn't this same issue also be present with serializing the contents of a struct? It's an implementation detail of the decoder/encoder for how to handle containers.
I can't speak to how it's implemented at the to compiler/runtime level.
As far as usage, if by that you mean you could always write an equivalent struct, then sure! but AFAIK at this point there's no way to convert a Tuple automatically to another Type.
According to the docs
A tuple type is a comma-separated list of types, enclosed in parentheses.
My understanding is the current design is conceptually only a generic container. It's intended to be generic and flexible. My concern isn't whether it's possible to synthesize codable, it's what to synthesize.
e.g. if you have var colorHexes: (textColorHex: String, backgroundHex: String). There are a lot of ways to serialize that and IMO there isn't one that's "obviously correct".
{ "colorHexes": "textColorHexValue|backgroundHexValue" }

or

{
    "colorHexes": {
        "textColorHex": "textColorHexValue",
        "backgroundHex": "backgroundHexValue"
    }
}

or

{
    "textColorHex": "textColorHexValue",
    "backgroundHex": "backgroundHexValue"
}

or

{
    "colorHexes": [
        "textColorHexValue",
        "backgroundHexValue"
    ]
}
My understanding is the current design is conceptually only a generic container . It's intended to be generic and flexible. My concern isn't whether it's possible to synthesize codable, it's what to synthesize.
IMHO treating it the same as a struct seems reasonable and practical. I do see your point though.
e.g. if you have var colorHexes (textColorHex: String, backgroundHex: String) .There are a lot of ways to serialize that and IMO there isn't one that's "obviously correct".
But isn't that an implementation detail of the decoder/encoder? If you have:
struct ColorHexes {
    let textColorHex: String
    let backgroundHex: String
}
you run into the same problem. It's up to the decoder/encoder to interpret containers.
Decoder/Encoder interpret how a structure is serialized. It doesn't define what the structure itself is.
e.g. Decoder has methods for
• container, e.g. "ColorHexes" : { "Key" : "Value" }
• singleValueContainer (e.g. "Key" : "Value"), and
• unkeyedContainer, e.g. "Value"
The implementer of Decodable decides which to use when finding its value.
I was thinking about a closely related feature a long time ago. I tend to agree with code generation in a very specific sense:
Auto-generating structs and boxing/unboxing code when we declare conformance of a tuple to a protocol.
A straw man syntax and example of what I mean:
extension (Double,Double): Hashable {}
// Expands to:
struct $Tuple_0_Double_1_Double: Hashable {
let _0: Double
let _1: Double
var $unwrap: (Double, Double) { (_0, _1) }
init(_ tuple: (Double, Double)) {
self._0 = tuple.0
self._1 = tuple.1
}
}
// `Hashable` conformance is synthesized for
// `$Tuple_0_Double_1_Double` struct in this example
// At use site:
var gisPlane1: [(Double,Double): Double] = [:]
// becomes:
var gisPlane: [$Tuple_0_Double_1_Double: Double] = [:]
// Then:
var coordinates = (31.333, 37.557)
gisPlane1[coordinates] = 2.317
// becomes:
var coordinates = (31.333, 37.557) // Stays tuple
gisPlane1[$Tuple_0_Double_1_Double(coordinates)] = 2.317
// and:
for coordinates in gisPlane1.keys {
print(coordinates)
}
// becomes:
for coordinates in gisPlane1.keys {
print(coordinates.$unwrap)
}
This feature works for Codable as well as any other protocol. The body of the extension above can define methods and computed properties to implement the protocol and the method body can refer to tuple elements as self.0, self.1 which will become self._0 and self._1 in the generated code above.
I need to work a bit on the interaction between this auto-box/unbox feature and the very important special case of supporting homogenous tuples acting as fixed-sized arrays.
Protocol conformance of any distinct structural type can only be declared once in the scope of a module. Otherwise, the struct will be redeclared. I am assuming (Double, Double) and (x: Double, y: Double) to be distinct, but this is debatable.
|
__label__pos
| 0.996527 |
I am currently migrating from ASP.NET to ASP.NET Core. As I am going along with it, I have an issue where my field search feature is not working. (It hits the breakpoint but then does not fetch the data from the JSON result.)
Please help me with this issue.
What I have tried:
public JsonResult GetFieldByName(string searchText)
{
    Share.Log("Dec field Controller -> GetFieldByName -> Start");
    bool isSuccess = false;
    List<hk_field> fieldList = new List<hk_field>();
    string message = string.Empty;
    try
    {
        fieldList = _adapter.GetFieldsByName(searchText);
        isSuccess = true;
    }
    catch (Exception ex)
    {
        Share.Log("Dec field Controller -> GetFieldByName -> " + ex.Message, Share.MESSAGE_TYPE_ERROR);
        message = ex.Message;
    }
    Share.Log("Dec field Controller -> GetFieldByName -> End");
    return Json(new { type = StatusType.Success.ToDescription(), message = message.ToString(), isSuccess = isSuccess, data = fieldList });
}
Posted
Comments
F-ES Sitecore 19-Jun-20 7:15am
I imagine the problem is in _adapter.GetFieldsByName which you haven't shown the code for so there isn't much we can help with.
|
__label__pos
| 0.567233 |
The object xyz must be a prefab in the project view (bug?)
I am using Network.Instantiate() from an object that is a prefab. I think this is a bug in Unity 4.6 ( Link to Unity forums thread ), but is there any way around this?
Well it’s most likely that the reference you pass to Network.Instantiate isn’t a reference to a prefab but an object in the scene. Maybe the reference you’re using got replaced by a reference to an instance. This could have happend in one of your scripts or if a self-referencing reference got instantiated. The latter is something most people have problems with to understand.
Your actual code, an explanation of where the script is attached, how you've set up your prefab reference, and where/when you execute Network.Instantiate would help. The information you provided has too many unresolved variables.
|
__label__pos
| 0.83489 |
Manu Manu - 7 months ago 48
Java Question
difference between newLine() and carriage return("\r")
What is the difference between newLine() and carriage return ("\r")?
Which one is best to use?
File f = new File(strFileGenLoc);
BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
rs = stmt.executeQuery("select * from jpdata");
while ( rs.next() )
{
bw.write(rs.getString(1)==null? "":rs.getString(1));
bw.newLine();
}
Answer
Assuming you mean this:
public static String newLine = System.getProperty("line.separator");
newLine is environment-agnostic; \r isn't.
So newLine will give you \r\n on Windows but \n in other environments.
However, you shouldn't use this in a JTextArea, and println will work fine with just \n on Windows.
Edit now that I've seen the code and your comment
In your situation, I think you should use your own constant: \r\n
File f = new File(strFileGenLoc);
BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
rs = stmt.executeQuery("select * from jpdata");
while ( rs.next() ) {
bw.write(rs.getString(1)==null? "":rs.getString(1));
bw.write("\r\n");
}
16 percent of what number is 180?
180 is 16% of 1125
Steps to solve "180 is 16 percent of what number?"
1. We have, 16% × x = 180
2. or, 16/100 × x = 180
3. Multiplying both sides by 100 and dividing both sides by 16,
we have x = 180 × 100/16
4. x = 1125
If you are using a calculator, simply enter 180×100÷16, which will give you the answer.
DmitryKlenov's blog
By DmitryKlenov, 5 years ago, in English
As there is no tutorial for Codeforces Beta Round #5 in English, I have decided to write this one. Some ideas may be similar to those expressed in the Russian tutorial by SkidanovAlex, but I hope everybody will find something useful in this problem analysis too.
Problems A and B.
Both are implementation problems. The only difficulty many participants faced was reading the data correctly. It is recommended to use gets(s) or getline(cin, s) in C++, or the readLine() method of the BufferedReader class in Java.
Problem C.
First of all, for each closing bracket in our string let's define 2 values:
• d[j] = position of corresponding open bracket, or -1 if closing bracket doesn't belong to any regular bracket sequence.
• c[j] = position of earliest opening bracket, such that substring s(c[j], j) (both boundaries are inclusive) is a regular bracket sequence. Let's consider c[j] to be -1 if closing bracket doesn't belong to any regular bracket sequence.
It can be seen that c[j] defines the starting position of the longest regular bracket sequence that ends at position j. So, having c[j], the answer for the problem can be calculated easily.
Both d[j] and c[j] can be found with the following algorithm, which uses a stack (a short C++ sketch follows the list below).
1. Iterate through the characters of the string.
2. If current character is opening bracket, put its position into the stack.
3. If current character is closing bracket, there are 2 subcases:
• Stack is empty - this means that the current closing bracket doesn't have a corresponding opening one. Hence, both d[j] and c[j] are equal to -1.
• Stack is not empty - we will have the position of the corresponding opening bracket on the top of the stack; let's put it into d[j] and remove it from the stack. Now it is obvious that c[j] is at least equal to d[j]. But possibly there is a better value for c[j]. To find this out, we just need to look at position d[j] - 1. If there is a closing bracket at this position, and c[d[j] - 1] is not -1, then we have two regular bracket sequences s(c[d[j] - 1], d[j] - 1) and s(d[j], j), which can be concatenated into one larger regular bracket sequence. So we set c[j] to c[d[j] - 1] in this case.
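A minimal C++ sketch of this pass (s is the input string; d and c are as defined above; reading the input and computing the final answer from c[] are left out):

stack<int> open;                          // positions of unmatched opening brackets
vector<int> d(s.size(), -1), c(s.size(), -1);
for (int j = 0; j < (int)s.size(); ++j) {
    if (s[j] == '(') {
        open.push(j);
    } else {                              // closing bracket
        if (open.empty()) continue;       // d[j] and c[j] stay -1
        d[j] = open.top(); open.pop();
        c[j] = d[j];
        if (d[j] > 0 && s[d[j] - 1] == ')' && c[d[j] - 1] != -1)
            c[j] = c[d[j] - 1];           // concatenate with the sequence ending right before d[j]
    }
}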
Problem D.
This problem can be solved by careful case handling. Let's construct an O(1) solution for it.
First of all, let's define two functions (a C++ sketch of both follows this list):
1. dist(speed, time) - calculates the distance that will be covered in the specified time if the car's current speed is speed. This function does not take the car's speed limit into account. It also assumes that the car is always driven with maximum acceleration a. The required distance is therefore equal to speed * time + a * time^2 / 2.
2. travelTime(distance, speed) - calculates the time required to travel the specified distance if the car's starting speed is equal to speed. This function also takes the car's speed limit into account.
We will have the following quadratic equation for time t: speed * t + a * t^2 / 2 = distance. This equation has exactly two different roots. Using Viete's formulas it can be concluded that one root of the equation is non-positive and the other is non-negative. Let's define the larger root of the equation as tAll. It would be the answer if there were no speed limit. To take the limit into account, let's find the time required to reach the car's maximum speed: tMax = (v - speed) / a. If tMax ≥ tAll, the function should just return tAll as the result. Otherwise the result is tMax hours to reach the car's maximal speed plus (distance - dist(speed, tMax)) / v hours to cover the remaining distance.
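A C++ sketch of both helpers under these definitions (a and v are the acceleration and the car's own speed limit from the statement, assumed accessible; sqrt comes from <cmath>; the quadratic is solved for its non-negative root):

double dist(double speed, double time) {              // ignores the speed limit
    return speed * time + a * time * time / 2;
}
double travelTime(double distance, double speed) {    // respects the speed limit v
    double tAll = (-speed + sqrt(speed * speed + 2 * a * distance)) / a;  // larger root
    double tMax = (v - speed) / a;                     // time needed to reach speed v
    if (tMax >= tAll) return tAll;
    return tMax + (distance - dist(speed, tMax)) / v;
}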
Having these functions, the solution will be the following:
1. If v ≤ w, answer is travelTime(l, 0).
2. Calculate tw = w / a - time, required to gain speed w.
3. Consider dw = dist(0, tw).
4. If dw ≥ d, we will pass the point where the sign is placed before we gain speed w. The answer for this case is travelTime(l, 0) as well.
5. Otherwise, we will gain speed w before the sign. Let's consider the segment of the road [dw, d]. We need to find the best strategy to drive it. It is obvious that we definitely should have speed w at both ends of this segment. Also we know that acceleration is equal to deceleration. Taking these facts into account, we can see that the speed in the optimal solution will be symmetrical with respect to the middle of the segment [dw, d]. Hence the answer for this case will be tw + 2 * travelTime(0.5 * (d - dw), w) + travelTime(l - d, w).
Problem E.
Let's reduce the problem from the circle to a straight line. Perform the following actions to do it (a short sketch follows the list):
1. Find the hill with the maximal height (if it is not unique, choose any).
2. Rotate the whole sequence in such a way that the hill with maximal height goes first.
3. For convenience, add one more hill with maximum height to the end of the sequence. (It will represent the first hill, which goes after the last in the circle order).
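Assuming the heights are stored in a vector<int> h of size n, the reduction can be sketched as (max_element and rotate come from <algorithm>):

int best = max_element(h.begin(), h.end()) - h.begin();
rotate(h.begin(), h.begin() + best, h.end());   // the highest hill becomes h[0]
h.push_back(h[0]);                              // extra copy of that hill at position n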
Now we have almost the same problem on a straight line. The one exception is that the first hill is now doubled.
General idea of the solution:
Consider a pair of hills that are visible from each other. Let's define the hill with the lower height (if heights are equal, the one with the lower position) as responsible for adding this pair to the answer.
From this point of view, hill x adds to the answer three kinds of hills as its pair:
• The first hill to the left of x which is strictly higher than x. (Let's define its position as l[x].)
• The first hill to the right of x which is strictly higher than x. (Let's call this hill y and define its position as r[x].)
• All hills that are as high as x and are located between x and y. (Let's define this count as c[x]).
Arrays r[x] and c[x] can be calculated by the following piece of code:
c[n] = 0;
for(int i = n - 1; i >= 0; --i) {
r[i] = i + 1;
while (r[i] < n && height[i] > height[r[i]]) r[i] = r[r[i]];
if (r[i] < n && height[i] == height[r[i]]) {
c[i] = c[r[i]] + 1;
r[i] = r[r[i]];
}
}
I am not going to prove here that it works in O(N) time, but believe me, it does :)
Pay attention that r[x] is undefined for hills with maximum height; this algorithm will find r[x] = n for such hills.
Array l[x] can be found in a similar way.
Having l[x], r[x] and c[x], it's not so difficult to calculate the answer (a sketch follows the list below). We should just notice that:
• Each hill will add c[x] pairs to the answer.
• Each hill lower than maximal will also add 2 pairs, (x, l[x]) and (x, r[x]), to the answer. The only corner case here is l[x] = 0 and r[x] = n, because (x, 0) and (x, n) are the same pair of hills in the original problem, where the hills form a circle.
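A sketch that simply transcribes the two observations above (hill n is the appended duplicate of hill 0, so only x = 0 .. n-1 are processed; h[0] holds the maximal height):

long long answer = 0;
for (int x = 0; x < n; ++x) {
    answer += c[x];
    if (h[x] < h[0])                                 // hills of maximal height only contribute c[x]
        answer += (l[x] == 0 && r[x] == n) ? 1 : 2;  // corner case: both pairs are the same hill
}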
5 years ago:
Excellent tutorial, especially the explanation of Problem C.
23 months ago:
Very nice, but what is the exact meaning of array c[]?
13 months ago:
Although I use the readLine() method of the BufferedReader class in Java to pass the online judge, how can I test my code locally? It always asks me for input and doesn't terminate. That's because the online judge system terminates if (line == null), but how do I terminate my input locally?
public void PostNote(string note)
{
    IChatClient clientCallback =
        OperationContext.Current.GetCallbackChannel<IChatClient>();
    string name = clientsAndNames[clientCallback];

    Console.WriteLine("{0}: {1}", name, note);

    // ToArray() makes copy of the collection. This avoids an
    // exception due to the collection being modified if we have
    // to disconnect a client part way through the loop.
    KeyValuePair<IChatClient, string>[] copiedNames = clientsAndNames.ToArray();

    foreach (KeyValuePair<IChatClient, string> client in copiedNames)
    {
        // Avoid sending the message back to the client that just sent
        // it - they already know what they just typed.
        if (client.Key != clientCallback)
        {
            Console.WriteLine("Sending note to {0}", client.Value);
            try
            {
                client.Key.NotePosted(name, note);
            }
            catch (Exception x)
            {
                Console.WriteLine("Error: {0}", x);
                DisconnectClient(client.Key);
            }
        }
    }
}
Here s the markup: <li id="PHGraph"> <div id="PHGraphDiv"> <table width="400px" cellspacing="0" cellpadding="0" style="border-width: 0"> <tr> <td style="background-color: #1077AD"> <span class="style2"> Price History Graph </span> </td> </tr> <tr> <td> <atlas:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <asp:Label ID="lblPHGraph" runat="server" Text="Label"> </asp:Label> </ContentTemplate> </atlas:UpdatePanel> <atlas:UpdateProgress runat="server" ID="Prog4"> <ProgressTemplate> Loading... </ProgressTemplate> </atlas:UpdateProgress> </td> </tr> </table> </div> </li> The <li> tag denotes this section as being part of an HTML list. The drag-and-drop mechanism, which I ll discuss later in the Using Atlas Client Controls for an Enhanced UI section, uses this. The markup defines a table with a header row containing the text Price History Graph and a second row containing the graph itself. The graph is implemented using a Label control, which is contained within the content template of an UpdatePanel control. The UpdatePanel control is contained within a <td> element, putting it within a cell in the table. The name of this Label control is lblPHGraph. As you probably remember, the entire page update is triggered by the user changing the contents of the text box to select a new stock ticker. This calls the DoUpdate() function on the server. Within this function, you ll find this line of C# code: lblPHGraph.Text = "<img src='PH.aspx ticker=" + TextBox1.Text + "&days=100' />";
Again, we begin by retrieving the callback interface for the current client. Remember, our chat server will usually have multiple clients attached, and this lets us discover which particular one is sending a note. The next line looks up the callback interface in the dictionary to find out what name this user originally passed to Connect this is why we were able to remove the argument we previously had on this method in which the caller passed her name. We remember her name from before we have to remember it to guarantee uniqueness and since we re remembering it, there s no need to make the client pass in the name every single time. This code then iterates through all the connected clients in the clientsAndNames dictionary, to deliver the new note to each client. It calls the NotePosted on the proxy. Notice that we wrapped this in exception-handling code. If a client becomes inaccessible because of a network failure, a crash, a machine failure, or a programming error that caused it to exit without remembering to call Disconnect, the proxy s NotePosted method will throw an exception. Our code catches this and removes the client from the list, to avoid trying to send it any more notes.
So, how do you go about actually running CMake The first step is to start a command prompt to orient you to your project directory (the directory containing the CMakeLists.txt file). From this directory, you can use the following line to build the project inside the source tree: cmake . On Windows, you might have to tell CMake to using MinGW for building using the -G command line option. This gives you the following command: cmake . -G "MinGW Makefiles" The . passed to CMake refers to the current directory. It tells CMake that this is the source directory. If you want to build outside the source, it is how you tell CMake what to build. Let s start from the project directory as before, but now build in a separate directory that you create: mkdir build cd build cmake .. Sometimes you might have to add -G "MinGW Makefiles" to the cmake command to get it to work properly. By building outside the source tree, you can see which files CMake creates and how the system works. One of the central files that can give you trouble is the CMakeCache.txt that CMake generates. If you want to change the generator, you need to remove this file to get CMake to regenerate the build system. CMake also creates a CMakeFiles directory, which contains many of the intermediate files created in the build process. However, the files generated by the meta-object compiler and the user interface compiler are not placed here. Instead, they are placed next to the files from which they are generated or, if building outside the source, in the corresponding location in the build directory.
std::vector::erase
From cppreference.com
(1) iterator erase( iterator pos );                                (until C++11)
    iterator erase( const_iterator pos );                          (since C++11)
(2) iterator erase( iterator first, iterator last );               (until C++11)
    iterator erase( const_iterator first, const_iterator last );   (since C++11)
Removes specified elements from the container.
1) Removes the element at pos.
2) Removes the elements in the range [first; last).
Invalidates iterators and references at or after the point of the erase, including the end() iterator.
The iterator pos must be valid and dereferenceable. Thus the end() iterator (which is valid, but is not dereferenceable) cannot be used as a value for pos.
Parameters
pos - iterator to the element to remove
first, last - range of elements to remove
Return value
Iterator following the last removed element.
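A common use of this return value is erasing elements while iterating, since the iterator at the erased position is invalidated; a minimal sketch (not part of the original reference example):

std::vector<int> v{0, 1, 2, 3, 4, 5};
for (auto it = v.begin(); it != v.end(); ) {
    if (*it % 2 == 0)
        it = v.erase(it);   // continue from the element after the erased one
    else
        ++it;
}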
Example
#include <vector>
#include <iostream>
int main( )
{
std::vector<int> c{0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
for (auto &i : c) {
std::cout << i << " ";
}
std::cout << '\n';
c.erase(c.begin());
for (auto &i : c) {
std::cout << i << " ";
}
std::cout << '\n';
c.erase(c.begin()+2, c.begin()+5);
for (auto &i : c) {
std::cout << i << " ";
}
std::cout << '\n';
}
Output:
0 1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9
1 2 6 7 8 9
Complexity
1) Linear in the distance between pos and the end of the container.
2) Linear in the distance between first and the end of the container.
See also
clear - clears the contents (public member function)
This document describes the current stable version of Celery (5.4). For development docs, go here.
Source code for celery.bin.multi
"""Start multiple worker instances from the command-line.
.. program:: celery multi
Examples
========
.. code-block:: console
$ # Single worker with explicit name and events enabled.
$ celery multi start Leslie -E
$ # Pidfiles and logfiles are stored in the current directory
$ # by default. Use --pidfile and --logfile argument to change
$ # this. The abbreviation %n will be expanded to the current
$ # node name.
$ celery multi start Leslie -E --pidfile=/var/run/celery/%n.pid
--logfile=/var/log/celery/%n%I.log
$ # You need to add the same arguments when you restart,
$ # as these aren't persisted anywhere.
$ celery multi restart Leslie -E --pidfile=/var/run/celery/%n.pid
--logfile=/var/log/celery/%n%I.log
$ # To stop the node, you need to specify the same pidfile.
$ celery multi stop Leslie --pidfile=/var/run/celery/%n.pid
$ # 3 workers, with 3 processes each
$ celery multi start 3 -c 3
celery worker -n celery1@myhost -c 3
celery worker -n celery2@myhost -c 3
celery worker -n celery3@myhost -c 3
$ # override name prefix when using range
$ celery multi start 3 --range-prefix=worker -c 3
celery worker -n worker1@myhost -c 3
celery worker -n worker2@myhost -c 3
celery worker -n worker3@myhost -c 3
$ # start 3 named workers
$ celery multi start image video data -c 3
celery worker -n image@myhost -c 3
celery worker -n video@myhost -c 3
celery worker -n data@myhost -c 3
$ # specify custom hostname
$ celery multi start 2 --hostname=worker.example.com -c 3
celery worker -n [email protected] -c 3
celery worker -n [email protected] -c 3
$ # specify fully qualified nodenames
$ celery multi start [email protected] [email protected] -c 3
$ # fully qualified nodenames but using the current hostname
$ celery multi start foo@%h bar@%h
$ # Advanced example starting 10 workers in the background:
$ # * Three of the workers processes the images and video queue
$ # * Two of the workers processes the data queue with loglevel DEBUG
$ # * the rest processes the default' queue.
$ celery multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data
-Q default -L:4,5 DEBUG
$ # You can show the commands necessary to start the workers with
$ # the 'show' command:
$ celery multi show 10 -l INFO -Q:1-3 images,video -Q:4,5 data
-Q default -L:4,5 DEBUG
$ # Additional options are added to each celery worker's command,
$ # but you can also modify the options for ranges of, or specific workers
$ # 3 workers: Two with 3 processes, and one with 10 processes.
$ celery multi start 3 -c 3 -c:1 10
celery worker -n celery1@myhost -c 10
celery worker -n celery2@myhost -c 3
celery worker -n celery3@myhost -c 3
$ # can also specify options for named workers
$ celery multi start image video data -c 3 -c:image 10
celery worker -n image@myhost -c 10
celery worker -n video@myhost -c 3
celery worker -n data@myhost -c 3
$ # ranges and lists of workers in options is also allowed:
$ # (-c:1-3 can also be written as -c:1,2,3)
$ celery multi start 5 -c 3 -c:1-3 10
celery worker -n celery1@myhost -c 10
celery worker -n celery2@myhost -c 10
celery worker -n celery3@myhost -c 10
celery worker -n celery4@myhost -c 3
celery worker -n celery5@myhost -c 3
$ # lists also works with named workers
$ celery multi start foo bar baz xuzzy -c 3 -c:foo,bar,baz 10
celery worker -n foo@myhost -c 10
celery worker -n bar@myhost -c 10
celery worker -n baz@myhost -c 10
celery worker -n xuzzy@myhost -c 3
"""
import os
import signal
import sys
from functools import wraps
import click
from kombu.utils.objects import cached_property
from celery import VERSION_BANNER
from celery.apps.multi import Cluster, MultiParser, NamespacedOptionParser
from celery.bin.base import CeleryCommand, handle_preload_options
from celery.platforms import EX_FAILURE, EX_OK, signals
from celery.utils import term
from celery.utils.text import pluralize
__all__ = ('MultiTool',)
USAGE = """\
usage: {prog_name} start <node1 node2 nodeN|range> [worker options]
{prog_name} stop <n1 n2 nN|range> [-SIG (default: -TERM)]
{prog_name} restart <n1 n2 nN|range> [-SIG] [worker options]
{prog_name} kill <n1 n2 nN|range>
{prog_name} show <n1 n2 nN|range> [worker options]
{prog_name} get hostname <n1 n2 nN|range> [-qv] [worker options]
{prog_name} names <n1 n2 nN|range>
{prog_name} expand template <n1 n2 nN|range>
{prog_name} help
additional options (must appear after command name):
* --nosplash: Don't display program info.
* --quiet: Don't show as much output.
* --verbose: Show more output.
* --no-color: Don't display colors.
"""
def main():
sys.exit(MultiTool().execute_from_commandline(sys.argv))
def splash(fun):
@wraps(fun)
def _inner(self, *args, **kwargs):
self.splash()
return fun(self, *args, **kwargs)
return _inner
def using_cluster(fun):
@wraps(fun)
def _inner(self, *argv, **kwargs):
return fun(self, self.cluster_from_argv(argv), **kwargs)
return _inner
def using_cluster_and_sig(fun):
@wraps(fun)
def _inner(self, *argv, **kwargs):
p, cluster = self._cluster_from_argv(argv)
sig = self._find_sig_argument(p)
return fun(self, cluster, sig, **kwargs)
return _inner
class TermLogger:
splash_text = 'celery multi v{version}'
splash_context = {'version': VERSION_BANNER}
#: Final exit code.
retcode = 0
def setup_terminal(self, stdout, stderr,
nosplash=False, quiet=False, verbose=False,
no_color=False, **kwargs):
self.stdout = stdout or sys.stdout
self.stderr = stderr or sys.stderr
self.nosplash = nosplash
self.quiet = quiet
self.verbose = verbose
self.no_color = no_color
def ok(self, m, newline=True, file=None):
self.say(m, newline=newline, file=file)
return EX_OK
def say(self, m, newline=True, file=None):
print(m, file=file or self.stdout, end='\n' if newline else '')
def carp(self, m, newline=True, file=None):
return self.say(m, newline, file or self.stderr)
def error(self, msg=None):
if msg:
self.carp(msg)
self.usage()
return EX_FAILURE
def info(self, msg, newline=True):
if self.verbose:
self.note(msg, newline=newline)
def note(self, msg, newline=True):
if not self.quiet:
self.say(str(msg), newline=newline)
@splash
def usage(self):
self.say(USAGE.format(prog_name=self.prog_name))
def splash(self):
if not self.nosplash:
self.note(self.colored.cyan(
self.splash_text.format(**self.splash_context)))
@cached_property
def colored(self):
return term.colored(enabled=not self.no_color)
[docs] class MultiTool(TermLogger): """The ``celery multi`` program.""" MultiParser = MultiParser OptionParser = NamespacedOptionParser reserved_options = [ ('--nosplash', 'nosplash'), ('--quiet', 'quiet'), ('-q', 'quiet'), ('--verbose', 'verbose'), ('--no-color', 'no_color'), ] def __init__(self, env=None, cmd=None, fh=None, stdout=None, stderr=None, **kwargs): # fh is an old alias to stdout. self.env = env self.cmd = cmd self.setup_terminal(stdout or fh, stderr, **kwargs) self.fh = self.stdout self.prog_name = 'celery multi' self.commands = { 'start': self.start, 'show': self.show, 'stop': self.stop, 'stopwait': self.stopwait, 'stop_verify': self.stopwait, # compat alias 'restart': self.restart, 'kill': self.kill, 'names': self.names, 'expand': self.expand, 'get': self.get, 'help': self.help, }
[docs] def execute_from_commandline(self, argv, cmd=None): # Reserve the --nosplash|--quiet|-q/--verbose options. argv = self._handle_reserved_options(argv) self.cmd = cmd if cmd is not None else self.cmd self.prog_name = os.path.basename(argv.pop(0)) if not self.validate_arguments(argv): return self.error() return self.call_command(argv[0], argv[1:])
[docs] def validate_arguments(self, argv): return argv and argv[0][0] != '-'
[docs] def call_command(self, command, argv): try: return self.commands[command](*argv) or EX_OK except KeyError: return self.error(f'Invalid command: {command}')
def _handle_reserved_options(self, argv): argv = list(argv) # don't modify callers argv. for arg, attr in self.reserved_options: if arg in argv: setattr(self, attr, bool(argv.pop(argv.index(arg)))) return argv
[docs] @splash @using_cluster def start(self, cluster): self.note('> Starting nodes...') return int(any(cluster.start()))
[docs] @splash @using_cluster_and_sig def stop(self, cluster, sig, **kwargs): return cluster.stop(sig=sig, **kwargs)
[docs] @splash @using_cluster_and_sig def stopwait(self, cluster, sig, **kwargs): return cluster.stopwait(sig=sig, **kwargs)
stop_verify = stopwait # compat
[docs] @splash @using_cluster_and_sig def restart(self, cluster, sig, **kwargs): return int(any(cluster.restart(sig=sig, **kwargs)))
[docs] @using_cluster def names(self, cluster): self.say('\n'.join(n.name for n in cluster))
[docs] def get(self, wanted, *argv): try: node = self.cluster_from_argv(argv).find(wanted) except KeyError: return EX_FAILURE else: return self.ok(' '.join(node.argv))
[docs] @using_cluster def show(self, cluster): return self.ok('\n'.join( ' '.join(node.argv_with_executable) for node in cluster ))
[docs] @splash @using_cluster def kill(self, cluster): return cluster.kill()
[docs] def expand(self, template, *argv): return self.ok('\n'.join( node.expander(template) for node in self.cluster_from_argv(argv) ))
[docs] def help(self, *argv): self.say(__doc__)
def _find_sig_argument(self, p, default=signal.SIGTERM): args = p.args[len(p.values):] for arg in reversed(args): if len(arg) == 2 and arg[0] == '-': try: return int(arg[1]) except ValueError: pass if arg[0] == '-': try: return signals.signum(arg[1:]) except (AttributeError, TypeError): pass return default def _nodes_from_argv(self, argv, cmd=None): cmd = cmd if cmd is not None else self.cmd p = self.OptionParser(argv) p.parse() return p, self.MultiParser(cmd=cmd).parse(p)
[docs] def cluster_from_argv(self, argv, cmd=None): _, cluster = self._cluster_from_argv(argv, cmd=cmd) return cluster
def _cluster_from_argv(self, argv, cmd=None): p, nodes = self._nodes_from_argv(argv, cmd=cmd) return p, self.Cluster(list(nodes), cmd=cmd)
[docs] def Cluster(self, nodes, cmd=None): return Cluster( nodes, cmd=cmd, env=self.env, on_stopping_preamble=self.on_stopping_preamble, on_send_signal=self.on_send_signal, on_still_waiting_for=self.on_still_waiting_for, on_still_waiting_progress=self.on_still_waiting_progress, on_still_waiting_end=self.on_still_waiting_end, on_node_start=self.on_node_start, on_node_restart=self.on_node_restart, on_node_shutdown_ok=self.on_node_shutdown_ok, on_node_status=self.on_node_status, on_node_signal_dead=self.on_node_signal_dead, on_node_signal=self.on_node_signal, on_node_down=self.on_node_down, on_child_spawn=self.on_child_spawn, on_child_signalled=self.on_child_signalled, on_child_failure=self.on_child_failure, )
[docs] def on_stopping_preamble(self, nodes): self.note(self.colored.blue('> Stopping nodes...'))
[docs] def on_send_signal(self, node, sig): self.note('\t> {0.name}: {1} -> {0.pid}'.format(node, sig))
[docs] def on_still_waiting_for(self, nodes): num_left = len(nodes) if num_left: self.note(self.colored.blue( '> Waiting for {} {} -> {}...'.format( num_left, pluralize(num_left, 'node'), ', '.join(str(node.pid) for node in nodes)), ), newline=False)
[docs] def on_still_waiting_progress(self, nodes): self.note('.', newline=False)
[docs] def on_still_waiting_end(self): self.note('')
[docs] def on_node_signal_dead(self, node): self.note( 'Could not signal {0.name} ({0.pid}): No such process'.format( node))
[docs] def on_node_start(self, node): self.note(f'\t> {node.name}: ', newline=False)
[docs] def on_node_restart(self, node): self.note(self.colored.blue( f'> Restarting node {node.name}: '), newline=False)
[docs] def on_node_down(self, node): self.note(f'> {node.name}: {self.DOWN}')
[docs] def on_node_shutdown_ok(self, node): self.note(f'\n\t> {node.name}: {self.OK}')
[docs] def on_node_status(self, node, retval): self.note(retval and self.FAILED or self.OK)
[docs] def on_node_signal(self, node, sig): self.note('Sending {sig} to node {0.name} ({0.pid})'.format( node, sig=sig))
[docs] def on_child_spawn(self, node, argstr, env): self.info(f' {argstr}')
[docs] def on_child_signalled(self, node, signum): self.note(f'* Child was terminated by signal {signum}')
[docs] def on_child_failure(self, node, retcode): self.note(f'* Child terminated with exit code {retcode}')
@cached_property def OK(self): return str(self.colored.green('OK')) @cached_property def FAILED(self): return str(self.colored.red('FAILED')) @cached_property def DOWN(self): return str(self.colored.magenta('DOWN'))
@click.command( cls=CeleryCommand, context_settings={ 'allow_extra_args': True, 'ignore_unknown_options': True } ) @click.pass_context @handle_preload_options def multi(ctx, **kwargs): """Start multiple worker instances.""" cmd = MultiTool(quiet=ctx.obj.quiet, no_color=ctx.obj.no_color) # In 4.x, celery multi ignores the global --app option. # Since in 5.0 the --app option is global only we # rearrange the arguments so that the MultiTool will parse them correctly. args = sys.argv[1:] args = args[args.index('multi'):] + args[:args.index('multi')] return cmd.execute_from_commandline(args)
Can the workspace switcher be changed so that it works like it used to in 10?
When switching workspaces in Unity you need to use both the horizontal arrow keys, but if you want to get to workspace 3 and 4 then you must use the vertical arrow keys. This is annoying.
Can I change it so that I can access all workspaces by hitting the horizontal arrow keys only?
1 Answer (accepted)
Have a look at this topic for changing the layout of the workspaces: How can I reduce or increase the number of workspaces in Unity?
Default is 2 by 2, but you can set the vertical virtual size to 1 and the horizontal virtual size to 4; then you can use Ctrl+Alt+Left/Right arrow to go through all 4 workspaces.
Android Question
retrofit expected begin_array but was begin_object at line 1 column 2 path $
I'm learning Retrofit following a YouTube video, but right now I'm stuck. It shows me the error "expected begin_array but was begin_object at line 1 column 2 path $".
I'm trying to get JSON data from this site:
http://servicio-monkydevs.rhcloud.com/clientes/
Here is my code
MainActivity.java
resultadoTextView = (TextView) findViewById(R.id.Resultado);
Retrofit restAdapter = new Retrofit.Builder()
.baseUrl("http://servicio-monkydevs.rhcloud.com")
.addConverterFactory(GsonConverterFactory.create())
.build();
ClienteService service = restAdapter.create(ClienteService.class);
Call<Cliente> call = service.getCliente();
call.enqueue(new Callback<Cliente>() {
@Override
public void onResponse(Call<Cliente> call, Response<Cliente> response) {
if(response.isSuccessful()) {
resultadoTextView.setText(call.toString());
}else{
resultadoTextView.setText("algo paso");
}
}
@Override
public void onFailure(Call<Cliente> call, Throwable t) {
resultadoTextView.setText(t.getMessage());
}
});
ClientService.java
public interface ClienteService {
@GET("/clientes")
Call<Cliente> getCliente();
}
Client.java
public class Cliente {
private int id;
private String name;
private String username;
private String email;
private String phone;
private String website;
private String photo;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPhone() {
return phone;
}
public void setPhone(String phone) {
this.phone = phone;
}
public String getWebsite() {
return website;
}
public void setWebsite(String website) {
this.website = website;
}
public String getPhoto() {
return photo;
}
public void setPhoto(String photo) {
this.photo = photo;
}
@Override
public String toString() {
return "Cliente{" +
"id=" + id +
", name='" + name + '\'' +
", username='" + username + '\'' +
", email='" + email + '\'' +
", phone='" + phone + '\'' +
", website='" + website + '\'' +
", photo='" + photo + '\'' +
'}';
}}
What am I doing wrong?
UPDATE
I made these changes
public class Cliente {
@SerializedName("id")
private int id;
@SerializedName("name")
private String name;
@SerializedName("username")
private String username;
@SerializedName("email")
private String email;
@SerializedName("phone")
private String phone;
@SerializedName("website")
private String website;
@SerializedName("photo")
private String photo;
...
And this in the interface
public interface ClienteService {
@GET("/clientes")
Call<List<Cliente>> getCliente();
}
And this in the MainActivity as you say
Call<List<Cliente>> call = service.getCliente();
call.enqueue(new Callback<List<Cliente>>() {
@Override
public void onResponse(Call<List<Cliente>> call, Response<List<Cliente>> response) {
if(response.isSuccessful()) {
resultadoTextView.setText(call.toString());
}else{
resultadoTextView.setText("algo paso");
}
}
@Override
public void onFailure(Call<List<Cliente>> call, Throwable t) {
resultadoTextView.setText(t.getMessage());
}
});
But now it shows me this error:
"retrofit2.executorCallAdapterFactory$ExecutorCallbackCall@6a3dd44"
It shows me this in this line
...
if(response.isSuccessful()) {
resultadoTextView.setText(call.toString()); <-- HERE
}else{
...
Answer Source
As you can see, the given REST API URL returns an array of objects (that is, a list), but in your Retrofit API service the return type is only Cliente. So change your ClienteService interface to the below:
public interface ClienteService {
@GET("/clientes")
Call<List<Cliente>> getCliente();
}
And change the call.enqueue() method to this:
Call<List<Cliente>> call = service.getCliente();
call.enqueue(new Callback<List<Cliente>>() {
    @Override
    public void onResponse(Call<List<Cliente>> call, Response<List<Cliente>> response) {
        if (response.isSuccessful()) {
            // your code to get data from the list, e.g. response.body()
        } else {
        }
    }

    @Override
    public void onFailure(Call<List<Cliente>> call, Throwable t) {
        resultadoTextView.setText(t.getMessage());
    }
});
In LaTeX I built the following macro "kum" for translating Arabic to German. The result is two minipages, the left one with German, the right one with Arabic (the other two minipages are for design issues). The macro "arab" is not shown; it just chooses an Arabic font.
\newcommand{\kum}[2]{
\begin{minipage}[t]{0.445\textwidth} \small
#1
\end{minipage}
\begin{minipage}[t]{0.05\textwidth}
~
\end{minipage}
\begin{minipage}[t]{0.445\textwidth}
\begin{arab}[voc]
\raggedleft
#2
\end{arab}
\end{minipage}
\begin{minipage}[t]{0.06\textwidth}
~
\end{minipage}
}
Now I've two questions:
1. What is the equivalent of minipages in ConTeXt?
2. How to realize the above LaTeX code in ConTeXt? (both with engine LuaTeX)
You can use the paragraphs mechanism (see also manual section 4.10 “Paragraphs in columns”). Here is an example:
\defineparagraphs
[Translation]
[n=3,
distance=2em]
\setupparagraphs
[Translation] [1]
[width=.4\textwidth,
style=small]
\setupparagraphs
[Translation] [2]
[width=.6\textwidth,
align=flushleft]
\starttext
\startTranslation
\input knuth
\Translation
\input tufte
\stopTranslation
\stoptext
dpigen
Generate SystemVerilog DPI component from MATLAB function
Description
dpigen fcn -args args generates a SystemVerilog DPI component shared library from MATLAB® function fcn and all the functions that fcn calls. The shared library has a platform-specific extension:
• .dll for shared libraries on Microsoft® Windows® systems
• .so for shared libraries on Linux® systems
The dpigen function also generates a SystemVerilog package file, which contains the function declarations.
The argument —args args specifies the type of inputs the generated code can accept. The generated DPI component is specialized to the class and size of the inputs. Using this information, dpigen generates a DPI component that emulates the behavior of the MATLAB function.
fcn and —args args are required input arguments. The MATLAB function must be on the MATLAB path or in the current folder.
dpigen fcn -args args -testbench tb_name -options options files -c -launchreport -PortsDataType type generates a SystemVerilog DPI component shared library according to the options specified. You can specify zero or more optional arguments, in any order.
• -testbench tb_name also generates a test bench for the SystemVerilog DPI component. The MATLAB test bench must be on the MATLAB path or in the current folder.
• -options options specifies additional options for the compiler and code generation.
• files specifies custom files to include in the generated code.
• -c generates C code only.
• -launchreport generates and opens a code generation report.
• -PortsDataType specifies the SystemVerilog data type to use for ports.
When generating a DPI component, dpigen creates a shared library specific to the host platform. For example, if you use 64-bit MATLAB on Windows, you get a 64-bit DLL, which can be used only with a 64-bit HDL simulator in Windows. For porting the generated component from Windows to Linux, see Port Generated Component and Test Bench to Linux.
Examples
collapse all
Generate a DPI component and test bench for the function fun.m and its associated test bench, fun_tb.m. The dpigen function compiles the component automatically using the default compiler. The -args option specifies that the first input type is a double and the second input type is an int8.
dpigen -testbench fun_tb.m -I E:\HDLTools\ModelSim\10.2c-mw-0\questa_sim\include fun.m
-args {double(0),int8(0)}
### Generating DPI-C Wrapper fun_dpi.c
### Generating DPI-C Wrapper header file fun_dpi.h
### Generating SystemVerilog module package fun_dpi_pkg.sv
### Generating SystemVerilog module fun_dpi.sv
### Generating makefiles for: fun_dpi
### Compiling the DPI Component
### Generating SystemVerilog test bench fun_tb.sv
### Generating test bench simulation script for Mentor Graphics QuestaSim/Modelsim run_tb_mq.do
### Generating test bench simulation script for Cadence Incisive run_tb_incisive.sh
### Generating test bench simulation script for Cadence Xcelium run_tb_xcelium.sh
### Generating test bench simulation script for Synopsys VCS run_tb_vcs.sh
### Generating test bench simulation script for Vivado Simulator run_tb_vivado.bat
Generate a DPI component and a test bench for the function fun.m and its associated test bench, fun_tb.m. To prevent the dpigen function from compiling the library, include the -c option. Send the source code output to 'MyDPIProject'.
dpigen -c -d MyDPIProject -testbench fun_tb.m fun.m -args {double(0),int8(0)}
### Generating DPI-C Wrapper fun_dpi.c
### Generating DPI-C Wrapper header file fun_dpi.h
### Generating SystemVerilog module package fun_dpi_pkg.sv
### Generating SystemVerilog module fun_dpi.sv
### Generating makefiles for: fun_dpi
### Generating SystemVerilog test bench fun_tb.sv
### Generating test bench simulation script for Mentor Graphics ModelSim/QuestaSim run_tb_mq.do
### Generating test bench simulation script for Cadence Incisive run_tb_incisive.sh
### Generating test bench simulation script for Cadence Xcelium run_tb_xcelium.sh
### Generating test bench simulation script for Synopsys VCS run_tb_vcs.sh
### Generating test bench simulation script for Vivado Simulator run_tb_vivado.bat
Input Arguments
Name of MATLAB function to generate the DPI component from, specified as a character vector or string scalar. The MATLAB function must be on the MATLAB path or in the current folder.
Data type and size of MATLAB function inputs, specified as a cell array. Specify the input types that the generated DPI component accepts. args is a cell array specifying the type of each function argument. Elements are converted to types using coder.typeof. This argument is required.
This argument has the same functionality as the codegen (MATLAB Coder) function argument args. args applies only to the function, fcn.
Example: -args {double(0),int8(0)}
MATLAB test bench used to generate test bench for generated DPI component, specified as a character vector or string scalar. The dpigen function uses this test bench to generate a SystemVerilog test bench along with data files and execution scripts. The MATLAB test bench must be on the MATLAB path or in the current folder.
The -testbench argument requires a Fixed-Point Designer™ license.
Example: -testbench My_Test_bench.m
Compiler and codegen options, specified as a character vector or string scalar. These options are a subset of the options for codegen (MATLAB Coder). The dpigen function gives precedence to individual command-line options over options specified using a configuration object. If command-line options conflict, the right-most option prevails.
You can specify zero or more optional arguments, in any order. For example:
dpigen -c -d MyDPIProject -testbench fun_tb.m fun.m -args {double(0),int8(0)} -launchreport
Option flag    Option value
-I include_path
Specifies the path to folders containing headers and library files needed for codegen, specified as a character vector or string scalar. Add include_path to the beginning of the code generation path.
For example:
-I E:\HDLTools\ModelSim\10.2c-mw-0\questa_sim\include
include_path must not contain spaces, which can lead to code generation failures in certain operating system configurations. If the path contains non 7-bit ASCII characters, such as Japanese characters, dpigen might not find files on this path.
When converting MATLAB code to C/C++ code, dpigen searches the code generation path first.
Alternatively, you can specify the include path with the files input argument.
-config config
Specify a custom configuration object using coder.config('dll'). The configuration object build type must be set as dynamic library. See coder.config (MATLAB Coder).
To avoid using conflicting options, do not combine a configuration object with command-line options. Usually the config object offers more options than the command-line flags.
Note
Not all the options in the config object are compatible with the DPI feature. If you try to use an incompatible option, an error message informs you of which options are not compatible.
-o output
Specify the name of the generated component as a character vector or string scalar. The dpigen function adds a platform-specific extension to this name for the shared library:
• .dll for C/C++ dynamic libraries on Microsoft Windows systems
• .so for C/C++ dynamic libraries on Linux systems
-d dir
Specify the output folder. All generated files are placed in dir. By default, files are placed in ./codegen/dll/<function>.
For example, when dpigen compiles the function fun.m, the generated code is placed in ./codegen/dll/fun.
-globals globals
Specify initial values for global variables in MATLAB files. The global variables in your function are initialized to the values in the cell array GLOBALS. The cell array provides the name and initial value of each global variable.
If you do not provide initial values for global variables using the -globals option, dpigen checks for the variables in the MATLAB global workspace. If you do not supply an initial value, dpigen generates an error.
MATLAB Coder™ and MATLAB each have their own copies of global data. For consistency, synchronize their global data whenever the two products interact. If you do not synchronize the data, their global variables might differ.
Custom files to include in the generated code, each file specified as a character vector or string scalar. The files build along with the MATLAB function specified by fcn. List each file separately, separated by a space. The following extensions are supported.
File Type    Description
.c           Custom C file
.cpp         Custom C++ file
.h           Custom header file (included by all generated files)
.o           Object file
.obj         Object file
.a           Library file
.so          Library file
.lib         Library file
In Windows, if your MATLAB function contains matrix or vector output or input arguments, use the files option to specify the library (.lib) that contains the ModelSim® DPI definitions. Otherwise, you must manually modify the generated Makefile (*.mk) and then compile the library separately.
Option to generate C code without compiling the DPI component, specified as the character vector -c. If you do not use the -c option, dpigen tries to compile the DPI component using the default compiler. To select a different compiler, use the -config option and refer to the codegen (MATLAB Coder) documentation for instructions on specifying the different options.
Option to generate and open a code generation report, specified as the character vector -launchreport.
Select the SystemVerilog data type that will be used for ports. Choose from three possible values:
• CompatibleCType – Generate a compatible C type interface for the port.
• BitVector – Generate a bit vector type interface for the port.
• LogicVector – Generate a logic vector type interface for the port.
This table shows the MATLAB data-type in the left column, and the generated SystemVerilog type for each value of PortsDataType.
Generated SystemVerilog Types
MATLAB type            SystemVerilog: Compatible C Type / Logic Vector / Bit Vector
uint8                  byte unsigned / logic [7:0] / bit [7:0]
uint16                 shortint unsigned / logic [15:0] / bit [15:0]
uint32                 int unsigned / logic [31:0] / bit [31:0]
uint64                 longint unsigned / logic [63:0] / bit [63:0]
int8                   byte / logic signed [7:0] / bit signed [7:0]
int16                  shortint / logic signed [15:0] / bit signed [15:0]
int32                  int / logic signed [31:0] / bit signed [31:0]
int64                  longint / logic signed [63:0] / bit signed [63:0]
logical                byte unsigned / logic [0:0] / bit [0:0]
fi (fixed-point data type)
    Compatible C type: depends on the fixed-point word length. If the fixed-point word length is greater than the host word size (for example, 64-bit vs. 32-bit), then this data type cannot be converted to a SystemVerilog data type by MATLAB Coder and you will get an error. If the fixed-point word length is less than or equal to the host word size, MATLAB Coder converts the fixed-point data type to a built-in C type.
    Logic vector: logic [n-1:0] or logic signed [n-1:0]. The logic vector length (n) is equal to the word length. The sign is inherited from the fixed-point type.
    Bit vector: bit [n-1:0] or bit signed [n-1:0]. The bit vector length (n) is equal to the word length. The sign is inherited from the fixed-point type.
single                 shortreal
double                 real
complex                The coder flattens complex signals into real and imaginary parts in the SystemVerilog component.
vectors, matrices      arrays
    For example, a 4-by-2 matrix in MATLAB is converted into a one-dimensional array of eight elements in SystemVerilog. By default, the coder flattens matrices in column-major order. To change to row-major order, use the -rowmajor option with the dpigen function. For additional information, see Generate Code That Uses Row-Major Array Layout (MATLAB Coder).
structure              The coder flattens structure elements into separate ports in the SystemVerilog component.
enumerated data types  enum
Introduced in R2014b
import QtQuick 2.0
import MuseScore 3.0

MuseScore {
    version: "3.0"
    description: "This test plugin test note.tieXXX stuff."
    menuPath: "Plugins.Test Ties"

    onRun: {
        console.log("Hello Tie Walker");
        if (!curScore) Qt.quit();
        console.log("Score name=" + curScore.scoreName)

        for (var curStaff = 0; curStaff < curScore.nstaves; curStaff++) {
            walkOneStaff(curStaff);
        }

        Qt.quit();
    } // onRun

    function lookUpEnum(enumObject, value) {
        for (var key in enumObject) {
            if (enumObject[key] != null && enumObject[key] == value) {
                return key;
            }
        }
        return value;
    }

    function walkOneStaff(staffIdx) {
        var cursor = curScore.newCursor();
        console.log("Score.elementId=" + curScore.elementId)
        cursor.filter = -1
        cursor.voice = 0
        cursor.staffIdx = staffIdx
        cursor.rewind(Cursor.SCORE_START)
        console.log("###### START STAFF " + cursor.staffIdx + " ######")

        var chordNum = 0; // Current chord number on staff

        while (cursor.segment) {
            var e = cursor.element;
            if (e) {
                console.log("Element type=" + lookUpEnum(Element, e.type))
                console.log("e=" + e)

                if (e.type == Element.CHORD) {
                    var chord = e;
                    var notes = e.notes;
                    for (var idx = 0; idx < notes.length; idx++) {
                        var note = notes[idx];
                        console.log("Note[" + idx + "] of ChordNum " + chordNum)
                        console.log("----note=" + note)
                        console.log("----note.parent=" + note.parent)
                        console.log("----note.firstTiedNote=" + note.firstTiedNote)
                        if (note == note.firstTiedNote) console.log("firstTiedNote is pointing back at ME!")
                        console.log("----note.lastTiedNote=" + note.lastTiedNote)
                        if (note == note.lastTiedNote) console.log("lastTiedNote is pointing back at ME!")

                        if (note.tieBack != null) {
                            var tieback = note.tieBack;
                            console.log("----note has a tieBack:")
                            console.log("--------tieback.startNote=" + tieback.startNote)
                            console.log("--------tieback.endNote=" + tieback.endNote)
                        } else console.log("----note does not have a tieBack.")

                        if (note.tieForward != null) {
                            var tieforward = note.tieForward;
                            console.log("----note has a tieForward:")
                            console.log("--------tieForward.startNote=" + tieforward.startNote)
                            console.log("--------tieForward.endNote=" + tieforward.endNote)
                        } else console.log("----note does not have a tieForward.")
                    } // Note loop

                    for (var gc = 0; gc < chord.graceNotes.length; gc++) {
                        var gchord = chord.graceNotes[gc];
                        for (var gn = 0; gn < gchord.notes.length; gn++) {
                            var gnote = gchord.notes[gn];
                        } // Grace note loop
                    }

                    chordNum++
                } // Chord processing
                else {
                }
            }
            //console.log("------------------")
            cursor.next();
        } // Segment walk loop

        console.log("^^^^^^ END STAFF " + cursor.staffIdx + " ^^^^^^");
    }
}
Schmidt: Microsoft will never be as cool as the Gang of Four
Tablet-friendly Windows 8 won't allow Microsoft into Apple, Facebook, Amazon and Google's cosy clique, said Google’s executive chairman. Quoted in a rather thin Wall Street Journal Q&A, Eric Schmidt maintained his belief that Google, Apple, Amazon and Facebook comprise a consumer-pleasing web-conquering “gang of four”. And …
COMMENTS
Rage on
"Gang of Four" in computing uniquely refers to Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides, or their wonderful book. Using it any other way is just wrong, which I suppose is what I've come to expect from Herr Schmidt.
Re: Rage on
Typical geek nonsense. :)
First and foremost it's not about 'computing' whatsoever, it's about the technology industry. Secondly it's more specifically about its business world and "Gang of Four" typically - should I say "uniquely"? - refers to Google, Amazon, Apple and Facebook.
It's a common knowledge (except if you are an old, dorky geek who refuses to join the real world and can only brag about his old hacking of that X.25 terminal, not to mention that using these kind of collective naming ('group') is very common (see Big Four in professional tax-cheating, errr, I mean auditing business etc.) :P
Re: Rage on
To be fair to the original poster, I have heard the term big four used a lot when talking about companies but in tech circles or at least dev circles Gang of Four refers to a specific group of individuals and more commonly a specific book.
I guess the corporate GOF would rather appear more like wise gurus than the corporate pirates that they really are.
Re: Rage on
But it did say that the gang of four mentioned were cool.
So I thought that could only mean the stuttering guitars and bonkers Trotskyist invective of 1978 Leeds-based band Gang of Four - chief influences of REM and Nirvana.
OK i thought it might be something to do with Chinese political history, OK, but then I hear that its a reference to some dad-dancing poncy overbloated corporate entities.
Cool - In a way that brands simply fucking aren't.
Rage off.
Re: Rage on
Those who don't learn from history, and all that: he might be referring to the Chinese leadership who ruled during the Cultural Revolution (http://en.wikipedia.org/wiki/Gang_of_Four).
Anonymous Coward
Re: "who refuses to join the real world"
Which of course is the world as viewed and defined by you.
Re: Rage on
" Those who don't learn from history, and all that: he might be referring to the Chinese leadership who ruled during the Cultural Revolution (http://en.wikipedia.org/wiki/Gang_of_Four). '
....and those of us who remember the classic Not the Nine O'Clock News parody of aforementioned GoF!
Re: Rage on
I thought they were a group of superheroes, you know, the one with the rubber guy and the other one that looked like a friendly turd in swimming trunks.
Oh no, hang on, that was the fantastic four.....
Re: Rage on
Add another who thought this was about Design Patterns or possibly the Chinese Communist Party, but given that even Eric Schmidt's sublime postmodernism would struggle to call the Chinese GoF "cool", I figured it was about development frameworks.
Looking at the list, it's a pretty arbitrary collection, and Apple is odd-man-out, as they make the client terminals, not the web applications. And if it's about web-scale services, then where are Citrix, Akamai, VMWare or hosting companies like Rackspace?
Schmidt's comment on Windows Phone ("haven't used it") reminded me of the story from Nokia's collapse, where the newly-appointed Stephen Elop asked the company's phone engineers how many of them had an iPhone. Only one or two hands. And yet, this was the product they were trying to compete with...
I'm not saying that Windows Phone is going to succeed the same way -- far from it. But if your company's future success depends on mobile platforms, and you make a mobile platform, it's dangerously arrogant to ignore competing platforms, no matter how weak or insignificant you think they might be (Nokia and SonyEricsson both considered iPhone to be an overpriced featurephone, and no threat...).
"Dangerously arrogant", and "Eric Schmidt"... that's getting to be a cliché.
Anonymous Coward
Something to hide?
"Schmidt also repeated his view that a only people with something to hide would worry about losing their privacy online."
Go on then Schmidt, release your complete internet trail to us minions to pore over, in fact why not throw in your whole family's for good measure...
Re: Something to hide?
Go on then Schmidt, release your company's internal accounts so you can be taxed fairly.
Oxy...
taxed fairly?
A bit like military intelligence?
Re: Something to hide?
And what's wrong with having something to hide?
You don't have to be doing illegal things to want your privacy.
Anonymous Coward
Re: Something to hide?
Schmidt, Zuckerberg, et al. like to pat themselves on the back for being WYSIWYG. The more appropriate term for their ilk is WYSIATI.
What You See Is All There Is
Anonymous Coward
Re: Something to hide?
WYSITE?
What you see is tax evasion? Just musing..
Oh, and as for having nothing to hide, where is the shareholder notification that they have been hit with a notice from TWENTYSEVEN separate EU countries concurrently to correct their privacy policy? Just curious..
Anonymous Coward
Re: Something to hide?
In particular, I'd like to hide my privates.
Re: Something to hide?
He didn't say that he didn't have something to hide.
Phil.
I don't care about "cool";
what matters to me is "does it work well?".
Grammar
"the Redmond giant is still a personae non gratae."
Should be "persona non grata" Singular.
Your Latin is in the plural and the sentence would then be "the Redmond giants are still personae non gratae."
As there is fortunately only one Microsoft this is incorrect.
(Written by Reg staff)
Re: Grammar
Technically, yes. You're right - I double-checked the spelling and for some reason used the plural. It's been fixed.
C.
Anonymous Coward
Re: Grammar
Technically correct is the best kind of correct. Ask an Engineer.
8
0
h3
Bronze badge
Re: Grammar
Technically correct is the only kind of correct.
2
0
Silver badge
Re: Grammar
> Technically correct is the only kind of correct.
But implies deception.
"Do you have any homework to do?"
No (I left it in my bag in the bedroom).
1
0
Meh
Schmidt
does come over as a bit of a dick.
I especially think that he's taken the wrong tack with his statements about privacy - google should be doing all they can to reassure us that they will take special care of our data if they want us to be their ongoing users so they can still exploit us in the future.
Anyway, what is this windows 8 thing - some new iphone app?
0
0
Anonymous Coward
Re: Schmidt
Schmidt comes over as a bit of a dick
There, fixed that for you. It has a very simple cause: he is using the technique of reinforcement to make us believe something that is blatant bullshit. You have a legal right to privacy that Schmidt cannot invade without breaking the law. Which he is doing, all over Europe, every day, and he is fully aware of that.
That's why he needs to spin like Murdoch - he knows most regulators in Europe are on to him and it is going to *SERIOUSLY* hurt when they start taking action. He has already been warned by the EU Article 29 working group that things need to change, so the clock is ticking.
3
0
Gold badge
WTF?
Odd "Gang of Four" there.
A shiny toy maker, a bullshit aggregator, a department store cum data centre and an advertising giant.
I'm sure there's room for a bunch of fuckwit monkey dancers in that eclectic mix.
7
0
Bronze badge
WTF?
Re: Odd "Gang of Four" there.
Yep. And IBM & MS are out of the scalable computer business apparently? Funny...hadn't gotten the news.
Just one more arse running a "tech" company.
2
0
Silver badge
Re: Odd "Gang of Four" there.
I suppose one could argue that he is referring to a cultural revolution of sorts. Still it is indeed odd when you consider what happened to the original sìrén bāng shortly after Mao's death. Schmidt had better hope that Jobs wasn't this gang's Mao.
2
0
Bronze badge
FAIL
OS aside . . .
. . . the Microsoft Surface concept is exactly the sort of thing I would like to see--something with a thin tablet form factor and detachable keyboard (and mouse, ideally) which runs an OS that will run the same apps as my desktop (which the Surface RT won't, I realize, but the Surface Pro will). To my mind, that's less "confused" and more "functional."
Now, whether Microsoft's implementation of this concept is any good is something I have yet to investigate, but the fact that both Eric and Tim are willing to write it off without even trying it is a sign of a blind spot that Microsoft may be able to exploit.
3
0
Silver badge
Re: OS aside . . .
It is confused because the software design doesn't match up with the hardware design.
RT doesn't run "normal" Windows software, but is portable; Pro runs the software but isn't very portable.
It isn't that it's bad, but it isn't a stand-alone product and is unlikely to sell in the way iOS & Android devices do.
1
1
Gold badge
Platforms that scale
"We had never in our industry seen four network platforms that scale."
What is a "network platform" and what does it mean to say that "it scales"? Can anyone provide definitions for both that make it clear why the chosen four meet the criteria and no-one else does?
Otherwise I'm tempted to say that there is but a single platform here (the internet) and four applications sitting on top of it which "scale" only in the sense that their functions are embarrassingly parallel and can therefore be scaled by throwing dollars at server farms.
3
0
Re: Platforms that scale
" "We had never in our industry seen four network platforms that scale."
What is a "network platform" and what does it mean to say that "it scales"? Can anyone provide definitions for both that make it clear why the chosen four meet the criteria and no-one else does?"
I would assume he meant "four network platforms OF that scale." - that's how I read it anyway.
Not that I don't fully agree with the rest of your sentiment.
1
0
Bronze badge
Re: Platforms that scale
Thumbs up for "embarrassingly parallel". Things that have any shared state are not so 'cool', obviously. You just can't add cheap hardware and bluntly scale 'em.
At some point the shared state transfer exceeds the interconnection bandwidth or the latency kicks in... The entire cloud nonsense is a truly annoying fad.
2
0
Anonymous Coward
Doesn't Microsoft own shares in Facebook anyway?
2
0
Silver badge
I might be wrong but I think Microsoft needs to rename things asap. This is how I think public perception stands at the moment:
Internet Explorer 10 = 10th incarnation of an old browser
Windows 8 = 8th incarnation of an old operating system
Windows Phone = Phone that runs an old operating system
Microsoft Surface = Tablet made by an old software company that made excel (sounds fun!)
I am not even sure anyone can remember (myself included) which IE is the latest. ("Oh there's a 10 now? I'm not sure which one I am using, 7?")
Why not throw the old names away and come up with new ones? To give the impression that we've now left the age of Windows and have entered the age of a new Microsoft operating system, for example.
Also when entering new modern recreational consumer markets (phones, tablets) perhaps avoid using words (Windows, Microsoft) that could be perceived as being associated with old technology and office/work applications? I thought Microsoft had figured all this out when they launched the XBox instead of the Microsoft Game Console. I thought they were being clever by de-emphasizing the Microsoft association.
2
4
Anonymous Coward
@NomNomNom
Careful what you wish for. Microsoft are really bad at thinking up names.
1
0
This post has been deleted by its author
Anonymous Coward
As far as I can see it..
.. the choice is between Atlantis or Titanic.
Both sank without a trace, and both are subject to a lot of myth. It's just that Microsoft starts with the myth before the sinking, but given that technical people put the root of a tree at the top I think that's acceptable.
Actually, I think it must be Atlantis. Ballmer has trashed the deck chairs, so Titanic is out..
0
2
Anonymous Coward
But...
I don't want cool, I want something which works with what I already have, is fairly tough, not too expensive, will be supported after the first six months and is expandable.
I'm not saying that I'll get that, but it's what I want and pretty much discounts all the current offerings in one way or another.
5
0
Re: But...
"don't want cool, I want something which works with what I already have, is fairly tough, not too expensive, will be supported after the first six months and is expandable."
So what you're saying is you want a new horse when everyone else is buying these new-fangled auto-mo-biles.
1
6
Anonymous Coward
Re: But...
No, he's saying he would prefer to stick with a well engineered German petrol car, instead of buying a trendy electric car.
0
0
Bronze badge
Re: But... petrol cars
German diesels ain't bad either.
0
0
Anonymous Coward
I am sorry, but why is Facebook on that list?
4
0
Schmidt and it?
"However, Schmidt said it was “extremely curious” that Apple was suing Google's Android partners rather than Google itself."
Did Schmidt lose his status as a man and is now genderless?
I too find it funny that Apple is suing Samsung and not Google itself and files a court motion to have Jelly Bean banned along with its lawsuit against Samsung. Even a five year old knows that there is a difference! And why oh WHY is Apple not suing others like Asus and HP and Lenovo for ripping off their designs?
A while ago I forgot what being a sane customer was like and stepped into a Future Shop to look around. And at first glance, I thought that several machines were MacBooks with the way they were designed! On closer inspection, I THEN saw the Lenovo name and the HP name and such. How come we don't hear about these people being hauled before a judge with Apple crying and saying, "They copied our innovative designs???"
1
0
Silver badge
Re: Schmidt and it?
Try reading that as, ' .. Schmidt said that it was ....'. It might help. You may have been interpreting the words as '..Schmidt it said ...' (The word 'that' is often omitted, leaving the reader/listener to rely on inference.)
1
1
Silver badge
Suing HP
And why oh WHY is Apple not suing others like Asus and HP and Lenovo for ripping off their designs?
Not sure how HP is copying Apple (are their ultrabooks Air lookalikes or something?) but the mere idea of suing HP at all is just hard to wrap my head around, considering how their board is doing a far better job of destroying the company than even the most slam dunk lawsuit ever could. Suing HP is like shooting a terminal cancer patient in the leg. There's a chance it could hasten their demise, but they'd probably not even notice versus the pain they're already enduring.
2
0
Anonymous Coward
When has Google been cool? their success in mobile is due to being free or very cheap with little restrictions and ease of pirating apps.
2
6
FAIL
I think you'll find that the "crap iOS map app that replaced Google's superior software on Apple devices..." was in fact Apple replacing Apple software. All that was Google's was the data. I guess that version of the story isn't as polarising, salacious or melodramatic, though.
1
2
Silver badge
They see themselves as countries
Is that an ego-size thing, or an attempt to start paying their taxes ...to themselves?
0
0
otbOGRLayerClassifier.cxx
/*=========================================================================

  Program:   ORFEO Toolbox
  Language:  C++
  Date:      $Date$
  Version:   $Revision$

  Copyright (c) Centre National d'Etudes Spatiales. All rights reserved.
  See OTBCopyright.txt for details.

  This software is distributed WITHOUT ANY WARRANTY; without even
  the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
  PURPOSE. See the above copyright notices for more information.

=========================================================================*/
#include "otbWrapperApplication.h"
#include "otbWrapperApplicationFactory.h"
#include "otbOGRDataSourceWrapper.h"
#include "otbOGRFeatureWrapper.h"
#include "otbStatisticsXMLFileWriter.h"
#include "itkVariableLengthVector.h"
#include "otbStatisticsXMLFileReader.h"
#include "itkListSample.h"
#include "otbShiftScaleSampleListFilter.h"

#ifdef OTB_USE_LIBSVM
#include "otbLibSVMMachineLearningModel.h"
#endif

#include <time.h>

namespace otb
{
namespace Wrapper
{

class OGRLayerClassifier : public Application
{
public:
  typedef OGRLayerClassifier Self;
  typedef Application Superclass;
  typedef itk::SmartPointer<Self> Pointer;
  typedef itk::SmartPointer<const Self> ConstPointer;

  itkNewMacro(Self);
  itkTypeMacro(OGRLayerClassifier, otb::Application);

private:
  void DoInit() ITK_OVERRIDE
  {
    SetName("OGRLayerClassifier");
    SetDescription("Classify an OGR layer based on a machine learning model and a list of features to consider.");

    SetDocName("OGRLayerClassifier");
    SetDocLongDescription("This application will apply a trained machine learning model on the selected feature to get a classification of each geometry contained in an OGR layer. The list of feature must match the list used for training. The predicted label is written in the user defined field for each geometry.");
    SetDocLimitations("Experimental. Only shapefiles are supported for now.");
    SetDocAuthors("David Youssefi during internship at CNES");
    SetDocSeeAlso("ComputeOGRLayersFeaturesStatistics,TrainOGRLayersClassifier");
    AddDocTag(Tags::Segmentation);

    AddParameter(ParameterType_InputVectorData, "inshp", "Name of the input shapefile");
    SetParameterDescription("inshp","Name of the input shapefile");

    AddParameter(ParameterType_InputFilename, "instats", "XML file containing mean and variance of each feature.");
    SetParameterDescription("instats", "XML file containing mean and variance of each feature.");

    AddParameter(ParameterType_OutputFilename, "insvm", "Input model filename.");
    SetParameterDescription("insvm", "Input model filename.");

    AddParameter(ParameterType_ListView, "feat", "Features");
    SetParameterDescription("feat","Features to be calculated");

    AddParameter(ParameterType_String,"cfield","Field containing the predicted class.");
    SetParameterDescription("cfield","Field containing the predicted class");
    SetParameterString("cfield","predicted", false);

    // Doc example parameter settings
    SetDocExampleParameterValue("inshp", "vectorData.shp");
    SetDocExampleParameterValue("instats", "meanVar.xml");
    SetDocExampleParameterValue("insvm", "svmModel.svm");
    SetDocExampleParameterValue("feat", "perimeter");
    SetDocExampleParameterValue("cfield", "predicted");
  }

  void DoUpdateParameters() ITK_OVERRIDE
  {
    if ( HasValue("inshp") )
    {
      std::string shapefile = GetParameterString("inshp");
      otb::ogr::DataSource::Pointer ogrDS;
      otb::ogr::Layer layer(ITK_NULLPTR, false);
      OGRSpatialReference oSRS("");
      std::vector<std::string> options;

      ogrDS = otb::ogr::DataSource::New(shapefile, otb::ogr::DataSource::Modes::Read);
      std::string layername = itksys::SystemTools::GetFilenameName(shapefile);
      layername = layername.substr(0,layername.size()-4);
      layer = ogrDS->GetLayer(0);
      otb::ogr::Feature feature = layer.ogr().GetNextFeature();

      ClearChoices("feat");
      for(int iField=0; iField<feature.ogr().GetFieldCount(); iField++)
      {
        std::string key, item = feature.ogr().GetFieldDefnRef(iField)->GetNameRef();
        key = item;
        key.erase(std::remove(key.begin(), key.end(), ' '), key.end());
        std::transform(key.begin(), key.end(), key.begin(), tolower);
        key="feat."+key;
        AddChoice(key,item);
      }
    }
  }

  void DoExecute() ITK_OVERRIDE
  {
#ifdef OTB_USE_LIBSVM
    clock_t tic = clock();

    std::string shapefile = GetParameterString("inshp").c_str();
    std::string XMLfile = GetParameterString("instats").c_str();
    std::string modelfile = GetParameterString("insvm").c_str();

    typedef double ValueType;
    typedef itk::VariableLengthVector<ValueType> MeasurementType;
    typedef itk::Statistics::ListSample <MeasurementType> ListSampleType;
    typedef otb::StatisticsXMLFileReader<MeasurementType> StatisticsReader;

    typedef unsigned int LabelPixelType;
    typedef itk::FixedArray<LabelPixelType,1> LabelSampleType;
    typedef itk::Statistics::ListSample <LabelSampleType> LabelListSampleType;

    typedef otb::Statistics::ShiftScaleSampleListFilter<ListSampleType, ListSampleType> ShiftScaleFilterType;

    StatisticsReader::Pointer statisticsReader = StatisticsReader::New();
    statisticsReader->SetFileName(XMLfile);
    MeasurementType meanMeasurementVector = statisticsReader->GetStatisticVectorByName("mean");
    MeasurementType stddevMeasurementVector = statisticsReader->GetStatisticVectorByName("stddev");

    otb::ogr::DataSource::Pointer source = otb::ogr::DataSource::New(shapefile, otb::ogr::DataSource::Modes::Read);
    otb::ogr::Layer layer = source->GetLayer(0);
    bool goesOn = true;
    otb::ogr::Feature feature = layer.ogr().GetNextFeature();

    ListSampleType::Pointer input = ListSampleType::New();
    LabelListSampleType::Pointer target = LabelListSampleType::New();
    const int nbFeatures = GetSelectedItems("feat").size();
    input->SetMeasurementVectorSize(nbFeatures);

    if(feature.addr())
      while(goesOn)
      {
        MeasurementType mv; mv.SetSize(nbFeatures);
        for(int idx=0; idx < nbFeatures; ++idx)
          mv[idx] = feature.ogr().GetFieldAsDouble(GetSelectedItems("feat")[idx]);

        input->PushBack(mv);
        target->PushBack(feature.ogr().GetFieldAsInteger("class"));
        feature = layer.ogr().GetNextFeature();
        goesOn = feature.addr() != ITK_NULLPTR;
      }

    ShiftScaleFilterType::Pointer trainingShiftScaleFilter = ShiftScaleFilterType::New();
    trainingShiftScaleFilter->SetInput(input);
    trainingShiftScaleFilter->SetShifts(meanMeasurementVector);
    trainingShiftScaleFilter->SetScales(stddevMeasurementVector);
    trainingShiftScaleFilter->Update();

    ListSampleType::Pointer listSample;
    LabelListSampleType::Pointer labelListSample;

    listSample = trainingShiftScaleFilter->GetOutput();
    labelListSample = target;

    ListSampleType::Pointer trainingListSample = listSample;
    LabelListSampleType::Pointer trainingLabeledListSample = labelListSample;

    typedef otb::LibSVMMachineLearningModel<ValueType,LabelPixelType> LibSVMType;
    LibSVMType::Pointer libSVMClassifier = LibSVMType::New();
    libSVMClassifier->SetInputListSample(trainingListSample);
    libSVMClassifier->SetTargetListSample(trainingLabeledListSample);
    libSVMClassifier->Load(modelfile);
    libSVMClassifier->PredictAll();

    otb::ogr::DataSource::Pointer source2 = otb::ogr::DataSource::New(shapefile, otb::ogr::DataSource::Modes::Update_LayerUpdate);
    otb::ogr::Layer layer2 = source2->GetLayer(0);

    OGRFieldDefn predictedField(GetParameterString("cfield").c_str(), OFTInteger);
    layer2.CreateField(predictedField, true);

    bool goesOn2 = true;
    layer2.ogr().ResetReading();
    otb::ogr::Feature feature2 = layer2.ogr().GetNextFeature();
    unsigned int count=0;

    if(feature2.addr())
      while(goesOn2)
      {
        feature2.ogr().SetField(GetParameterString("cfield").c_str(),(int)labelListSample->GetMeasurementVector(count)[0]);
        layer2.SetFeature(feature2);
        feature2 = layer2.ogr().GetNextFeature();
        goesOn2 = feature2.addr() != ITK_NULLPTR;
        count++;
      }

    const OGRErr err = layer2.ogr().CommitTransaction();
    if (err != OGRERR_NONE)
    {
      itkExceptionMacro(<< "Unable to commit transaction for OGR layer " << layer2.ogr().GetName() << ".");
    }

    source2->SyncToDisk();

    clock_t toc = clock();
    otbAppLogINFO( "Elapsed: "<< ((double)(toc - tic) / CLOCKS_PER_SEC)<<" seconds.");

#else
    otbAppLogFATAL("Module LIBSVM is not installed. You should consider turning OTB_USE_LIBSVM on during cmake configuration.");
#endif
  }

};

}
}

OTB_APPLICATION_EXPORT(otb::Wrapper::OGRLayerClassifier)
Installing MySQL 8.0 on CentOS 7, and Common Issues
1. Add the MySQL Yum repository 2. Install version 8.0 3. Start MySQL 8.0 4. Change the default password 5. Access from external clients 6. Install other MySQL products and components with Yum 7. Update MySQL with Yum 8. Connector/J 9. Common MySQL 8 issues 9.1 Changing the default password policy 9.2 Granting access to additional external accounts 9.3 Errors when using triggers
Synology NAS: Docker Container Image Mirror Acceleration
1. Add a Chinese mirror acceleration address: in the Synology Docker registry mirror URL field, enter https://registry.docker-cn.com. Docker will ask to restart; after the restart it takes effect. Alternatively, change it from the command line: vi /var/packages/Docker/etc/dockerd.json # /usr/syno/etc/packag…
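For reference, the equivalent setting on a stock Docker install is a registry-mirrors entry in the daemon's JSON config (daemon.json on most systems; the Synology path mentioned above is dockerd.json). A minimal sketch:
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}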
Synology NAS: Reverse Proxy with Docker Nginx
I recently gave my NAS a public IP (using a public-IP box; I won't go into the details here). Goal: use Nginx inside a Docker container to route multiple domains to different applications, or different paths of a single domain to different applications, avoiding access by IP and port. 1. Preparation: first download the Nginx image in Docker (Docker Hub is poorly reachable from inside China; you can configure an Alibaba Cloud registry mirror in the registry settings)…
Adding Elasticsearch to My Blog for Full-Text Search
After some research, I decided to use Docker to deploy Elasticsearch quickly. 1. Install Docker and Docker-Compose. My server runs CentOS 7 x64; I followed "Get Docker CE for CentOS" and "Install Docker Compose" for the installation. Install Docker # I…
Learning Docker, Part 1: Installation on CentOS 6.8
First check the system version: [root@Sherlock ~]# cat /etc/issue shows CentOS release 6.8 (Final), Kernel \r on an \m; [root@Sherlock ~]# uname -r shows 2.6.32-642.11.1.el6.x86_64. Docker requires…
drill-dev mailing list archives
From "Victoria Markman (JIRA)" <[email protected]>
Subject [jira] [Created] (DRILL-3652) Need to document order of operations with window functions and flatten
Date Fri, 14 Aug 2015 22:38:46 GMT
Victoria Markman created DRILL-3652:
---------------------------------------
Summary: Need to document order of operations with window functions and flatten
Key: DRILL-3652
URL: https://issues.apache.org/jira/browse/DRILL-3652
Project: Apache Drill
Issue Type: Improvement
Components: Documentation
Affects Versions: 1.2.0
Reporter: Victoria Markman
Assignee: Bridget Bevens
In standard SQL, window functions are the last set of operations performed in a query except
for the final order by clause.
Using window function with flatten is a bit confusing, because it appears as an operator in
the query plan and I expected flatten to run first followed by a window function.
This is not what is happening:
{code}
0: jdbc:drill:schema=dfs> select * from `complex.json`;
+----+-----------+----------+
| x | y | z |
+----+-----------+----------+
| 5 | a string | [1,2,3] |
+----+-----------+----------+
1 row selected (0.128 seconds)
0: jdbc:drill:schema=dfs> select sum(x) over(), x , y, flatten(z) from `complex.json`;
+---------+----+-----------+---------+
| EXPR$0 | x | y | EXPR$3 |
+---------+----+-----------+---------+
| 5 | 5 | a string | 1 |
| 5 | 5 | a string | 2 |
| 5 | 5 | a string | 3 |
+---------+----+-----------+---------+
3 rows selected (0.152 seconds)
0: jdbc:drill:schema=dfs> explain plan for select sum(x) over(), x , y, flatten(z) from
`complex.json`;
+------+------+
| text | json |
+------+------+
| 00-00 Screen
00-01 ProjectAllowDup(EXPR$0=[$0], x=[$1], y=[$2], EXPR$3=[$3])
00-02 Project(w0$o0=[$3], x=[$0], y=[$1], EXPR$3=[$4])
00-03 Flatten(flattenField=[$4])
00-04 Project(EXPR$0=[$0], EXPR$1=[$1], EXPR$2=[$2], EXPR$3=[$3], EXPR$5=[$2])
00-05 Project(x=[$1], y=[$2], z=[$3], w0$o0=[$4])
00-06 Window(window#0=[window(partition {} order by [] range between UNBOUNDED
PRECEDING and UNBOUNDED FOLLOWING aggs [SUM($1)])])
00-07 Project(T38¦¦*=[$0], x=[$1], y=[$2], z=[$3])
00-08 Scan(groupscan=[EasyGroupScan [selectionRoot=maprfs:/drill/testdata/subqueries/complex.json,
numFiles=1, columns=[`*`], files=[maprfs:///drill/testdata/subqueries/complex.json]]]
{code}
We should suggest to users to put flatten in a subquery if they want to run window function
on top of the result set returned by flatten.
{code}
0: jdbc:drill:schema=dfs> select x, y, a, sum(x) over() from ( select x , y, flatten(z)
as a from `complex.json`);
+----+-----------+----+---------+
| x | y | a | EXPR$3 |
+----+-----------+----+---------+
| 5 | a string | 1 | 15 |
| 5 | a string | 2 | 15 |
| 5 | a string | 3 | 15 |
+----+-----------+----+---------+
3 rows selected (0.145 seconds)
{code}
I suggest we document this issue in the window function section, perhaps in "Usage notes".
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Twitch.tv API challenge - cross origin issue
using the suggested URL here https://wind-bow.gomix.me/ i’m getting a “No ‘Access-Control-Allow-Origin’ header” error. i tried the solution for jQuery users here https://forum.freecodecamp.com/t/use-the-twitchtv-json-api/19541 but i can’t get the suggested addition to the URL to work. same error when using a question mark or ampersand. i’m using pure JS. i’d rather not hardcode the JSON.
any suggestions?
XMLHttpRequest cannot load https://wind-bow.gomix.me/twitch-api/users/noobs2ninjas&callback=?.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'https://s.codepen.io' is therefore not allowed access.
XMLHttpRequest cannot load https://wind-bow.gomix.me/twitch-api/users/noobs2ninjas?callback=?.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'https://s.codepen.io' is therefore not allowed access.
3 Likes
If you look at the API Read-me Here, there is a section about rate limits involving a client id. Are you using a client id in your code?
Here is a blog post about the switch to client ids in their API.
You will probably have to post a link to your codepen or your code for more help.
i’m not using a client id. i took the wind-bow URL as a solution to that but it only solves the API key requirement?
I just checked, and this works:
$.getJSON('https://wind-bow.gomix.me/twitch-api/users/noobs2ninjas?callback=?', function (data) {
console.log(data);
});
3 Likes
all right there must be something going on with my code then (JS, no jQuery)
var APIReq = new XMLHttpRequest();
var parsed;
var users = ["ESL_SC2", "OgamingSC2", "cretetion", "freecodecamp", "storbeck", "habathcx", "RobotCaleb", "noobs2ninjas"];
APIReq.onreadystatechange = function() {
if (APIReq.readyState === 4 && APIReq.status === 200) {
parsed = JSON.parse(APIReq.responseText);
console.log(parsed);
}
};
for (var i = 0; i < users.length; i++) {
APIReq.open('GET', 'https://wind-bow.gomix.me/twitch-api/users/' + users[i] + '?callback=?', true);
APIReq.send(null);
}
Use jQuery.
If you don’t want to use jQuery, read this:
2 Likes
thanks. so JSONP is the solution to my cross origin issue?
1 Like
Yes.
If you want to do this challenge in plain JS you'll have to get a Twitch API key and make requests to the real Twitch API.
Or you can implement one of the solutions from that Stack Overflow link.
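For reference, a minimal JSONP request in plain JS looks roughly like this. This is only a sketch: it assumes the endpoint honours a callback query parameter (which the working jQuery ?callback=? example above suggests), and the function and callback names here are made up.
function jsonp(url, callbackName, onData) {
  var script = document.createElement('script');
  // the server wraps its JSON in a call to this global function
  window[callbackName] = function (data) {
    delete window[callbackName];
    document.body.removeChild(script);
    onData(data);
  };
  script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + callbackName;
  document.body.appendChild(script);
}

jsonp('https://wind-bow.gomix.me/twitch-api/users/freecodecamp', 'handleTwitchUser', function (data) {
  console.log(data);
});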
ok awesome. thanks for your help!
I was having the same problem. thank you for posting this question.
The original Twitch API needs JSONP too.
@jenovs You are right, thanks. It was the old version.
I enabled CORS on GET for https://wind-bow.gomix.me/. Would you help me testing it, when you have time ?
Created a quick mockup. Seems to work. Only getting sporadic 404 error, but I suppose it’s some kind of protection on the server?
@jenovs thanks :heart_decoration: :thumbsup:
The proxy has a rate limiter, needed to not exceed gomix capabilities. The 404 errors are probably related to that.
@JSDesign now you can use JS XMLHttpRequest.
nice thanks for updating it :slight_smile:
get_cython_type.hpp
#ifndef MLPACK_BINDINGS_PYTHON_GET_CYTHON_TYPE_HPP
#define MLPACK_BINDINGS_PYTHON_GET_CYTHON_TYPE_HPP

#include <mlpack/prereqs.hpp>
#include <mlpack/core/util/is_std_vector.hpp>

namespace mlpack {
namespace bindings {
namespace python {

template<typename T>
inline std::string GetCythonType(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<T>>::type* = 0,
    const typename boost::disable_if<data::HasSerialize<T>>::type* = 0,
    const typename boost::disable_if<arma::is_arma_type<T>>::type* = 0)
{
  return "unknown";
}

template<>
inline std::string GetCythonType<int>(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<int>>::type*,
    const typename boost::disable_if<data::HasSerialize<int>>::type*,
    const typename boost::disable_if<arma::is_arma_type<int>>::type*)
{
  return "int";
}

template<>
inline std::string GetCythonType<double>(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<double>>::type*,
    const typename boost::disable_if<data::HasSerialize<double>>::type*,
    const typename boost::disable_if<arma::is_arma_type<double>>::type*)
{
  return "double";
}

template<>
inline std::string GetCythonType<std::string>(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<std::string>>::type*,
    const typename boost::disable_if<data::HasSerialize<std::string>>::type*,
    const typename boost::disable_if<arma::is_arma_type<std::string>>::type*)
{
  return "string";
}

template<>
inline std::string GetCythonType<size_t>(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<size_t>>::type*,
    const typename boost::disable_if<data::HasSerialize<size_t>>::type*,
    const typename boost::disable_if<arma::is_arma_type<size_t>>::type*)
{
  return "size_t";
}

template<>
inline std::string GetCythonType<bool>(
    util::ParamData& /* d */,
    const typename boost::disable_if<util::IsStdVector<bool>>::type*,
    const typename boost::disable_if<data::HasSerialize<bool>>::type*,
    const typename boost::disable_if<arma::is_arma_type<bool>>::type*)
{
  return "cbool";
}

template<typename T>
inline std::string GetCythonType(
    util::ParamData& d,
    const typename boost::enable_if<util::IsStdVector<T>>::type* = 0)
{
  return "vector[" + GetCythonType<typename T::value_type>(d) + "]";
}

template<typename T>
inline std::string GetCythonType(
    util::ParamData& d,
    const typename boost::enable_if<arma::is_arma_type<T>>::type* = 0)
{
  std::string type = "Mat";
  if (T::is_row)
    type = "Row";
  else if (T::is_col)
    type = "Col";

  return "arma." + type + "[" + GetCythonType<typename T::elem_type>(d) + "]";
}

template<typename T>
inline std::string GetCythonType(
    util::ParamData& d,
    const typename boost::disable_if<arma::is_arma_type<T>>::type* = 0,
    const typename boost::enable_if<data::HasSerialize<T>>::type* = 0)
{
  return d.cppType + "*";
}

} // namespace python
} // namespace bindings
} // namespace mlpack

#endif
The price of a computer is $4500. The computer's value will decrease by $900 each year. 1. If a graph is used to model this function, at what point is the y-intercept? 2. What does the y-intercept represent? 3. What is the slope of the line that models this function? 4. Write an equation for the function.
I REALLY
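(A quick sketch of the linear model being described, in case it helps: the value after t years is V(t) = 4500 - 900t, so the graph crosses the y-axis at (0, 4500), the y-intercept represents the starting price, and the slope is -900, i.e. the value drops $900 per year.)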
WordPress Development Stack Exchange is a question and answer site for WordPress developers and administrators. It's 100% free, no registration required.
When using Custom Posts and adding a Metabox, I was wondering how I could add a drop-down menu that would populate data that was added by the user.
For example I want to be able to select a 'source' for the post using the drop-down, but under that I want the user to be able to add sources to that same drop-down. I've tried doing this on my own by using taxonomies for the sources and trying to populate the drop-down with those taxonomies, but I probably did it wrong.
Any idea of how I can go about this?
1 Answer
NOTE: It's kind of late, so the code might not be perfect, but it should get you on the right track.
Well, you could theoretically just set up a custom taxonomy and then the user can just use the text box at the bottom and then the check boxes in the Taxonomy selection area.
I'll be assuming you are using the standard Post type for these, but you can adjust it in the register_taxonomy call.
Something like:
register_taxonomy("post_source", array("post"), array("hierarchical" => true, "label" => "Sources", "singular_label" => "Source", "rewrite" => true));
But, if you're set on the drop-down, you'll need to set it up a little differently.
The code below will be using Ajax to send the newly added source to the DB, and, if that is successful, add that source to the dropdown and select it automatically.
Step 1: Create a Table for Storing Sources
You'll want to create a table for storing these so that you can refer to them when the user tries to add a new source and make sure that it isn't already in the database.
And, you'll need them stored somewhere so that you can pull them into the dropdown.
Step 2: Create the Meta Box
//run this function on admin_init
add_action('admin_init','add_post_source_box');
//add source selection box to post editor
function add_post_source_box() {
add_meta_box("post_source_meta", "Post Source", "post_source", "post", "side", "high");
}
Step 3: Populate the Meta Box
//populate the post_source box
function post_source() {
global $post;
$custom = get_post_custom($post->ID);
$post_source = $custom['post_source'][0]; ?>
<script type="text/javascript">
jQuery(function() {
jQuery("#post_source_add_button").click(function() {
var new_source = jQuery("#post_source_add").val();
var add_source = {"ajaxurl":"<?php echo get_bloginfo('wpurl'); ?>/wp-admin/admin-ajax.php"};
if(new_source.length) {
jQuery.post(
add_source.ajaxurl,
{
action: 'add_source',
source: new_source
},
function(response,source_id) {
if(response == 'success') {
$("#post_source_dropdown").append('<option></option>').val(source_id).text(new_source);
$("#post_source_dropdown").find('option').each(function() {
if(jQuery(this).is('selected') {
jQuery(this).attr('selected','false');
} else if(jQuery(this).attr('id') == source_id) {
jQuery(this).attr('selected','true');
}
});
} else {
alert('Error adding source.');
}
}
);
}
});
});
</script>
<table cellpadding="0" cellspacing="5" border="0">
<tr>
<td><label for="post_source_dropdown">Post Source</label></td>
</tr>
<tr>
<td>
<select name="post_source_dropdown" id="post_source_dropdown">
<option value="">Select a Source</option>
<?php //get the various sources already added
global $wpdb;
$sources = $wpdb->get_results("SELECT * FROM wp_post_sources ORDER BY source_name");
foreach($sources as $source) {
if($source->id == $post_source) {
$selected = ' selected';
} else {
$selected = '';
}
echo '<option value="' . $source->id . '"' . $selected . '>' . $source->source_name . '</option>';
} ?>
</td>
</tr>
<tr>
<td><label for="post_source_add">Add New Source</label></td>
</tr>
<tr>
<td>
<input type="text" name="post_source_add" id="post_source_add" value="" />
<br />
<input type="button" id="post_source_add_button" name="post_source_add_button" value="Add Source" />
</td>
</tr>
</table>
<?php }
Step 4: Save the Custom Meta
//run this function when saving posts
add_action('save_post','save_post_source');
//save the source when the user saves the post
function save_post_source() {
global $post;
update_post_meta($post->ID,'post_source',$_POST['post_source_dropdown']);
}
I'd recommend moving the JavaScript into a separate JS file that you include using the admin_enqueue_scripts action; I just wrote it directly in there for this example.
This post has a great breakdown of using Ajax in a plug-in, which I would honestly recommend you turn this into instead of dropping it into the functions.php. That way if you ever want to swap themes, you won't have to worry about remembering to move things from the functions.php of the old theme.
Step 5: Process the New Source via Ajax
I'm going to leave this one up to you for right now because it's late, and getting all of this in here took longer than I expected. Just be sure to send back the response and the ID of the source added to the database.
EDIT: I forgot a really important part, it is outlined in that article I linked, but it should be noted again because it really is the lynch pin of the whole Ajax submission thing.
//run this function when admin-ajax receives the action 'add_source'
add_action('wp_ajax_add_source','add_source_to_db');
//add the new source to the db
function add_source_to_db() {
//get the source from the $_POST array and then add it to the db
//create the response object to send back to the JS function
}
Now you just need to fill in the add_source_to_db function and you're all set.
HINT: You'll want to use the global $wpdb object to insert the new source and then use $wpdb->insert_id to get the ID of the source that was just inserted.
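For what it's worth, a minimal sketch of that processing function might look like the following (this assumes the wp_post_sources table and source_name column used in the dropdown code above; adjust the names and the response format to whatever your JavaScript expects):
//add the new source to the db and send a response back to the JS callback
function add_source_to_db() {
    global $wpdb;
    //get the source from the $_POST array and then add it to the db
    $source = sanitize_text_field($_POST['source']);
    $wpdb->insert('wp_post_sources', array('source_name' => $source));
    $source_id = $wpdb->insert_id;
    //create the response to send back to the JS function
    echo ($source_id) ? 'success' : 'error';
    die(); //stop here so admin-ajax.php doesn't append extra output
}
Note that the jQuery callback above only receives the response body, so if you need the new ID on the client you'd have to include it in that echoed response (for example as JSON) and parse it out in the callback.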
Hopefully what I've provided can get you started.
If you need help with the processing file, let me know and I'll try to provide some code. However, the link regarding the Ajax in plug-ins should have everything you need to get it working.
Apache Tomcat 8.0.43
Package org.apache.tomcat.dbcp.dbcp2
Database Connection Pool API.
Package org.apache.tomcat.dbcp.dbcp2 Description
Database Connection Pool API.
Overview in Dialog Form
Q: How do I use the DBCP package?
A: There are two primary ways to access the DBCP pool, as a Driver, or as a DataSource. You'll want to create an instance of PoolingDriver or PoolingDataSource. When using one of these interfaces, you can just use your JDBC objects the way you normally would. Closing a Connection will simply return it to its pool.
Q: But PoolingDriver and PoolingDataSource both expect an ObjectPool as an input. Where do I get one of those?
A: The ObjectPool interface is defined in Commons Pool. You can use one of the provided implementations such as GenericObjectPool or SoftReferenceObjectPool or you can create your own.
Q: Ok, I've found an ObjectPool implementation that I think suits my connection pooling needs. But it wants a PooledObjectFactory. What should I use for that?
A: The DBCP package provides a class for this purpose. It's called PoolableConnectionFactory. It implements the factory and lifecycle methods of PooledObjectFactory for Connections. But it doesn't create the actual database Connections itself, it uses a ConnectionFactory for that. The PoolableConnectionFactory will take Connections created by the ConnectionFactory and wrap them with classes that implement the pooling behaviour.
Several implementations of ConnectionFactory are provided--one that uses DriverManager to create connections (DriverManagerConnectionFactory), one that uses a Driver to create connections (DriverConnectionFactory), one that uses a DataSource to create connections (DataSourceConnectionFactory).
Q: I think I'm starting to get it, but can you walk me through it again?
A: Sure. Let's assume you want to create a DataSource that pools Connections. Let's also assume that those pooled Connections should be obtained from the DriverManager. You'll want to create a PoolingDataSource.
The PoolingDataSource uses an underlying ObjectPool to create and store its Connection.
To create an ObjectPool, you'll need a PooledObjectFactory that creates the actual Connections. That's what PoolableConnectionFactory is for.
To create the PoolableConnectionFactory, you'll need at least two things:
1. A ConnectionFactory from which the actual database Connections will be obtained.
2. An empty and factory-less ObjectPool in which the Connections will be stored.
When you pass an ObjectPool into the PoolableConnectionFactory, it will automatically register itself as the PooledObjectFactory for that pool.
In code, that might look like this:
GenericObjectPool connectionPool = new GenericObjectPool(null);
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory("jdbc:some:connect:string", "username", "password");
PoolableConnectionFactory poolableConnectionFactory = new PoolableConnectionFactory(connectionFactory,connectionPool,null,null,false,true);
PoolingDataSource dataSource = new PoolingDataSource(connectionPool);
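Once the dataSource exists, you use it like any other DataSource, and closing the Connection you get from it returns the connection to the pool rather than destroying it. Roughly:
Connection conn = dataSource.getConnection();
// ... use conn like a normal JDBC connection ...
conn.close(); // returns the connection to the pool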
To create a PoolingDriver, we do the same thing, except that instead of creating a DataSource on the last line, we create a PoolingDriver, and register the connectionPool with it. E.g.,:
GenericObjectPool connectionPool = new GenericObjectPool(null);
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory("jdbc:some:connect:string", "username", "password");
PoolableConnectionFactory poolableConnectionFactory = new PoolableConnectionFactory(connectionFactory,connectionPool,null,null,false,true);
PoolingDriver driver = new PoolingDriver();
driver.registerPool("example",connectionPool);
Since the PoolingDriver registers itself with the DriverManager when it is created, now you can just go to the DriverManager to create your Connections, like you normally would:
Connection conn = DriverManager.getConnection("jdbc:apache:commons:dbcp:example");
Welcome! To start off, let’s figure out how long it took our hiker to reach his car. If each straight line takes 5 minutes to walk and each turn takes 5 minutes to take, how long did it take him to get there?
The Practical Client
Exploiting TypeScript Arrays
TypeScript doesn't have the rich set of native collection classes that you're used to in the .NET Framework -- instead, it has just arrays and tuples. Fortunately, you can do quite a lot with them.
Compared to the .NET Framework with its rich toolbox of collections, TypeScript is poverty stricken. Fundamentally, TypeScript supports two kinds of collections: arrays (where all the members are of the same type and are accessed by position) and tuples (where each member can be of a different type). Arrays are probably the more familiar of the two types of collections so I'll look at them in this month's column and come back to tuples next month.
TypeScript Arrays
In TypeScript, arrays are themselves a data type, just like number and string. TypeScript provides quite a lot of ways for you to declare an array…but all of the methods give you the same result.
For example, you can declare an array using a "square bracket syntax" that will look very familiar to JavaScript developers. This example defines an array that can only hold Customer objects using that [] syntax:
var custs: Customer[];
You can also declare an array as an Array of some specific data type using a syntax that looks very much like C# generics. Like my previous example, this code also declares an array that can only hold Customer objects:
var custs: Array<Customer>;
You can also use TypeScript's union types to define an array that can hold several different types. This code defines arrays that can hold either Customer or CustomerNew objects (and only Customer and CustomerNew objects) using both syntaxes:
var custsExpanded: (Customer | CustomerNew)[];
var custsExpanded: Array<Customer | CustomerNew>;
Regardless of how you declare an array, you won't be able to use it until you initialize it. This code provides a very basic initialization:
custs = [];
This example initializes my array with two Customer objects using an array literal:
custs = [new Customer("A123"), new Customer("B456")];
You can also combine your array's declaration with its initialization:
var custs: Customer[]= [new Customer("A123"), new Customer("B456")];
var custs: Array<Customer> = [new Customer("A123"), new Customer("B456")];
I don't often refer to the JavaScript that is generated from my TypeScript code but, just to make the point that it doesn't matter what syntax you use, it's worthwhile to point out that both of those lines of TypeScript generate identical JavaScript:
var custs = [new Customer("A123"), new Customer("B456")];
And, regardless of which syntax you use for declaring an array, you'll find that the resulting objects have exactly the same methods and properties that you'd expect from a JavaScript array: length, pop, push, splice, and so on. What you've defined is something that the TypeScript specification refers to as an "array-like type" (anything that can be assigned to an array of the any type: any[]).
Controlling Functionality
If you'd prefer to limit what can be done with any particular array, you can define your own collection interface, provided it specifies some subset of the full Array functionality. This can be useful when, for example, you have a method that returns a collection and you want to limit what the client can do with your collection.
This example defines a collection of Customer objects by defining an interface with an indexer. The code then uses that interface to define a collection:
interface IArray {
[position: number]: Customer;
}
var custs: IArray = [new Customer("A123"), new Customer("B456")];
As far as the TypeScript compiler is concerned, this custs collection will have no other members than the indexer defined in IArray (i.e. no length property, no push method, etc.). The only thing that can be done with this array is initialize it and access items in the collection by position using the indexer. That's what this code does:
var cust: Customer;
cust = custs[0];
While this controls the operations that can be performed on the collection, it's probably a little extreme. For example, without a length property it's impossible to build a for loop to process this collection (by "impossible" I mean that I couldn't figure out how to do it). However, by adding more members of the array in the interface, you can selectively provide access to the members of the underlying JavaScript array. This example provides my collection with a length property and a push method:
interface IArrayExtended {
[position: number]: Customer;
length: number;
push(item: Customer): number;
};
A collection defined with this interface would support processing in a for loop (because it has a length property) and also adding items to the array (through the push method), as in this code:
private custs: IArrayExtended;
this.custs = [new Customer("A123"), new Customer("B456")];
this.custs.push(new Customer("C789"));
for (var i = 0; this.custs.length > i; i++) { }
Creating Custom Collections
You might be tempted to believe that you could leverage this technology to create a collection class with your own custom methods that supports an implicit indexer (I certainly was). As an example, you might want to create a Queue class that you could use like this:
var q: Queue<Customer>;
q = new Queue<Customer>();
q.enqueue(new Customer("A123"));
q.enqueue(new Customer("A123"));
for (var i = 0; q.length > i; i++)
{
var cust = q[i];
}
You could create a class that would work like this…right up until the code attempts to access the class's indexer (the statement containing the expression q[i]). While it's easy to create an indexer in a class in Visual Basic or C#, it's not an option in TypeScript. There is, of course, nothing stopping you from creating an itemAt method that would give you the functionality you want:
for (var i = 0; q.length > i; i++)
{
var cust = q.itemAt(i);
}
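For what it's worth, a minimal Queue class along those lines might look like this (my sketch, not code from the column):
class Queue<T> {
  private items: T[] = [];
  get length(): number { return this.items.length; }
  enqueue(item: T): void { this.items.push(item); }
  dequeue(): T { return this.items.shift(); }
  itemAt(position: number): T { return this.items[position]; }
}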
Dictionaries
Retrieving items by position is useful. However, most languages also include dictionary collections where the indexer can be passed a key to retrieve the item in the dictionary assigned to that key. TypeScript allows you to define the interface that a dictionary collection requires: An indexer that accepts a string and returns the type of the items in the array. Here's what the interface for a dictionary of Customer objects looks like:
interface IDictionary {
[key: string]: Customer;
};
Unfortunately, the TypeScript compiler will then constrain any other members in a collection that uses that interface to results that can be assigned to that indexer. Effectively, your dictionary can have only one member -- the indexer. This means that you can create your dictionary but you won't be able to define a length member that would allow you to loop through the dictionary to process all the members in a for loop.
Furthermore, a variable defined with this interface can not be assigned to a variable of type any[] so it's not one of TypeScript's "array-like types." It is, instead, an object and must be initialized using an object initializer (e.g. {}). This code uses my interface to create a dictionary for Customer objects, stores a Customer object in it, and then retrieves it:
var custsD: IDictionary;
custsD = {};
var cust: Customer;
custsD["A123"] = new Customer("A123");
cust = custsD["A123"];
However, because the positions in an array don't have to be contiguous, you might be better off to think of arrays in TypeScript as dictionaries…just dictionaries with numerical keys. This code, which assigns items to "positions" 0 and 3, works just fine, for example:
this.custs = [];
this.custs[0] = new Customer("A123");
this.custs[3] = new Customer("B456");
Accessing this array using positions 0 through 3 would return a Customer object for positions 0 and 3 and the undefined type for positions 1 and 2. If you wanted to consider a TypeScript array as a dictionary with numeric keys (in this case, with two assigned keys: 0 and 3) that, rather than throwing an error, returns the undefined type when you use a key that isn't assigned, you wouldn't go far wrong. You can, for example, use negative values and strings as your array index for all of the collections I've defined including ones using my IArray interface:
this.custs[1] = new Customer("A123");
this.custs["A123"] = new Customer("A123");
Processing All Members with For-in
In other languages you may be used to using a For…Each construct that retrieves each item in a collection and stops when the last item in the collection has been processed. TypeScript has a "syntactically" similar construct in the For-in statement…but it does something different. What the For-in statement does is process all the members of whatever it's given to process (for an object, you'll get a list of all the properties and methods in the object). This is where thinking of an array as a dictionary with numeric keys pays off.
For the collections I've defined in this column, the For-in loop returns the keys of the items in your collection rather than the items themselves. To use For-in with any of the collections (arrays or dictionaries) I've declared, you need to pass the variable used in the For-in loop to the indexer of the collection:
for (var c in this.custs) {
cust = this.custs[c];
}
While more limited than other languages, TypeScript does give you arrays and a limited dictionary. But, as I said, arrays form only one half of the collections available in TypeScript -- the other half are tuples. As you'll see in next month's column, you can leverage tuples to create something more like the dictionary classes you're used to.
About the Author
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
Why JavaScript is Eating HTML
Web development is always changing. One trend in particular has become very popular lately, and it fundamentally goes against the conventional wisdom about how a web page should be made. It is exciting for some but frustrating for others, and the reasons for both are difficult to explain.
A web page is traditionally made up of three separate parts with separate responsibilities: HTML code defines the structure and meaning of the content on a page, CSS code defines its appearance, and JavaScript code defines its behavior. On teams with dedicated designers, HTML/CSS developers and JavaScript developers, this separation of concerns aligns nicely with job roles: Designers determine the visuals and user interactions on a page, HTML and CSS developers reproduce those visuals in a web browser, and JavaScript developers add the user interaction to tie it all together and “make it work.” People can work on one piece without getting involved with all three.
In recent years, JavaScript developers have realized that by defining a page’s structure in JavaScript instead of in HTML (using frameworks such as React), they can simplify the development and maintenance of user interaction code that is otherwise much more complex to build. Of course, when you tell someone that the HTML they wrote needs to be chopped up and mixed in with JavaScript they don’t know anything about, they can (understandably) become frustrated and start asking what the heck we’re getting out of this.
As a JavaScript developer on a cross-functional team, I get this question occasionally and I often have trouble answering it. All of the materials I’ve found on this topic are written for an audience that is already familiar with JavaScript — which is not terribly useful to those who focus on HTML and CSS. But this HTML-in-JS pattern (or something else that provides the same benefits) will likely be around for a while, so I think it’s an important thing that everyone involved in web development should understand.
This article will include code examples for those interested, but my goal is to explain this concept in a way that can be understood without them.
Background: HTML, CSS, and JavaScript
To broaden the audience of this article as much as possible, I want to give a quick background on the types of code involved in creating a web page and their traditional roles. If you have experience with these, you can skip ahead.
HTML is for structure and semantic meaning
HTML (HyperText Markup Language) code defines the structure and meaning of the content on a page. For example, this article’s HTML contains the text you’re reading right now, the fact that it is in a paragraph, and the fact that it comes after a heading and before a CodePen.
Let’s say we want to build a simple shopping list app. We might start with some HTML like this:
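(The embedded CodePen isn't reproduced here; the markup it contains is roughly the following. The id attributes are my own additions so the JavaScript example further down has something to hook into.)
<section>
  <h1 id="total-text">Shopping List (2 items)</h1>
  <input id="new-item-text" type="text">
  <button id="add-button">Add Item</button>
  <ul id="item-list">
    <li>Eggs</li>
    <li>Butter</li>
  </ul>
</section>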
We can save this code in a file, open it in a web browser, and the browser will display the rendered result. As you can see, the HTML code in this example represents a section of a page that contains a heading reading “Shopping List (2 items),” a text input box, a button reading “Add Item,” and a list with two items reading “Eggs” and “Butter.” In a traditional website, a user would navigate to an address in their web browser, then the browser would request this HTML from a server, load it and display it. If there are already items in the list, the server could deliver HTML with the items already in place, like they are in this example.
Try to type something in the input box and click the “Add Item” button. You’ll notice nothing happens. The button isn’t connected to any code that can change the HTML, and the HTML can’t change itself. We’ll get to that in a moment.
CSS is for appearance
CSS (Cascading Style Sheets) code defines the appearance of a page. For example, this article’s CSS contains the font, spacing, and color of the text you’re reading.
You may have noticed that our shopping list example looks very plain. There is no way for HTML to specify things like spacing, font sizes, and colors. This is where CSS (Cascading Style Sheets) comes in. On the same page as the HTML above, we could add CSS code to style things up a bit:
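(Again, the embedded example isn't reproduced; a rough equivalent of its CSS, with made-up values:)
section {
  background-color: #d9f2fd;
  font-family: sans-serif;
  padding: 16px;
}
h1 {
  font-size: 18px;
  font-weight: bold;
}
button {
  font-size: 14px;
  font-weight: bold;
}
ul {
  font-size: 16px;
}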
As you can see, this CSS changed the font sizes and weights and gave the section a nice background color (designers, please don’t @ me; I know this is still ugly). A developer can write style rules like these and they will be applied consistently to any HTML structure: if we add more <section>, <button> or <ul> elements to this page, they will have the same font changes applied.
The button still doesn’t do anything, though: that’s where JavaScript comes in.
JavaScript is for behavior
JavaScript code defines the behavior of interactive or dynamic elements on a page. For example, the embedded CodePen examples in this article are powered by JavaScript.
Without JavaScript, to make the Add Item button in our example work would require us to use special HTML to make it submit data back to the server (<form action="...">, if you’re curious). Then the browser would discard the entire page and reload an updated version of the entire HTML file. If this shopping list was part of a larger page, anything else the user was doing would be lost. Scrolled down? You’re back at the top. Watching a video? It starts over. This is how all web applications worked for a long time: any time a user interacted with a webpage, it was as if they closed their web browser and opened it again. That’s not a big deal for this simple example, but for a large complex page which could take a while to load, it’s not efficient for either the browser or the server.
If we want anything to change on a webpage without reloading the entire page, we need JavaScript (not to be confused with Java, which is an entirely different language… don’t get me started). Let’s try adding some:
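The live example is another CodePen embed; a simplified sketch of the imperative JavaScript it describes could look like the following. It assumes the element ids from the markup sketch above, and it uses the name totalText because that line is referred to again a little later:

var totalText = document.querySelector('h1');
var itemInput = document.getElementById('item-input');
var addButton = document.getElementById('add-button');
var shoppingList = document.getElementById('shopping-list');

addButton.addEventListener('click', function () {
  // Create a new list item from the text box and append it to the list
  var newItem = document.createElement('li');
  newItem.innerHTML = itemInput.value;
  shoppingList.appendChild(newItem);
  itemInput.value = '';
  // Keep the heading's item count in sync with the list
  totalText.innerHTML = 'Shopping List (' + shoppingList.children.length + ' items)';
});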
Now when we type some text in the box and click the “Add Item” button, our new item is added to the list and the item count at the top is updated! In a real app, we would also add some code to send the new item to the server in the background so that it will still show up the next time we load the page.
Separating JavaScript from the HTML and CSS makes sense in this simple example. Traditionally, even more complicated interactions would be added this way: HTML is loaded and displayed, and JavaScript runs afterwards to add things to it and change it. As things get more complex, however, we start needing to keep better track of things in our JavaScript.
If we were to keep building this shopping list app, next we’d probably add buttons for editing or removing items from the list. Let’s say we write the JavaScript for a button that removes an item, but we forget to add the code that updates the item total at the top of the page. Suddenly we have a bug: after a user removes an item, the total on the page won’t match the list! Once we notice the bug, we fix it by adding that same totalText.innerHTML line from our “Add Item” code to the “Remove Item” code. Now we have the same code duplicated in more than one place. Later on, let’s say we want to change that code so that instead of “(2 items)” at the top of the page it reads “Items: 2.” We’ll have to make sure we update it in all three places: in the HTML, in the JavaScript for the “Add Item” button, and in the JavaScript for the “Remove Item” button. If we don’t, we’ll have another bug where that text abruptly changes after a user interaction.
In this simple example, we can already see how quickly these things can get messy. There are ways to organize our JavaScript to make this kind of problem easier to deal with, but as things continue to get more complex, we’ll need to keep restructuring and rewriting things to keep up. As long as HTML and JavaScript are kept separate, a lot of effort can be required to make sure everything is kept in sync between them. That’s one of the reasons why new JavaScript frameworks, like React, have gained traction: they are designed to show the relationships between things like HTML and JavaScript. To understand how that works, we first need to understand just a teeny bit of computer science.
Two kinds of programming
The key concept to understand here involves the distinction between two common programming styles. (There are other programming styles, of course, but we’re only dealing with two of them here.) Most programming languages lend themselves to one or the other of these, and some can be used in both ways. It’s important to grasp both in order to understand the main benefit of HTML-in-JS from a JavaScript developer’s perspective.
• Imperative programming: The word “imperative” here implies commanding a computer to do something. A line of imperative code is a lot like an imperative sentence in English: it gives the computer a specific instruction to follow. In imperative programming, we must tell the computer exactly how to do every little thing we need it to do. In web development, this is starting to be considered “the old way” of doing things and it’s what you do with vanilla JavaScript, or libraries like jQuery. The JavaScript in my shopping list example above is imperative code.
• Imperative: “Do X, then do Y, then do Z”.
• Example: When the user selects this element, add the .selected class to it; and when the user de-selects it, remove the .selected class from it.
• Declarative programming: This is a more abstract layer above imperative programming. Instead of giving the computer instructions, we instead “declare” what we want the results to be after the computer does something. Our tools (e.g. React) figure out the how for us automatically. These tools are built with imperative code on the inside that we don’t have to pay attention to from the outside.
• Declarative: “The result should be XYZ. Do whatever you need to do to make that happen.”
• Example: This element has the .selected class if the user has selected it.
HTML is a declarative language
Forget about JavaScript for a moment. Here’s an important fact: HTML on its own is a declarative language. In an HTML file, you can declare something like:
<section>
<h1>Hello</h1>
<p>My name is Mike.</p>
</section>
When a web browser reads this HTML, it will figure out these imperative steps for you and execute them:
1. Create a section element
2. Create a heading element of level 1
3. Set the inner text of the heading element to “Hello”
4. Place the heading element into the section element
5. Create a paragraph element
6. Set the inner text of the paragraph element to “My name is Mike”
7. Place the paragraph element into the section element
8. Place the section element into the document
9. Display the document on the screen
As a web developer, the details of how a browser does these things is irrelevant; all that matters is that it does them. This is a perfect example of the difference between these two kinds of programming. In short, HTML is a declarative abstraction wrapped around a web browser’s imperative display engine. It takes care of the “how” so you only have to worry about the “what.” You can enjoy life writing declarative HTML because the fine people at Mozilla or Google or Apple wrote the imperative code for you when they built your web browser.
JavaScript is an imperative language
We’ve already looked at a simple example of imperative JavaScript in the shopping list example above, and I mentioned how the complexity of an app’s features has ripple effects on the effort required to implement them and the potential for bugs in that implementation. Now let’s look at a slightly more complex feature and see how it can be simplified by using a declarative approach.
Imagine a webpage that contains the following:
• A list of labelled checkboxes, each row of which changes to a different color when it is selected
• Text at the bottom like “1 of 4 selected” that should update when the checkboxes change
• A “Select All” button which should be disabled if all checkboxes are already selected
• A “Select None” button which should be disabled if no checkboxes are selected
Here’s an implementation of this in plain HTML, CSS and imperative JavaScript:
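That implementation lives in another CodePen embed, so here is a condensed sketch of just the imperative JavaScript side of it. The markup, the PatternFly classes, and the element ids used here are assumptions, and the real demo spells each handler out in more detail, but the bookkeeping burden is the same:

var rows = Array.from(document.querySelectorAll('.item-row'));
var checkboxes = rows.map(function (row) { return row.querySelector('input[type="checkbox"]'); });
var summary = document.getElementById('summary');
var selectAll = document.getElementById('select-all');
var selectNone = document.getElementById('select-none');

checkboxes.forEach(function (box, i) {
  box.addEventListener('change', function () {
    // Update this row's color
    rows[i].classList.toggle('selected', box.checked);
    // Count the checked boxes and update the summary text
    var checked = checkboxes.filter(function (b) { return b.checked; }).length;
    summary.innerHTML = checked + ' of ' + checkboxes.length + ' selected';
    // Enable or disable the buttons
    selectAll.disabled = checked === checkboxes.length;
    selectNone.disabled = checked === 0;
  });
});

selectAll.addEventListener('click', function () {
  checkboxes.forEach(function (b) { b.checked = true; });
  rows.forEach(function (row) { row.classList.add('selected'); });
  summary.innerHTML = checkboxes.length + ' of ' + checkboxes.length + ' selected';
  selectAll.disabled = true;
  selectNone.disabled = false;
});

selectNone.addEventListener('click', function () {
  checkboxes.forEach(function (b) { b.checked = false; });
  rows.forEach(function (row) { row.classList.remove('selected'); });
  summary.innerHTML = '0 of ' + checkboxes.length + ' selected';
  selectAll.disabled = false;
  selectNone.disabled = true;
});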
There isn’t much CSS code here because I’m using the wonderful PatternFly design system, which provides most of the CSS for my example. I imported their CSS file in the CodePen settings.
All the small things
In order to implement this feature with imperative JavaScript, we need to give the browser several granular instructions. This is the English-language equivalent to the code in my example above:
• In our HTML, we declare the initial structure of the page:
• There are four row elements, each containing a checkbox. The third box is checked.
• There is some summary text which reads “1 of 4 selected.”
• There is a “Select All” button which is enabled.
• There is a “Select None” button which is disabled.
• In our JavaScript, we write instructions for what to change when each of these events occurs:
• When a checkbox changes from unchecked to checked:
• Find the row element containing the checkbox and add the .selected CSS class to it.
• Find all the checkbox elements in the list and count how many are checked and how many are not checked.
• Find the summary text element and update it with the checked number and the total number.
• Find the “Select None” button element and enable it if it was disabled.
• If all checkboxes are now checked, find the “Select All” button element and disable it.
• When a checkbox changes from checked to unchecked:
• Find the row element containing the checkbox and remove the .selected class from it.
• Find all the checkbox elements in the list and count how many are checked and not checked.
• Find the summary text element and update it with the checked number and the total number.
• Find the “Select All” button element and enable it if it was disabled.
• If all checkboxes are now unchecked, find the “Select None” button element and disable it.
• When the “Select All” button is clicked:
• Find all the checkbox elements in the list and check them all.
• Find all the row elements in the list and add the .selected class to them.
• Find the summary text element and update it.
• Find the “Select All” button and disable it.
• Find the “Select None” button and enable it.
• When the “Select None” button is clicked:
• Find all the checkbox elements in the list and uncheck them all.
• Find all the row elements in the list and remove the .selected class from them.
• Find the summary text element and update it.
• Find the “Select All” button and enable it.
• Find the “Select None” button and disable it.
Wow. That’s a lot, right? Well, we better remember to write code for each and every one of those things. If we forget or screw up any of those instructions, we will end up with a bug where the totals don’t match the checkboxes, or a button is enabled that doesn’t do anything when you click it, or a row ends up with the wrong color, or something else we didn’t think of and won’t find out about until a user complains.
The big problem here is that there is no single source of truth for the state of our app, which in this case is “which checkboxes are checked?” The checkboxes know whether or not they are checked, of course, but the row styles also have to know, the summary text has to know, and each button has to know. Five copies of this information are stored separately all around the HTML, and when it changes in any of those places, the JavaScript developer needs to catch that and write imperative code to keep the others in sync.
This is still only a simple example of one small component of a page. If that sounds like a headache, imagine how complex and fragile an application becomes when you need to write the whole thing this way. For many complex modern web applications, it’s not a scalable solution.
Moving towards a single source of truth
Tools, like React, allow us to use JavaScript in a declarative way. Just as HTML is a declarative abstraction wrapped around the web browser’s display instructions, React is a declarative abstraction wrapped around JavaScript.
Remember how HTML let us focus on the structure of a page and not the details of how the browser displays that structure? Well, when we use React, we can focus on the structure again by defining it based on data stored in a single place. When that source of truth changes, React will update the structure of the page for us automatically. It will take care of the imperative steps behind the scenes, just like the web browser does for HTML. (Although React is used as an example here, this concept is not unique to React and is used by other frameworks, such as Vue.)
Let’s go back to our list of checkboxes from the example above. In this case, the truth we care about is simple: which checkboxes are checked? The other details on the page (e.g. what the summary says, the color of the rows, whether or not the buttons are enabled) are effects derived from that same truth. So, why should they need to have their own copy of this information? They should just use the single source of truth for reference, and everything on the page should “just know” which checkboxes are checked and conduct themselves accordingly. You might say that the row elements, summary text, and buttons should all be able to automatically react to a checkbox being checked or unchecked. (See what I did there?)
Tell me what you want (what you really, really want)
In order to implement this page with React, we can replace the list with a few simple declarations of facts:
• There is a list of true/false values called checkboxValues that represents which boxes are checked.
• Example: checkboxValues = [false, false, true, false]
• This list represents the truth that we have four checkboxes, and that the third one is checked.
• For each value in checkboxValues, there is a row element which:
• has a CSS class called .selected if the value is true, and
• contains a checkbox, which is checked if the value is true.
• There is a summary text element that contains the text “{x} of {y} selected” where {x} is the number of true values in checkboxValues and {y} is the total number of values in checkboxValues.
• There is a “Select All” button that is enabled if there are any false values in checkboxValues.
• There is a “Select None” button that is enabled if there are any true values in checkboxValues.
• When a checkbox is clicked, its corresponding value changes in checkboxValues.
• When the “Select All” button is clicked, it sets all values in checkboxValues to true.
• When the “Select None” button is clicked, it sets all values in checkboxValues to false.
You’ll notice that the last three items are still imperative instructions (“When this happens, do that”), but that’s the only imperative code we need to write. It’s three lines of code, and they all update the single source of truth. The rest of those bullets are declarations (“there is a…”) which are now built right into the definition of the page’s structure. In order to do this, we write our elements in a special JavaScript syntax provided by React called JSX, which resembles HTML but can contain JavaScript logic. That gives us the ability to mix logic like “if” and “for each” with the HTML structure, so the structure can be different depending on the contents of checkboxValues at any given time.
Here’s the same checkbox list example as above, this time implemented with React:
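The original is, once again, a CodePen embed. The sketch below is an approximation of what such a component can look like using React with hooks; the component name, item labels, and class names are placeholders, and the real demo may be organized differently:

function CheckboxList() {
  const [checkboxValues, setCheckboxValues] = React.useState([false, false, true, false]);
  const numChecked = checkboxValues.filter(Boolean).length;

  // The only imperative instructions: update the single source of truth
  const toggle = (index) =>
    setCheckboxValues(checkboxValues.map((value, i) => (i === index ? !value : value)));
  const setAll = (value) => setCheckboxValues(checkboxValues.map(() => value));

  return (
    <div>
      {checkboxValues.map((checked, i) => (
        <div key={i} className={checked ? 'item-row selected' : 'item-row'}>
          <label>
            <input type="checkbox" checked={checked} onChange={() => toggle(i)} />
            Item {i + 1}
          </label>
        </div>
      ))}
      <p>{numChecked} of {checkboxValues.length} selected</p>
      <button disabled={numChecked === checkboxValues.length} onClick={() => setAll(true)}>
        Select All
      </button>
      <button disabled={numChecked === 0} onClick={() => setAll(false)}>
        Select None
      </button>
    </div>
  );
}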
JSX is definitely weird. When I first encountered it, it just felt wrong. My initial reaction was, “What the heck is this? HTML doesn’t belong in JavaScript!” I wasn’t alone. That said, it’s not HTML, but rather JavaScript dressed up to look like HTML. It is also quite powerful.
Remember that list of 20 imperative instructions above? Now we have three. For the price of defining our HTML inside our JavaScript, the rest of them come for free. React just does them for us whenever checkboxValues changes.
With this code, it is now impossible for the summary to not match the checkboxes, or for the color of a row to be wrong, or for a button to be enabled when it should be disabled. There is an entire category of bugs which are now impossible for us to have in our app: sources of truth being out of sync. Everything flows down from the single source of truth, and we developers can write less code and sleep better at night. Well, JavaScript developers can, at least…
It’s a trade-off
As web applications become more complex, maintaining the classic separation of concerns between HTML and JavaScript comes at an increasingly painful cost. HTML was originally designed for static documents, and in order to add more complex interactive functionality to those documents, imperative JavaScript has to keep track of more things and become more confusing and fragile.
The upside: predictability, reusability and composition
The ability to use a single source of truth is the most important benefit of this pattern, but the trade-off gives us other benefits, too. Defining elements of our page as JavaScript code means that we can turn chunks of it into reusable components, preventing us from copying and pasting the same HTML in multiple places. If we need to change a component, we can make that change in one place and it will update everywhere in our application (or in many applications, if we’re publishing reusable components to other teams).
We can take these simple components and compose them together like LEGO bricks, creating more complex and useful components, without making them too confusing to work with. And if we’re using components built by others, we can easily update them when they release improvements or fix bugs without having to rewrite our code.
The downside: it’s JavaScript all the way down
All of those benefits do come at a cost. There are good reasons people value keeping HTML and JavaScript separate, and to get these other benefits, we need to combine them into one. As I mentioned before, moving away from simple HTML files complicates the workflow of someone who didn’t need to worry about JavaScript before. It may mean that someone who previously could make changes to an application on their own must now learn additional complex skills to maintain that autonomy.
There can also be technical downsides. For example, some tools like linters and parsers expect regular HTML, and some third-party imperative JavaScript plugins can become harder to work with. Also, JavaScript isn’t the best-designed language; it’s just what we happen to have in our web browsers. Newer tools and features are making it better, but it still has some pitfalls you need to learn about before you can be productive with it.
Another potential problem is that when the semantic structure of a page is broken up into abstract components, it can become easy for developers to stop thinking about what actual HTML elements are being generated at the end. Specific HTML tags like <section> and <aside> have specific semantic meanings that are lost when using generic tags like <div> and <span>, even if they look the same visually on a page. This is especially important for accessibility. For example, these choices will impact how screen reader software behaves for visually impaired users. It might not be the most exciting part, but JavaScript developers should always remember that semantic HTML is the most important part of a web page.
Use it if it helps you, not because it’s “what’s hot right now”
It’s become a trend for developers to reach for frameworks on every single project. Some people are of the mindset that separating HTML and JavaScript is obsolete, but this isn’t true. For a simple static website that doesn’t need much user interaction, it’s not worth the trouble. The more enthusiastic React fans might disagree with me here, but if all your JavaScript is doing is creating a non-interactive webpage, you shouldn’t be using JavaScript. JavaScript doesn’t load as fast as regular HTML, so if you’re not getting a significant developer experience or code reliability improvement, it’s doing more harm than good.
You also don’t have to build your entire website in React! Or Vue! Or Whatever! A lot of people don’t know this because all the tutorials out there show how to use React for the whole thing. If you only have one little complex widget on an otherwise simple website, you can use React for that one component. You don’t always need to worry about webpack or Redux or Gatsby or any of the other crap people will tell you are “best practices” for your React app.
For a sufficiently complex application, declarative programming is absolutely worth the trouble. It is a game changer that has empowered developers the world over to build amazing, robust and reliable software with confidence and without having to sweat the small stuff. Is React in particular the best possible solution to these problems? No. Will it just be replaced by the next thing? Eventually. But declarative programming is not going anywhere, and the next thing will probably just do it better.
What’s this I’ve heard about CSS-in-JS?
I’m not touching that one.
40% off 0.10 Dollars
How to calculate 40% off $0.10. How to figure out percentages off a price. Using this calculator you will find that the amount after the discount is $0.06.
Using this calculator you can find the discount value and the discounted price of an item. It is helpful for answering questions like the ones worked out below.
How to work out discounts - Step by Step
Discounts are easy to calculate using the following two equations:
(a) Amount Saved = Original Price x Discount in Percent / 100
(b) Sale Price = Original Price - Amount Saved
Now, let's solve the questions stated above:
1) What is 40 percent off $0.10? Find the amount of discount.
Suppose you have a Kohls coupon of $0.10 and you want to know how much you will save for an item if the discount is 40 percent.
Solution:
Replacing the given values in formula (a) we have:
Amount Saved = Original Price x Discount in Percent / 100. So,
Amount Saved = 0.10 x 40 / 100
Amount Saved = 4 / 100
Amount Saved = $0.04 (answer).
In other words, a 40% discount for an item with an original price of $0.10 is equal to $0.04 (Amount Saved).
Note that to find the amount saved, just multiply the original price by the percentage and divide by 100.
2) How much do you pay for an item of $0.10 when discounted 40 percent (%)? What is the item's sale price?
Suppose you have a L.L. Bean coupon of $0.10 and you want to know the final or sale price if the discount is 40 percent.
Using the formula (b) and replacing the given values:
Sale Price = Original Price - Amount Saved. So,
Sale Price = 0.10 - 0.04
Sale Price = $0.06 (answer).
This means the cost of the item to you is $0.06.
You will pay $0.06 for an item with an original price of $0.10 when discounted 40%.
In this example, if you buy an item at $0.10 with 40% discount, you will pay 0.10 - 0.04 = $0.06.
3) 0.04 is what percent off $0.10?
Using formula (a) and replacing the given values:
Amount Saved = Original Price x Discount in Percent /100. So,
0.04 = 0.10 x Discount in Percent / 100
0.04 / 0.10 = Discount in Percent /100
100 x 0.04 / 0.10 = Discount in Percent
4 / 0.10 = Discount in Percent, or
Discount in Percent = 40 (answer).
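If you would rather compute this in code than by hand, the two formulas can be wrapped in a small function. This is only an illustrative JavaScript sketch, not the calculator used on this page:

function discount(originalPrice, percentOff) {
  var amountSaved = originalPrice * percentOff / 100;  // formula (a)
  var salePrice = originalPrice - amountSaved;         // formula (b)
  return { amountSaved: amountSaved, salePrice: salePrice };
}

discount(0.10, 40);  // amountSaved: 0.04, salePrice: 0.06 (allowing for floating-point rounding in the display)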
Vector Addition Calculator
Vector Addition Calculator is an online algebra tool programmed to perform vector addition. Vector addition describes the addition of physical quantities that have both a magnitude and a direction associated with them.
Vector Addition
The addition of vectors A and B is a vector C formed by placing the initial point of B on the terminal point of A and then joining the initial point of A to the terminal point of B. The sum is written as C = A + B. A vector quantity has magnitude and direction and is denoted by a small arrow over its symbol; the symbol without an arrow, A, indicates the magnitude. The length of the arrow is proportional to the magnitude of the vector. A vector can be specified either by its magnitude and direction or by its components. Vector addition obeys the commutative and associative laws:
Vector Commutative Law A + B = B + A
Vector Associative Law A + (B + C) = (A + B) + C
The formula for Vector Addition C = A + B
Negative Vector
For each vector A there exists a negative vector, denoted by -A. This vector has a direction opposite to that of A but the same magnitude.
Vector addition is used in many applications in mathematics, physics, and engineering. When you need to calculate the sum of two vectors, this Vector Addition Calculator can help you find the resulting vector.
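As an illustration of what such a calculator does internally, component-wise addition takes only a few lines. This is a generic JavaScript sketch, not the code behind this page:

// Add two vectors given as [i, j, k] component arrays
function addVectors(a, b) {
  return [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
}

addVectors([1, 2, 3], [4, 5, 6]);  // [5, 7, 9], i.e. 5i + 7j + 9k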
Similar Resource
Worksheet for Vector Addition
Math Worksheet to calculate Polynomial Addition
Worksheet for Vector Cross product Calculation
Matrix Addition Worksheet
/*
 * written by Stefan Bebbo Franke
 * (c) by BebboSoft 2000
 * all rights reserved
 *
 * WMLScript language for WMLScript Compiler
 */

// @see 6.14 8.2 WMLScript 04.11.99 page 55
ALPHA = '_';
START = CompilationUnit;

// ====================================================================
// @see 6.14 8.2 WMLScript 04.11.99 page 54ff
// all these definitions can be left out, since their code is never used
// for debugging purposes they are helpful!
Keyword = 'access' 'equiv' 'meta' 'while' 'agent' 'extern' 'name' 'url' 'break' 'for' 'path' 'continue' 'function' 'return' 'div' 'header' 'typeof' 'div=' 'http' 'use' 'domain' 'if' 'user' 'else' 'isvalid' 'var' ;
KeywordNotUsedByWMLScript = 'delete' 'null' 'in' 'this' 'lib' 'void' 'new' 'with' ;
FutureReservedWord = 'case' 'default' 'finally' 'struct' 'catch' 'do' 'import' 'super' 'class' 'enum' 'private' 'switch' 'const' 'export' 'public' 'throw' 'debugger' 'extends' 'sizeof' 'try' ;
Punctuator = '=' '>' '<' '==' '<=' '>=' '!=' ',' '!' '~' '?' ':' '.' '&&' '||' '++' '--' '+' '-' '*' '/' '&' '|' '^' '%' '<<' '>>' '>>>' '+=' '-=' '*=' '/=' '&=' '|=' '^=' '%=' '<<=' '>>=' '>>>=' '(' ')' '{' '}' ';' '#' ;

// ====================================================================
// real parser entry point
CompilationUnit = Pragma* FunctionDeclaration+ ;

// ====================================================================
// pragma stuff
Pragma = 'use' PragmaDeclaration ';' ;
PragmaDeclaration = ExternalCompilationUnitPragma | AccessControlPragma | MetaPragma ;
ExternalCompilationUnitPragma = 'url' Identifier StringLiteral ;
AccessControlPragma = 'access' AccessControlSpecifier ;
AccessControlSpecifier = 'domain' StringLiteral | 'path' StringLiteral | 'domain' StringLiteral 'path' StringLiteral ;
MetaPragma = 'meta' MetaSpecifier ;
MetaSpecifier = MetaName | MetaHttpEquiv | MetaUserAgent | ;
MetaName = 'name' MetaBody ;
MetaHttpEquiv = 'http' 'equiv' MetaBody ;
MetaUserAgent = 'user' 'agent' MetaBody ;
MetaBody = MetaPropertyName MetaContent MetaScheme? ;
MetaPropertyName = StringLiteral;
MetaContent = StringLiteral;
MetaScheme = StringLiteral;

// ====================================================================
// Function Declaration stuff
FunctionDeclaration = 'extern'? 'function' Identifier '(' FormalParameterList? ')' Block ';'? ;
FormalParameterList = Identifier (',' Identifier )* ;
Block = '{' Statement* '}' ;
Statement = Block | EmptyStatement | VariableStatement | ExpressionStatement | IfStatement | IterationStatement | ContinueStatement | BreakStatement | ReturnStatement ;
VariableStatement = 'var' VariableDeclarationList ;
VariableDeclarationList = VariableDeclaration (',' VariableDeclaration)* ;
VariableDeclaration = Identifier VariableInitializer? ;
VariableInitializer = '=' ConditionalExpression ;
EmptyStatement = ';' ;
ExpressionStatement = Expression ';' ;
IfStatement = 'if' '(' Expression ')' Statement ('else' Statement)? ;
IterationStatement = WhileStatement | ForStatement ;
WhileStatement = 'while' '(' Expression ')' Statement ;
ForStatement = 'for' '(' ( 'var' VariableDeclarationList | Expression? ) ';' Expression? ';' Expression? ')' Statement ;
ContinueStatement = 'continue' ';' ;
BreakStatement = 'break' ';' ;
ReturnStatement = 'return' Expression? ';' ;
Expression = AssignmentExpression (',' AssignmentExpression)* ;
ConditionalExpression = BinaryExpression ( '?' AssignmentExpression ':' AssignmentExpression )? ;
AssignmentExpression = Identifier AssignmentOperator AssignmentExpression | ConditionalExpression ;
AssignmentOperator = '=' | '*=' | '/=' | '%=' | '+=' | '-=' | '<<=' | '>>=' | '>>>=' | '&=' | '^=' | '|=' | 'div=' ;
BinaryExpression = UnaryExpression ( ( '*' | '/' | 'div' | '%' | '+' | '-' | '<<' | '>>' | '>>>' | '<' | '>' | '<=' | '>=' | '==' | '!=' | '&' | '^' | '|' | '&&' | '||' ) BinaryExpression )? ;
UnaryExpression = ( '++' | '--' ) Identifier | ( 'typeof' | 'isvalid' | '+' | '-' | '~' | '!' ) UnaryExpression | PostfixExpression ;
PostfixExpression = Identifier ( '++' | '--' ) | CallExpression ;
CallExpression = LocalScriptFunctionCall | ExternalScriptFunctionCall | LibraryFunctionCall | PrimaryExpression ;
LocalScriptFunctionCall = FunctionName Arguments ;
ExternalScriptFunctionCall = ExternalScriptName '#' FunctionName Arguments ;
LibraryFunctionCall = LibraryName '.' FunctionName Arguments ;
FunctionName = Identifier ;
ExternalScriptName = Identifier ;
LibraryName = Identifier ;
Arguments = '(' ArgumentList? ')' ;
ArgumentList = AssignmentExpression (',' AssignmentExpression)* ;
PrimaryExpression = Identifier | Literal | '(' Expression ')' ;
Literal = 'invalid' | 'true' | 'false' | NumericLiteral | StringLiteral ;

/*
 * $Log: wmlscript.bnf,v $
 * Revision 1.1.1.1 2005/04/30 12:05:21 bebbo
 *
 */
Definitions & Terms
Packet:
• A packet is the unit of data that is routed between an origin and a destination on the Internet or any other packet-switched network. When any file (e-mail message, HTML file, Graphics Interchange Format file, Uniform Resource Locator request, and so forth) is sent from one place to another on the Internet, the Transmission Control Protocol (TCP) layer of TCP/IP divides the file into "chunks" of an efficient size for routing. Each of these packets is separately numbered and includes the Internet address of the destination. The individual packets for a given file may travel different routes through the Internet. When they have all arrived, they are reassembled into the original file (by the TCP layer at the receiving end).
• A packet-switching scheme is an efficient way to handle transmissions on a connectionless network such as the Internet. An alternative scheme, circuit switching, is used for networks allocated for voice connections. In circuit switching, lines in the network are shared among many users as with packet switching, but each connection requires the dedication of a particular path for the duration of the connection. "Packet" and "datagram" are similar in meaning. A protocol similar to TCP, the User Datagram Protocol (UDP), uses the term datagram.
PacketHound:
• PacketHound is a product that aims to help an enterprise regulate traffic that might otherwise slow services down for all users of a local area network. PacketHound is intended to address the concern of businesses and universities about the downloading of large files, especially music files in the MP3 format, using Napster, Gnutella, and similar approaches. PacketHound can also monitor and reduce the number of streaming media files that are downloaded by network users. Both businesses and universities are concerned not only about such traffic slowing down the network for other uses, but also about the possibility of being sued by music companies for loss of intellectual property. According to Palisade Systems, the developers of PacketHound, Napster and Gnutella traffic has had a significant effect on the performance of a number of university networks.
• Although Gnutella uses a known port number that a network firewall server could screen for, the port number can be changed by a sophisticated user. The makers of PacketHound claim to be the only product that can monitor and block certain traffic based on the characteristics of the request and response flow, although they do not describe their approach in detail. PacketHound customers can also use the product to monitor without blocking and to also permit or block given traffic at different times of the day. PacketHound is not installed in a firewall server but as a separate PC with an Ethernet card and running NetBSD. The machine's presence is said to be transparent to the network. When PacketHound discerns a pattern of traffic that meets the blocking criteria, it returns a reset packet to the requesting machine. The user sees a "Connection reset by host" message."
• The company also sells PacketPup, a downloadable program that lets a company monitor bandwidth usage on a network. A similar product called PacketShaper (from Packeteer) analyzes and classifies applications in use on the network in terms of their bandwidth and other behavior.
Pagejacking:
• Pagejacking is stealing the contents of a Web site by copying some of its pages, putting them on a site that appears to be the legitimate site, and then inviting people to the illegal site by deceptive means - for example, by having the contents indexed by major search engines whose results in turn link users to the illegal site. By moving enough of a Web site's content as well as the page descriptor information (known as meta tag information) within each page, pagejackers can then submit the illegal site to major search engines for indexing. Users of the search engine sites may then receive results from both the illegitimate as well as the legitimate site and can easily be misled to link to the wrong one. Users linking to the illegitimate site may find themselves redirected to a pornographic or other unwanted site. As an additional annoyance, users subjected to pagejacking may also encounter mousetrapping, in which clicking the Back button with the mouse does not lead out of the illegal site but only to the viewing of additional unwanted pages. To escape, the user may need to close the browser or even restart the operating system.
• Web users who enter Web page addresses (known as URLs) directly on their Web browser address line, by selecting it from a bookmark, or by clicking on a properly coded link on another site will not be subject to pagejacking. The problem most typically occurs when clicking site descriptions that result from searches at major search engine sites. Although the practice was not new at the time, the New York Times on September 23, 1999, carried a page one story about an Australian company that had pagejacked a number of corporate sites, adding pornographic links or ads, and mousetrapping users. Australian officials were reported to be considering civil or criminal charges and a U.S. Federal judge in Virginia, where the original Internet site registration company is located, ordered the sites to lose their Web registrations.
Parity:
• In computers, parity (from the Latin paritas: equal or equivalent) refers to a technique of checking whether data has been lost or written over when it's moved from one place in storage to another or when transmitted between computers.
• Here's how it works: An additional binary digit, the parity bit, is added to a group of bits that are moved together. This bit is used only for the purpose of identifying whether the bits being moved arrived successfully. Before the bits are sent, the 1 bits in the group are counted; if that count is even, the parity bit is set to one so that the total number of 1 bits transmitted forms an odd number. If the count of 1 bits is already odd, the parity bit is set to 0. At the receiving end, each group of incoming bits is checked to see if it contains an odd number of 1 bits. If the total is even, a transmission error has occurred and either the transmission is retried or the system halts and an error message is sent to the user. (A small code sketch of this rule appears at the end of this entry.)
• The description above describes how parity checking works within a computer. Specifically, the Peripheral Component Interconnect bus and the I/O bus controller use the odd parity method of error checking. Parity bit checking is not an infallible error-checking method since it's possible that two bits could be in error in a transmission, offsetting each other. For transmissions within a personal computer, this possibility is considered extremely remote. In some large computer systems where data integrity is seen as extremely important, three bits are allocated for parity checking.
• Parity checking is also used in communication between modems. Here, parity checking can be selected to be even (a successful transmission will form an even number) or odd. Users may also select no parity, meaning that the modems will not transmit or check a parity bit. When no parity is selected (or defaulted), it's assumed that there are other forms of checking that will detect any errors in transmission. No parity also usually means that the parity bit can be used for data, speeding up transmission. In modem-to-modem communication, the type of parity is coordinated by the sending and receiving modems before the transmission takes place.
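• As a concrete illustration of the odd-parity rule described above, here is a short JavaScript sketch (for explanation only; real hardware does this with simple logic gates rather than software):

// Choose an odd-parity bit: the bit is set so that the total number of
// 1 bits (data bits plus the parity bit) is odd.
function oddParityBit(bits) {
  var ones = bits.filter(function (b) { return b === 1; }).length;
  return ones % 2 === 0 ? 1 : 0;
}

oddParityBit([1, 0, 1, 1, 0, 0, 0]);  // three 1 bits (already odd), so the parity bit is 0
oddParityBit([1, 0, 1, 1, 1, 0, 0]);  // four 1 bits (even), so the parity bit is 1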
Parse /Parser:
• To parse is to analyze something in an orderly way. In linguistics, to parse is to divide words and phrases into different parts in order to understand relationships and meaning. For example, English students are sometimes asked to parse a sentence by dividing it into subject and predicate, and then into dependent phrases, modifiers, and so forth. In general, to parse someone's writing or speech simply means to interpret it.
• In computers, to parse is to divide a computer language statement into parts that can be made useful for the computer. A parser in a program compiler is a program that takes each program statement that a developer has written and divides it into parts (for example, the main command, options, target objects, their attributes, and so forth) that can then be used for developing further actions or for creating the instructions that form an executable program.
• In computer technology, a parser is a program, usually part of a compiler , that receives input in the form of sequential source program instructions, interactive online commands, markup tags, or some other defined interface and breaks them up into parts (for example, the nouns (objects), verbs (methods), and their attributes or options) that can then be managed by other programming (for example, other components in a compiler). A parser may also check to see that all input has been provided that is necessary.
Partition:
• In personal computers, a partition is a logical division of a hard disk created so that you can have different operating systems on the same hard disk or to create the appearance of having separate hard drives for file management, multiple users, or other purposes. A partition is created when you format the hard disk. Typically, a one-partition hard disk is labelled the "C:" drive ("A:" and "B:" are typically reserved for diskette drives). A two-partition hard drive would typically contain "C:" and "D:" drives. (CD-ROM drives typically are assigned the last letter in whatever sequence of letters have been used as a result of hard disk formatting, or typically with a two-partition, the "E:" drive.)
• When you boot an operating system into your computer, a critical part of the process is to give control to the first sector on your hard disk. It includes a partition table that defines how many partitions the hard disk is formatted into, the size of each, and the address where each partition begins. This sector also contains a program that reads in the boot sector for the operating system and gives it control so that the rest of the operating system can be loaded into random access memory.
• Boot viruses can put the wrong information in the partition sector so that your operating system can't be located. For this reason, you should have a back-up version of your partition sector on a diskette known as a bootable floppy.
Passive FTP:
• Passive FTP (sometimes referred to as PASV FTP because it involves the FTP PASV command) is a more secure form of data transfer in which the flow of data is set up and initiated by the File Transfer Protocol (FTP) client rather than by the FTP server program. Separate FTP client programs, such as WS_FTP Pro, usually allow the user to select passive FTP. Most Web browsers (which act as FTP clients) use passive FTP by default because corporations prefer it as a safety measure. As a general rule, any corporate firewall server, which exists in order to protect an internal network from the outside world, recognizes input from the outside only in response to user requests that were sent out requesting the input. The use of passive FTP ensures all data flow initiation comes from inside the network rather than from the outside.
• How It Works:
• Using normal or passive FTP, a client begins a session by sending a request to communicate through TCP port 21, the port that is conventionally assigned for this use at the FTP server. This communication is known as the Control Channel connection.Using "normal" FTP communication, the client requestor also includes in the same PORT command packet on the Control Channel a second port number that is to be used when data is to be exchanged; the port-to-port exchange for data is known as the Data Channel. The FTP server then initiates the exchange from its own port 20 to whatever port was designated by the client. However, because the server-initiated communication is no longer controlled by the client and can't be correlated by a firewall to the initial request, the potential exists for uninvited data to arrive from anywhere posing as a normal FTP transfer.
• Using passive FTP, a PASV command is sent instead of a PORT command. Instead of specifying a port that the server can send to, the PASV command asks the server to specify a port it wishes to use for the Data Channel connection. The server replies on the Control Channel with the port number which the client then uses to initiate an exchange on the Data Channel. The server will thus always be responding to client-initiated requests on the Data Channel and the firewall can correlate these.
PDC - Primary Domain Controller:
• See Primary Domain Controller
PCM - Pulse Code Modulation:
• See Pulse Code Modulation.
Peer-to-Peer/Computing:
• 1) Peer-to-peer is a communications model in which each party has the same capabilities and either party can initiate a communication session. Other models with which it might be contrasted include the client/server model and the master/slave model. In some cases, peer-to-peer communications is implemented by giving each communication node both server and client capabilities. In recent usage, peer-to-peer has come to describe applications in which users can use the Internet to exchange files with each other directly or through a mediating server.
• IBM's Advanced Peer-to-Peer Networking (APPN) is an example of a product that supports the peer-to-peer communication model.
• 2) On the Internet, peer-to-peer (referred to as P2P) is a type of transient Internet network that allows a group of computer users with the same networking program to connect with each other and directly access files from one another's hard drives. Napster and Gnutella are examples of this kind of peer-to-peer software. Corporations are looking at the advantages of using P2P as a way for employees to share files without the expense involved in maintaining a centralized server and as a way for businesses to exchange information with each other directly.
• How Does Internet P2P Work?
• The user must first download and execute a peer-to-peer networking program. (Gnutellanet is currently one of the most popular of these decentralized P2P programs because it allows users to exchange all types of files.) After launching the program, the user enters the IP address of another computer belonging to the network. (Typically, the Web page where the user got the download will list several IP addresses as places to begin). Once the computer finds another network member on-line, it will connect to that user's connection (who has gotten their IP address from another user's connection and so on). Users can choose how many member connections to seek at one time and determine which files they wish to share or password protect.
Perl - Practical Extraction and Reporting Language:
• Perl (Practical Extraction and Reporting Language) is a script programming language that is similar in syntax to the C language and that includes a number of popular UNIX facilities such as SED, awk, and tr. Perl is an interpreted language that can optionally be compiled just before execution into either C code or cross-platform bytecode . When compiled, a Perl program is almost (but not quite) as fast as a fully precompiled C language program. Perl is regarded as a good choice for developing common gateway interface (CGI ) programs because it has good text manipulation facilities (although it also handles binary files). It was invented by Larry Wall.
• In general, Perl is easier to learn and faster to code in than the more structured C and C++ languages. Perl programs can, however, be quite sophisticated. Perl tends to have devoted adherents.
• plug-ins can be installed for some servers (Apache , for example) so that Perl is loaded permanently in memory, thus reducing compile time and resulting in faster execution of CGI Perl scripts.
PGP - Pretty Good Privacy:
• See Pretty Good Privacy.
Photonic Switching:
• See Lambda Switching.
PHP - Personal Home Page:
• In Web programming, PHP is a script language and interpreter that is freely available and used primarily on Linux Web servers. PHP, originally derived from Personal Home Page Tools, now stands for PHP: Hypertext Preprocessor, which the PHP FAQ describes as a "recursive acronym."
• PHP is an alternative to Microsoft's Active Server Page (ASP) technology. As with ASP, the PHP script is embedded within a Web page along with its HTML . Before the page is sent to a user that has requested it, the Web server calls PHP to interpret and perform the operations called for in the PHP script.
• An HTML page that includes a PHP script is typically given a file name suffix of ".php" ".php3," or ".phtml". Like ASP, PHP can be thought of as "dynamic HTML pages," since content will vary based on the results of interpreting the script. PHP is free and offered under an open source license.
Ping:
• To find out the dot address (such as 205.245.172.72) for a given domain name, Windows users can go to their MS DOS prompt screen and enter: ping xxx.yyy where xxx is the second-level domain name like "whatis" and yyy is the top-level domain name like "com").
• Ping is a basic Internet program that lets you verify that a particular IP address exists and can accept requests. The verb ping means the act of using the ping utility or command. Ping is used diagnostically to ensure that a host computer you are trying to reach is actually operating. If, for example, a user can't ping a host, then the user will be unable to use the File Transfer Protocol (FTP ) to send files to that host. Ping can also be used with a host that is operating to see how long it takes to get a response back. Using ping, you can learn the number form of the IP address from the symbolic domain name (see "Tip").
• Loosely, ping means "to get the attention of" or "to check for the presence of" another party online. Ping operates by sending a packet to a designated address and waiting for a response. The computer acronym (for Packet Internet or Inter-Network Groper) was contrived to match the submariners' term for the sound of a returned sonar pulse.
• Ping can also refer to the process of sending a message to all the members of a mailing list requesting an ACK (acknowledgement code). This is done before sending e-mail in order to confirm that all of the addresses are reachable.
PKI - Public Key Infrastructure:
• See Public Key Infrastructure.
PNAP:
Polish Notation:
• Polish notation, also known as prefix notation, is a symbolic logic invented by Polish mathematician Jan Lukasiewicz in the 1920's. When using Polish notation, the instruction (operation) precedes the data (operands). In Polish notation, the order (and only the order) of operations and operands determines the result, making parentheses unnecessary.
• The notation for the expression 3(4+5) could be expressed as
• x 3 + 4 5
• This contrasts with the traditional algebraic methodology for performing mathematical operations, the Order of Operations. (The mnemonic device for remembering the Order of Operations is "Please Excuse My Dear Aunt Sally" - parentheses, exponents, multiplication, division, addition, subtraction). In the expression 3(4+5), you would work inside the parentheses first to add four plus five and then multiply the result by three.
• In the early days of the calculator , the end-user had to write down the results of their intermediate steps when using the algebraic Order of Operations. Not only did this slow things down, it provided an opportunity for the end-user to make errors and sometimes defeated the purpose of using a calculating machine. In the 1960's, engineers at Hewlett-Packard decided that it would be easier for end-users to learn Jan Lukasiewicz' logic system than to try and use the Order of Operations on a calculator. They modified Jan Lukasiewicz's system for a calculator keyboard by placing the instructions (operators) after the data. In homage to Jan Lukasiewicz' Polish logic system, the engineers at Hewlett-Packard called their modification reverse Polish notation (RPN).
• The notation for the expression 3(4+5) would now be expressed as
• 4 5 + 3 x
• or it could be further simplified to
• 3 4 5 + x
• Reverse Polish notation provided a straightforward solution for calculator or computer software mathematics because it treats the instructions (operators) and the data (operands) as "objects" and processes them in a last-in, first-out (LIFO) basis. This is called a "stack method". (Think of a stack of plates. The last plate you put on the stack will be the first plate taken off the stack.)
• Modern calculators with memory functions are sophisticated enough to accommodate the use of the traditional algebraic Order of Operations, but users of RPN calculators like the logic's simplicity and continue to make it profitable for Hewlett-Packard to manufacture RPN calculators. Some of Hewlett Packard's latest calculators are capable of both RPN and algebraic logic.
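• A tiny evaluator makes the last-in, first-out stack method concrete. This is an illustrative JavaScript sketch (it uses * for the multiplication written as x above):

// Evaluate a reverse Polish notation string such as "3 4 5 + *"
function evalRPN(expression) {
  var stack = [];
  expression.split(' ').forEach(function (token) {
    if (token === '+' || token === '-' || token === '*' || token === '/') {
      var b = stack.pop();  // last in, first out
      var a = stack.pop();
      if (token === '+') stack.push(a + b);
      if (token === '-') stack.push(a - b);
      if (token === '*') stack.push(a * b);
      if (token === '/') stack.push(a / b);
    } else {
      stack.push(parseFloat(token));
    }
  });
  return stack.pop();
}

evalRPN('3 4 5 + *');  // 27, the same result as 3(4+5)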
POP Point-of-Presence:
• A Point-of-Presence (POP) is an access point to the Internet. A POP necessarily has a unique Internet Protocol (IP) address. Your Internet service provider (ISP ) or online service provider (such as AOL) has a point-of-presence on the Internet and probably more than one. The number of POPs that an ISP or OSP has is sometimes used as a measure of its size or growth rate.
• A POP may actually reside in rented space owned by the telecommunications carrier (such as Sprint) to which the ISP is connected. A POP usually includes routers, digital/analog call aggregators, servers, and frequently frame relays or ATM switches.
POP3 - Post Office Protocol 3:
• Post Office Protocol 3 (POP3) is the most recent version of a standard protocol for receiving e-mail. POP3 is a client/server protocol in which e-mail is received and held for you by your Internet server. Periodically, you (or your client e-mail receiver) check your mail-box on the server and download any mail. POP3 is built into the Netmanage suite of Internet products and one of the most popular e-mail products, Eudora. It's also built into the Netscape and Microsoft Internet Explorer browsers.
• An alternative protocol is Internet Message Access Protocol (IMAP ). With IMAP, you view your e-mail at the server as though it was on your client computer. An e-mail message deleted locally is still on the server. E-mail can be kept on and searched at the server.
• POP can be thought of as a "store-and-forward" service. IMAP can be thought of as a remote file server. POP and IMAP deal with the receiving of e-mail and are not to be confused with the Simple Mail Transfer Protocol (SMTP ), a protocol for transferring e-mail across the Internet. You send e-mail with SMTP and a mail handler receives it on your recipient's behalf. Then the mail is read using POP or IMAP. The conventional port number for POP3 is 110
Port 80:
• On a Web server or Hypertext Transfer Protocol daemon, port 80 is the port on which the server "listens to" or expects to receive requests from a Web client, assuming that the default was taken when the server was configured or set up. A port can be specified in the range from 0-65535 on the NCSA server. However, the server administrator configures the server so that only one port number can be recognized. By default, the port number for a Web server is 80. Experimental services may sometimes be run at port 8080.
Port 110:
• See POP3
Port Forwarding:
• Port forwarding, or tunneling, is a way to forward insecure TCP (not UDP) traffic through SSH Secure Shell. For example, you can secure POP3, SMTP, and HTTP connections that would otherwise be insecure. There are two kinds of port forwarding: local and remote forwarding. They are also called outgoing and incoming tunnels, respectively.
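• As an illustration, with the OpenSSH command-line client an outgoing (local) tunnel is typically created with the -L option; the host names below are only placeholders:

ssh -N -L 1100:mail.example.com:110 user@gateway.example.com

• This forwards connections made to port 1100 on the local machine through the encrypted SSH session to port 110 (the POP3 port) on mail.example.com. The -N option simply tells ssh not to run a remote command.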
Port Mirroring:
• Port mirroring, also known as a roving analysis port, is a method of monitoring network traffic that forwards a copy of each incoming and outgoing packet from one port of a network switch to another port where the packet can be studied. A network administrator uses port mirroring as a diagnostic tool or debugging feature, especially when fending off an attack. It enables the administrator to keep close track of switch performance and alter it if necessary. Port mirroring can be managed locally or remotely.
• An administrator configures port mirroring by assigning a port from which to copy all packets and another port where those packets will be sent. A packet bound for or heading away from the first port will be forwarded onto the second port as well. The administrator places a protocol analyzer on the port receiving the mirrored data to monitor each segment separately. The analyzer captures and evaluates the data without affecting the client on the original port.
• The monitor port may be a port on the same SwitchModule with an attached RMON probe, a port on a different SwitchModule in the same hub, or the SwitchModule processor. Port mirroring can consume significant CPU resources while active. Better choices for long-term monitoring may include a passive tap like an optical probe or an Ethernet repeater.
Port Number:
• A Port Number is a way to identify a specific process to which an Internet or other network message is to be forwarded when it arrives at a server. For the Transmission Control Protocol and the User Datagram Protocol, a port number is a 16-bit integer that is put in the header appended to a message unit. This port number is passed logically between client and server transport layers and physically between the transport layer and the Internet Protocol layer and forwarded on.
• For example, a request from a client (perhaps on behalf of you at your PC) to a server on the Internet may request a file be served from that host's File Transfer Protocol (FTP) server or process. In order to pass your request to the FTP process in the remote server, the Transmission Control Protocol (TCP ) software layer in your computer identifies the port number of 21 (which by convention is associated with an FTP request) in the 16-bit port number integer that is appended to your request. At the server, the TCP layer will read the port number of 21 and forward your request to the FTP program at the server.
• Some services or processes have conventionally assigned permanent port numbers. These are known as well-known port numbers . In other cases, a port number is assigned temporarily (for the duration of the request and its completion) from a range of assigned port numbers. This is called an ephemeral port number.
POSIX - Portable Operating System Interface:
• POSIX (Portable Operating System Interface) is a set of standard operating system interfaces based on the UNIX operating system. The need for standardization arose because enterprises using computers wanted to be able to develop programs that could be moved among different manufacturer's computer systems without having to be recoded. UNIX was selected as the basis for a standard system interface partly because it was "manufacturer-neutral." However, several major versions of UNIX existed so there was a need to develop a common denominator system.
• Informally, each standard in the POSIX set is defined by a decimal following the POSIX. Thus, POSIX.1 is the standard for an application program interface in the C language. POSIX.2 is the standard shell and utility interface (that is to say, the user's command interface with the operating system). These are the main two interfaces, but additional interfaces, such as POSIX.4 for thread management, have been developed or are being developed. The POSIX interfaces were developed under the auspices of the Institute of Electrical and Electronics Engineers (IEEE).
• POSIX.1 and POSIX.2 interfaces are included in a somewhat larger interface known as the X/Open Programming Guide 4.2 (also known as the "Single UNIX Specification" and "UNIX 95"). The Open Group , an industry standards group, owns the UNIX trademark and can thus "brand" operating systems that conform to the interface as "UNIX" systems. IBM's OS/390 is an example of an operating system that includes a branded UNIX interface.
POTS - Plain Old Telephone Service:
• POTS is a term sometimes used in discussions of new telephone technologies to refer to the question of whether and how existing voice transmission for ordinary phone communication can be accommodated. For example, Asymmetric Digital Subscriber Line and Integrated Services Digital Network connections provide some part of their channels for "plain old telephone service" while providing most of their bandwidth for digital data transmission.
PPP - Point-to-Point Protocol:
• Point-to-Point Protocol (PPP) is a protocol for communication between two computers using a serial interface, typically a personal computer connected by phone line to a server. For example, your Internet service provider may provide you with a PPP connection so that the provider's server can respond to your requests, pass them on to the Internet, and forward your requested Internet responses back to you. PPP uses the Internet Protocol (IP) and is designed to handle others. It is sometimes considered a member of the TCP/IP suite of protocols. Relative to the Open Systems Interconnection (OSI) reference model, PPP provides layer 2 (data-link layer) service. Essentially, it packages your computer's TCP/IP packets and forwards them to the server where they can actually be put on the Internet.
• PPP is a full-duplex protocol that can be used on various physical media, including twisted pair or fiber optic lines or satellite transmission. It uses a variation of High Speed Data Link Control (HDLC) for packet encapsulation. PPP is usually preferred over the earlier de facto standard Serial Line Internet Protocol (SLIP) because it can handle synchronous as well as asynchronous communication. PPP can share a line with other users and it has error detection that SLIP lacks. Where a choice is possible, PPP is preferred.
PPTP - Point-to-Point Tunneling Protocol:
• Point-to-Point Tunneling Protocol (PPTP) is a protocol (set of communication rules) that allows corporations to extend their own corporate network through private "tunnels" over the public Internet. Effectively, a corporation uses a wide-area network as a single large local area network. A company no longer needs to lease its own lines for wide-area communication but can securely use the public networks. This kind of interconnection is known as a virtual private network (VPN).
• PPTP, a proposed standard sponsored by Microsoft and other companies, and Layer 2 Tunneling Protocol, proposed by Cisco Systems, are among the most likely proposals as the basis for a new Internet Engineering Task Force (IETF) standard. With PPTP, which is an extension of the Internet's Point-to-Point Protocol (PPP), any user of a PC with PPP client support is able to use an Internet service provider (ISP) to connect securely to a server elsewhere in the user's company. Also see VPN.
Pretty Good Privacy - PGP:
• Pretty Good Privacy (PGP) is a popular program used to encrypt and decrypt e-mail over the Internet. It can also be used to send an encrypted digital signature that lets the receiver verify the sender's identity and know that the message was not changed en route. Available both as freeware and in a low-cost commercial version, PGP is the most widely used privacy-ensuring program by individuals and is also used by many corporations. Developed by Philip R. Zimmermann in 1991, PGP has become a de facto standard for e-mail security. PGP can also be used to encrypt files being stored so that they are unreadable by other users or intruders.
• How It Works
• PGP uses a variation of the public key system. In this system, each user has a publicly known encryption key and a private key known only to that user. You encrypt a message you send to someone else using their public key. When they receive it, they decrypt it using their private key. Since encrypting an entire message can be time-consuming, PGP uses a faster encryption algorithm to encrypt the message and then uses the public key to encrypt the shorter key that was used to encrypt the entire message. Both the encrypted message and the short key are sent to the receiver who first uses the receiver's private key to decrypt the short key and then uses that key to decrypt the message.
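• A minimal sketch of that hybrid scheme is shown below in Python using the third-party cryptography package, version 3.1 or later (an assumption; PGP itself uses its own formats and algorithms such as IDEA or CAST). A fast symmetric session key encrypts the message, and the recipient's RSA public key encrypts only the short session key.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# recipient's key pair (in PGP this would come from the public-key directory)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# fast symmetric key encrypts the (possibly long) message ...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual message body")

# ... and the slower public-key algorithm encrypts only the short session key
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
encrypted_key = public_key.encrypt(session_key, oaep)

# the receiver reverses the two steps with the private key
recovered_key = private_key.decrypt(encrypted_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"the actual message body"
```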
• PGP comes in two public key versions - Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version, for which PGP must pay a license fee to RSA, uses the IDEA algorithm to generate a short key for the entire message and RSA to encrypt the short key. The Diffie-Hellman version uses the CAST algorithm for the short key to encrypt the message and the Diffie-Hellman algorithm to encrypt the short key.
• For sending digital signatures, PGP uses an efficient algorithm that generates a hash (or mathematical summary) from the user's name and other signature information. This hash code is then encrypted with the sender's private key. The receiver uses the sender's public key to decrypt the hash code. If it matches the hash code sent as the digital signature for the message, then the receiver is sure that the message has arrived securely from the stated sender. PGP's RSA version uses the MD5 algorithm to generate the hash code. PGP's Diffie-Hellman version uses the SHA-1 algorithm to generate the hash code.
• To use PGP, you download or purchase it and install it on your computer system. Typically, it contains a user interface that works with your customary e-mail program. You also need to register the public key that your PGP program gives you with a PGP public-key server so that people you exchange messages with will be able to find your public key. Network Associates maintains an LDAP/HTTP public key server that has 300,000 registered public keys. This server has mirror sites around the world.
• Where Can You Use PGP?
• Originally, the U.S. government restricted the exportation of PGP technology. Today, however, PGP encrypted e-mail can be exchanged with users outside the U.S if you have the correct versions of PGP at both ends. Unlike most other encryption products, the international version is just as secure as the domestic version.
• The freely available PGP cannot legally be used for commercial purposes - for that, one must obtain the commercial version from Network Associates (formerly PGP, Inc.). There are several versions of PGP in use. Add-ons can be purchased that allow backwards compatibility for newer RSA versions with older versions. However, the Diffie-Hellman and RSA versions of PGP do not work with each other since they use different algorithms.
Pre-Boot Execution Environment:
• The Preboot Execution Environment (PXE) is an industry standard client/server interface that allows networked computers that are not yet loaded with an operating system to be configured and booted remotely by an administrator. The PXE code is typically delivered with a new computer on a read-only memory chip or boot disk that allows the computer (a client) to communicate with the network server so that the client machine can be remotely configured and its operating system can be remotely booted. PXE provides three things:
• 1) The Dynamic Host Configuration Protocol (DHCP), which allows the client to receive an IP address to gain access to the network servers.
• 2) A set of application program interfaces (API) that are used by the client's Basic Input/Output Operating System (BIOS ) or a downloaded Network Bootstrap Program (NBP) that automates the booting of the operating system and other configuration steps.
• 3) A standard method of initializing the PXE code in the PXE ROM chip or boot disk.
• The PXE process consists of the client notifying the server that it uses PXE. If the server uses PXE, it sends the client a list of boot servers that contain the operating systems available. The client finds the boot server it needs and receives the name of the file to download. The client then downloads the file using the Trivial File Transfer Protocol (TFTP) and executes it, which loads the operating system. If a client is equipped with PXE and the server is not, the server ignores the PXE code, preventing disruption in the DHCP and Bootstrap Protocol (BP) operations.
• The advantages of using PXE include:
• * The client machine does not necessarily need an operating system or even a hard disk.
• * The client machine can be rebooted in the event of hardware or software failure. This allows the administrator to diagnose and perhaps fix the problem.
• * Since PXE is vendor-independent, new types of computers can easily be added to the network.
Primary Domain Controller - PDC:
• Primary Domain Controller (PDC) and backup domain controller (BDC) are roles that can be assigned to a server in a network of computers that use the Windows NT operating system. Windows NT uses the idea of a domain to manage access to a set of network resources (applications, printers, and so forth) for a group of users. The user need only to log in to the domain to gain access to the resources, which may be located on a number of different servers in the network. One server, known as the primary domain controller, manages the master user database for the domain. One or more other servers are designated as backup domain controllers. The primary domain controller periodically sends copies of the database to the backup domain controllers. A backup domain controller can step in as primary domain controller if the PDC server fails and can also help balance the workload if the network is busy enough.
• In Windows NT, a domain combines some of the advantages of a workgroup (a group of users who exchange access to each others' resources on different computers) and a directory (a group of users who are managed centrally by an administrator). The domain concept not only allows a user to have access to resources that may be on different servers, but it also allows one domain to be given access to another domain in a trust relationship. In this arrangement, the user need only log in to the first domain to also have access to the second domain's resources as well.
• In a Windows NT network, not all servers need to be a PDC or BDC. A server can be designated as a member server whose resources become part of a domain without having a role in the logon process. Setting up and maintaining PDCs and BDCs and domain information is a major activity for the administrator of a Windows NT network. In Windows 2000, the domain controller concept is retained but the PDC and BDC server roles are generally replaced by the Active Directory.
Primary Rate Interface:
• In the Integrated Services Digital Network (ISDN), there are two levels of service: the Basic Rate Interface (BRI), intended for the home and small enterprise, and the Primary Rate Interface (PRI), for larger users. Both rates include a number of B-channels and a D-channel. Each B-channel carries data, voice, and other services. The D-channel carries control and signaling information.
• The Basic Rate Interface consists of two 64 Kbps B-channels and one 16 Kbps D-channel. Thus, a Basic Rate Interface user can have up to 128 Kbps service. The Primary Rate Interface consists of 23 B-channels and one 64 Kbps D-channel using a T-1 line, or 30 B-channels and 1 D-channel using an E1 line. Thus, a Primary Rate Interface user on a T-1 line can have up to 1.544 Mbps service, or up to 2.048 Mbps service on an E1 line. PRI uses the Q.931 protocol over the D-channel.
• The Primary Rate Interface channels are carried on a T-carrier system line (in the U.S., Canada, and Japan) or an E-carrier line (in other countries) and are typically used by medium to large enterprises. The 23 (or 30) B-channels can be used flexibly and reassigned when necessary to meet special needs such as videoconferences. The Primary Rate user is hooked up directly to the telephone company central office. For more information, see ISDN.
Proxy Server:
• In an enterprise that uses the Internet, a proxy server is a server that acts as an intermediary between a workstation user and the Internet so that the enterprise can ensure security, administrative control, and caching service. A proxy server is associated with or part of a gateway server that separates the enterprise network from the outside network and a firewall server that protects the enterprise network from outside intrusion.
• A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user.
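• The toy Python function below sketches just the "check the cache, otherwise fetch on the user's behalf" logic described above; it is an in-memory illustration of the idea, not a usable proxy server, and the URL is a placeholder.

```python
from urllib.request import urlopen

cache = {}  # url -> page bytes previously fetched

def proxy_fetch(url):
    if url in cache:                   # cache hit: no request leaves the network
        return cache[url]
    with urlopen(url) as response:     # cache miss: act as a client on the user's behalf
        page = response.read()
    cache[url] = page                  # store it so later users get the cached copy
    return page

# the first call goes to the origin server, the second is answered from the cache
page1 = proxy_fetch("http://example.com/")
page2 = proxy_fetch("http://example.com/")
assert page1 == page2
```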
• To the user, the proxy server is invisible; all Internet requests and returned responses appear to be directly with the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.)
• An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy's cache, which will improve user response time. In fact, there are special servers called cache servers. A proxy can also do logging.
• The functions of proxy, firewall, and caching can be in separate server programs or combined in a single package. Different server programs can be in different computers. For example, a proxy server may be in the same machine with a firewall server, or it may be on a separate server and forward requests through the firewall.
Private Key:
• In cryptography, a private or secret key is an encryption/decryption key known only to the party or parties that exchange secret messages. In traditional secret key cryptography, a key would be shared by the communicators so that each could encrypt and decrypt messages. The risk in this system is that if either party loses the key or it is stolen, the system is broken. A more recent alternative is to use a combination of public and private keys. In this system, a public key is used together with a private key. See public key infrastructure (PKI) for more information.
Private Port Numbers:
• See Dynamic Port Numbers
Pseudocode:
• Pseudocode (pronounced SOO-doh-kohd) is a detailed yet readable description of what a computer program or algorithm must do, expressed in a formally-styled natural language rather than in a programming language. Pseudocode is sometimes used as a detailed step in the process of developing a program. It allows designers or lead programmers to express the design in great detail and provides programmers a detailed template for the next step of writing code in a specific programming language.
• Because pseudocode is detailed yet readable, it can be inspected by the team of designers and programmers as a way to ensure that actual programming is likely to match design specifications. Catching errors at the pseudocode stage is less costly than catching them later in the development process. Once the pseudocode is accepted, it is rewritten using the vocabulary and syntax of a programming language. Pseudocode is sometimes used in conjunction with computer-aided software engineering-based methodologies. It is possible to write programs that will convert a given pseudocode language into a given programming language.
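• For illustration, here is an invented fragment of pseudocode and one possible translation into Python; the payroll example is hypothetical and not taken from any particular methodology.

```python
# Pseudocode:
#   for each employee in the payroll file
#       if hours worked > 40 then
#           pay = 40 * rate + (hours - 40) * rate * 1.5
#       else
#           pay = hours * rate
#       print employee name and pay

def weekly_pay(hours, rate):
    if hours > 40:
        return 40 * rate + (hours - 40) * rate * 1.5
    return hours * rate

payroll = [("Ada", 38, 20.0), ("Grace", 45, 22.0)]  # (name, hours, rate)
for name, hours, rate in payroll:
    print(name, weekly_pay(hours, rate))
```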
Public Domain Software:
• Programs that are uncopyrighted because their authors intended to share them with everyone else are in the public domain. The UNIX community has developed a number of such programs over the years. Programs in the public domain can be used without restriction as components of other programs. When reusing such code, it is good to understand its history so that you can be sure it really is in the public domain. Also see shareware, which is programming that is "free" but more or less on a trial basis, and freeware.
Public Key Infrastructure - PKI:
• A Public Key Infrastructure (PKI) enables users of a basically unsecure public network such as the Internet to securely and privately exchange data and money through the use of a public and a private cryptographic key pair that is obtained and shared through a trusted authority. The public key infrastructure provides for a digital certificate that can identify an individual or an organization and directory services that can store and, when necessary, revoke the certificates. Although the components of a PKI are generally understood, a number of different vendor approaches and services are emerging. Meanwhile, an Internet standard for PKI is being worked on.
• The public key infrastructure assumes the use of public key cryptography, which is the most common method on the Internet for authenticating a message sender or encrypting a message. Traditional cryptography has usually involved the creation and sharing of a secret key for the encryption and decryption of messages. This secret or private key system has the significant flaw that if the key is discovered or intercepted by someone else, messages can easily be decrypted. For this reason, public key cryptography and the public key infrastructure is the preferred approach on the Internet. (The private key system is sometimes known as symmetric cryptography and the public key system as asymmetric cryptography.)
• A public key infrastructure consists of:
• * A certificate authority (CA) that issues and verifies digital certificates. A certificate includes the public key or information about the public key
• * A registration authority (RA) that acts as the verifier for the certificate authority before a digital certificate is issued to a requestor
• * One or more directories where the certificates (with their public keys) are held
• * A certificate management system
• How Public and Private Key Cryptography Works
• In public key cryptography, a public and private key are created simultaneously using the same algorithm (a popular one is known as RSA ) by a certificate authority (CA). The private key is given only to the requesting party and the public key is made publicly available (as part of a digital certificate) in a directory that all parties can access. The private key is never shared with anyone or sent across the Internet. You use the private key to decrypt text that has been encrypted with your public key by someone else (who can find out what your public key is from a public directory). Thus, if I send you a message, I can find out your public key (but not your private key) from a central administrator and encrypt a message to you using your public key. When you receive it, you decrypt it with your private key. In addition to encrypting messages (which ensures privacy), you can authenticate yourself to me (so I know that it is really you who sent the message) by using your private key to encrypt a digital certificate. When I receive it, I can use your public key to decrypt it. Here's a table that restates it:
• To do this | Use whose key | Kind of key
• Send an encrypted message | the receiver's | Public key
• Send an encrypted signature | the sender's | Private key
• Decrypt an encrypted message | the receiver's | Private key
• Decrypt an encrypted signature (and authenticate the sender) | the sender's | Public key
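• The signature rows of the table can be sketched in a few lines of Python using the third-party cryptography package (an assumption; real PKI deployments use certificates issued by a CA rather than a key pair generated on the spot). The sender signs with the private key, and anyone holding the public key can verify.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

message = b"wire 100 USD to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = sender_private.sign(message, pss, hashes.SHA256())

try:
    sender_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message really came from the key holder")
except InvalidSignature:
    print("signature invalid: message altered or signed with a different key")
```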
• Who Provides the Infrastructure
• A number of products are offered that enable a company or group of companies to implement a PKI. The acceleration of e-commerce and business-to-business commerce over the Internet has increased the demand for PKI solutions. Related ideas are the virtual private network (VPN) and the IP Security (IPsec) standard. Among PKI leaders are:
• * RSA, which has developed the main algorithms used by PKI vendors
• * Verisign, which acts as a certificate authority and sells software that allows a company to create its own certificate authorities
• * GTE CyberTrust, which provides a PKI implementation methodology and consultation service that it plans to vend to other companies for a fixed price
• * Xcert, whose Web Sentry product that checks the revocation status of certificates on a server, using the Online Certificate Status Protocol (OCSP)
• * Netscape, whose Directory Server product is said to support 50 million objects and process 5,000 queries a second; Secure E-Commerce, which allows a company or extranet manager to manage digital certificates; and Meta-Directory, which can connect all corporate directories into a single directory for security management
• For e-mail, the Pretty Good Privacy (PGP ) product lets you encrypt a message to anyone who has a public key. You encrypt it with their public key and they then decrypt it with their private key. PGP users share a directory of public keys that is called a key ring . (If you are sending a message to someone that doesn't have access to the key ring, you can't send them an encrypted message.) As another option, PGP lets you "sign" your note with a digital signature using your private key. The recipient can then get your public key (if they get access to the key ring) and decrypt your signature to see whether it was really you who sent the message.
Python:
• Python is an interpreted, object-oriented programming language similar to Perl, that has gained popularity because of its clear syntax and readability. Python is said to be relatively easy to learn and portable, meaning its statements can be interpreted in a number of operating systems, including UNIX-based systems, Mac OS, MS-DOS, OS/2, and various versions of Microsoft Windows 98 . Python was created by Guido van Rossum, a former resident of the Netherlands, whose favorite comedy group at the time was Monty Python's Flying Circus. The source code is freely available and open for modification and reuse. Python has a significant number of users.
• A notable feature of Python is its indenting of source statements to make the code easier to read. Python offers dynamic data types, ready-made classes, and interfaces to many system calls and libraries. It can be extended, using the C or C++ language. Python can be used as the script in Microsoft's Active Server Page (ASP) technology. The scoreboard system for the Melbourne (Australia) Cricket Ground is written in Python. Z Object Publishing Environment, a popular Web application server, is also written in the Python language.
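• A few lines are enough to show the features mentioned above: indentation defines blocks, types are dynamic, and ready-made classes ship in the standard library. The example data is invented.

```python
from collections import Counter

def describe(items):
    counts = Counter(items)          # ready-made class from the standard library
    for value, n in counts.items():  # the indented block belongs to the for-loop
        print(value, "appears", n, "time(s)")

describe(["spam", "egg", "spam", 7, 7, 7])  # a list may mix types freely
```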
Two-Tier Architecture
What Does Two-Tier Architecture Mean?
A two-tier architecture is a software architecture in which a presentation layer or interface runs on a client, and a data layer or data structure gets stored on a server. Separating these two components into different locations represents a two-tier architecture, as opposed to a single-tier architecture. Other kinds of multi-tier architectures add additional layers in distributed software design.
Techopedia Explains Two-Tier Architecture
Experts often contrast a two-tier architecture to a three-tier architecture, where a third application or business layer is added that acts as an intermediary between the client or presentation layer and the data layer. This can increase the performance of the system and help with scalability. It can also eliminate much of the confusion that can be caused by multi-user access in two-tier architectures. However, the added complexity of a three-tier architecture may mean more cost and effort.
An additional note on two-tier architecture is that the word "tier" commonly refers to splitting the two software layers onto two different physical pieces of hardware. Multi-layer programs can be built on one tier, but because of operational preferences, many two-tier architectures use a computer for the first tier and a server for the second tier.
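As a rough sketch of the split described above, the short Python program below keeps the presentation logic on the client and talks directly to the data layer; SQLite stands in for the database server, which is an assumption made purely for illustration.

```python
import sqlite3

def data_layer():
    # tier 2: the data store (a real two-tier app would connect to a DB server over the network)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE parts (name TEXT, qty INTEGER)")
    conn.execute("INSERT INTO parts VALUES ('bolt', 40)")
    conn.commit()
    return conn

def presentation_layer(conn):
    # tier 1: what the user sees, talking straight to the data layer
    for name, qty in conn.execute("SELECT name, qty FROM parts"):
        print(f"{name}: {qty} in stock")

presentation_layer(data_layer())
```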
Adobe After Effects CS6: Increase the Number of Undo Steps
When you work on a large composition, you may need to use the undo function many times to get the perfect effect. Although Adobe After Effects lets you recover up to 32 steps by default, there may be times you will need more than that. Fortunately, it is possible to configure the program to remember even more of your previous actions.
This tutorial applies to Adobe AE CS6. The steps for other versions may vary. Remembering more undo steps will increase the application’s memory usage.
1. Launch Adobe After Effects CS6.
2. Click the Edit button on the toolbar to expand the menu.
3. In the menu, move your mouse over the Preferences option to expand its sub-menu to the right side, and then choose General. Alternatively, you can press Ctrl + Alt + ; if you prefer a shortcut. This action will open the Preferences/General window just like in other Adobe products.
4. In the Preferences window, look on the right side, and find the Levels of Undo option. Click on the number to change it to the desired amount. You can also just move your mouse over it, hold down the left mouse button, and slide it to the right to increase the number. You can set the program to remember up to 99 recent changes you have made to a project.
5. When you are done, click OK to accept the changes, and dismiss the window.
Lê Hoàng
When not creating exciting new Android games, Lê Hoàng is here crafting tutorials, tweaks, and fixes for your enjoyment.
Inverse proportion..
momoko
06-22-2009, 07:01 AM
Two quantities, x and y, are in inverse proportion. It is known that y = 120 for a particular value of x.
Find the value of y when the value of x is increased by 200%.
The steps I have done thus far:
k = xy, where k is a constant.
When y = 120,
k = 120x
x = k/120
How do I continue? Or is it a totally wrong method? Thanks in advance :)
Subhotosh Khan
06-22-2009, 07:13 AM
Two quantities, x and y, are in inverse proportion. It is known that y = 120 for a particular value of x.
Find the value of y when the value of x is increased by 200%.
The steps I have done thus far:
k = xy, where k is a constant.
When y = 120,
k = 120x
x = k/120
How do I continue? Or is it a totally wrong method? Thanks in advance :)
Inverse proportion means
x is proportional to 1/y. Then,
x = k * (1/y) ............ where k is constant
x*y = k
x_1 * y_1 = k
x_1 * 120 = k .........................(1)
x_2 * y_2 = k ..............(2)
the value of x is increased by 200%
My interpretation of that statement is
x_2 = 3x_1 .....................(3)
there is a possibility that the questioner may have meant to imply x_2 = 2x_1 .... but in that case the statement should have been the value of x is increased to 200% of the original value
using (3) in (2)
3*x_1 * y_2 = k ...............(4)
using (1) in (4)
3*x_1 * y_2 = x_1 * 120
y_2 = 40
Typescript variable types
web
mobile
Author: Dawid Adach
Type: any
You could ask — what if I don't know what kind of value my variable will have?
1. Let's clear our previous code and type in the following:
2. let myVar;
myVar = 1; //number
myVar = true; //boolean
myVar = 'mdb'; //string
The TypeScript compiler doesn't complain. Why!?
3. Hover over the myVar declaration, what do you see?
4. Any type
TypeScript allows you to use a special type any. It allows you to use dynamic typing but importantly — it requires you to use it consciously.
5. What is the actual type of the variable myVar? Let's check it:
6. myVar = 1; //number
console.log(typeof(myVar));
myVar = true; //boolean
console.log(typeof(myVar));
myVar = 'mdb'; //string
console.log(typeof(myVar));
The output:
Dynamic type
7. Now let's add an explicit type annotation after the variable name:
8. let myVar: number;
Immediately, the compiler will show two errors regarding the second and third assignments.
Error
Other types:
We have already used four different types of variables (number, string, boolean and any). Are there any others? Yes, there are.
let a: number; //numeric type
let b: string; //string
let c: boolean; //true or false
let d: any; //any (dynamic)
let e: number[] = [1,2,3]; //array of numbers
let f: string[] = ['a','b','c']; //array of strings
let g: any[] = [true, 1, 'a']; //array of any
As you can see, besides the basic types we already covered, you can also create arrays of different types, including the any type. There is also one more type of variable, enum, but we will cover it in the future.
About the author
Dawid Adach
For more than 5 years Dawid worked as an IT Consultant specializing in SOA/EAI/ESB in the banking domain. He gained experience working in countries like Netherlands, Belgium, Poland and India developing enterprise-class systems for the most prestigious companies. Since co-founding mdbootstrap.com & brandflow.net in 2016 he has been using and teaching technologies such as Angular, TypeScript, PHP, AJAX, Mongo, SQL, Hadoop Stack, Virtualization, Automation and many others...
JavaScript
startmvc, an ultra-lightweight PHP framework
WeChat mini program: swiper that shows half of the previous and next slides on each side (code example)
Updated: 2020-09-25 19:00:01 Author: startmvc
This article walks through a code example for a WeChat mini program swiper that shows half of the adjacent slides on the left and right. The sample code is explained in detail and should be a useful reference for study or work; readers who need it can refer to it.
The goal is to implement the following effect:
[Animated GIF demonstrating the swiper effect]
The code is as follows:
index.wxml
<swiper class='swiper' next-margin='120rpx' bindchange='swiper'>
<block wx:for="{{devices}}" wx:key="" wx:for-item="device">
<swiper-item item-id="{{device.type}}">
<view class="device {{swiperIndex == index ?'active':''}}" data-currentTab="{{index}}">
<view class='device-img'>
<image src='{{device.deviceImg}}'></image>
</view>
</view>
</swiper-item>
</block>
</swiper>
index.js
Page({
data: {
swiperIndex: 0,
// data used by the page
devices: [
{
deviceImg: "../../img/1.jpg"
},
{
deviceImg: "../../img/2.jpg"
},
{
deviceImg: "../../img/3.jpg"
}
],
},
swiper: function (e) {
let that = this;
let index = e.detail.current;
that.setData({
swiperIndex: index
});
}
})
index.wxss
/* carousel (swiper) */
.devices{
width: 638rpx;
}
.swiper{
width: 580rpx;
height: 600rpx;
margin-left:48rpx;
white-space:nowrap;
/* border: solid red 1px; */
}
.device{
width: 410rpx;
height: 94%;
display:inline-block;
margin-left: 20rpx;
background: rosybrown;
transform: scale(0.9);
transition:all .2s ease 0s;
}
.active{
transform: none;
transition: all 0.2s ease-in 0s;
}
.device-img{
width: 100%;
height: 520rpx;
margin-top: 40rpx;
display: flex;
justify-content: center;
align-items: center;
}
.device-img image{
width:350rpx;
height:190rpx;
}
index.json
{
"usingComponents": {}
}
That is all for this article. I hope it helps with your study, and thank you for supporting 脚本之家.
Tags: WeChat mini program, swiper, left/right peek, display
Push Notification Opt in
Hey Community, is there a way to allow for users to opt in to receiving push notifications in the app? And then to only send to users who opted-in? If so, is there a way to do this without requiring them to log in with firebase authentication? Thank you for any ideas and suggestions you might have.
Hey @hhuman - since this component will work even if the app is not open, there's no block (or combination of blocks) that can handle this programmatically.
How many users do you have ?
To begin, it’s probably just easier to handle opt-out requests manually.
Deploy Minio on Kubernetes using Kubespray and Ansible
Updated by Linode Written by Sam Foo
What is Minio?
Minio is an open source, S3 compatible object store that can be hosted on a Linode. Deployment on a Kubernetes cluster is supported in both standalone and distributed modes. This guide uses Kubespray to deploy a Kubernetes cluster on three servers running Ubuntu 16.04. Kubespray comes packaged with Ansible playbooks that simplify setup on the cluster. Minio is then installed in standalone mode on the cluster to demonstrate how to create a service.
Before You Begin
1. For demonstration purposes, this guide installs etcd and the Kubernetes master on the same node. High availability clusters will require a different configuration, which is beyond the scope of this guide.
2. Each Linode to be used in the cluster should have a user with sudo privileges.
3. A cluster can be simulated locally using Minikube to get comfortable with Kubernetes clusters.
4. The IP addresses of each node in the cluster and their roles will be represented as kubernetes-master-ip, etcd-ip, and slave-ip
Note
If you do not want to install Ansible and other software locally, consider using another Linode as a jumpbox that will be used to connect with the master node.
Install Ansible
1. Update if needed.
sudo apt-get update
sudo apt-get install software-properties-common
2. Add the Ansible PPA; press enter when prompted.
sudo apt-add-repository ppa:ansible/ansible
Ansible is a simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
http://ansible.com/
More info: https://launchpad.net/~ansible/+archive/ubuntu/ansible
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmp81pkp_0b/secring.gpg' created
gpg: keyring `/tmp/tmp81pkp_0b/pubring.gpg' created
gpg: requesting key 7BB9C367 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp81pkp_0b/trustdb.gpg: trustdb created
gpg: key 7BB9C367: public key "Launchpad PPA for Ansible, Inc." imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
3. Update again then install Ansible.
sudo apt-get update
sudo apt-get install ansible
Additional Installation
Kubespray exists as a Git repository and requires python-netaddr for network address manipulation.
1. Install Git:
sudo apt install git
2. Install python-netaddr:
sudo apt install python-netaddr
Modify Kubespray Configurations
Kubespray comes with several configuration options not shown in this guide. Refer to the documentation for more information on topics such as networking with Flannel, Helm installation, and large scale deployments.
1. Clone the Kubespray repository from Github then navigate into the repository.
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
2. Check out a tag for the desired version of Kubespray. This guide is written for version 2.4.0.
git checkout -b tag/v.2.4.0
3. Modify ~/kubespray/ansible.cfg to run Ansible playbooks on hosts as a given user. Replace username with your Unix account username in remote_user=username under [defaults].
~/kubespray/ansible.cfg
[ssh_connection]
pipelining=True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null
#control_path = ~/.ssh/ansible-%%r@%%h:%%p
[defaults]
host_key_checking=False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp
stdout_callback = skippy
library = ./library
callback_whitelist = profile_tasks
roles_path = roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles
deprecation_warnings=False
remote_user=username
4. Copy the example inventory directory and rename it:
cp -r inventory/sample inventory/minio
5. Use Kubespray’s inventory generator to build an inventory of hosts for Ansible. Declare the list of IP addresses for each Linode.
declare -a IPS=(kubernetes-master-ip etcd-ip slave-ip)
CONFIG_FILE=inventory/minio/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Note
Do not use hostnames when declaring $IPS. Only IP addresses are supported by the inventory generator at this time.
6. Example configuration for the cluster in this guide.
~/kubespray/inventory/minio/hosts.ini
[all]
node1 ansible_host=kubernetes-master-ip ip=kubernetes-master-ip
node2 ansible_host=etcd-ip ip=etcd-ip
node3 ansible_host=slave-ip ip=slave-ip
[kube-master]
node1
[kube-node]
node2
node3
[etcd]
node1
[k8s-cluster:children]
kube-node
kube-master
[calico-rr]
[vault]
node1
node2
node3
7. Uncomment the line docker_dns_servers_strict: false in ~/kubespray/inventory/minio/group_vars/all.yml
Prepare Hosts for Ansible
Before Ansible can properly run Kubespray’s playbooks, the hosts must have a passwordless sudo user enabled, and swap disabled for Kubernetes. Make sure the specified user exists on each Linode prior to starting these steps. This section shows how to copy SSH keys to each Linode and modify the sudoers file over SSH.
1. Create a private key if you do not have one:
ssh-keygen -b 4096
2. Copy your SSH key to each IP listed in the inventory using the $IPS variable declared earlier and replace username with the username for each of the hosts.
for IP in ${IPS[@]}; do ssh-copy-id username@$IP; done
Create Passwordless Sudo on Nodes
Below is a loop that adds the line username ALL=(ALL:ALL) NOPASSWD: ALL to the last line of the sudoers file. You will be prompted for the password for each server.
for IP in ${IPS[@]}; do ssh -t username@$IP "echo 'username ALL=(ALL:ALL) NOPASSWD: ALL' | sudo EDITOR='tee -a' visudo"; done
Disable swap
Add this snippet below at the end of ~/kubespray/roles/bootstrap-os/tasks/main.yml to disable swap using Ansible.
~/kubespray/roles/bootstrap-os/tasks/main.yml
- name: Remove swapfile from /etc/fstab
mount:
name: swap
fstype: swap
state: absent
- name: Disable swap
command: swapoff -a
Run Ansible Playbook
Before running the Ansible playbook, make sure firewalls are turned off to avoid unexpected errors.
Run the cluster.yml Ansible playbook. If your private key is named differently or located elsewhere, add --private-key=/path/to/id_rsa to the end.
ansible-playbook -i inventory/minio/hosts.ini cluster.yml -b -v
Caution
This could take up to 20 minutes.
Add or Remove Nodes
1. Navigate into ~/kubespray/inventory/minio/hosts.ini and add the IP address of the new node.
2. Run ssh-copy-id to copy your SSH key to the new node:
ssh-copy-id username@new-node-ip
3. Run the scale.yml Ansible playbook:
ansible-playbook -i inventory/minio/hosts.ini scale.yml -b -v
4. SSH into the Kubernetes master node to list all the available nodes:
kubectl get nodes
5. To remove a node, simply turn off the server and clean up on the master node with:
kubectl delete node <ip-of-node>
Minio on Kubernetes
The commands in this section should be executed from the kubernetes-master Linode.
Create a Persistent Volume
Persistent Volumes (PV) are an abstraction in Kubernetes that represents a unit of storage provisioned in the cluster. A PersistentVolumeClaim (PVC) allows a pod to consume the storage set aside by a PV. This section creates a PV of 15Gi (gibibytes), then allows Minio to claim 10Gi of that space.
1. On the Kubernetes master node, create a file called minio-volume.yaml with the following YAML below. Replace username on the hostPath with the appropriate path.
minio-volume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: minio-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 15Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/username"
2. Create the PV:
kubectl create -f minio-volume.yaml
3. Create a PVC with minio-pvc.yaml:
minio-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-pv-claim
labels:
app: minio-storage-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
4. Create the PVC:
kubectl create -f minio-pvc.yaml
Create a Deployment
1. Create a Deployment configuration in minio-deployment.yaml and substitute username on the last line. The access and secret key are in the YAML file.
minio-deployment.yaml
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
# This name uniquely identifies the Deployment
name: minio-deployment
spec:
selector:
matchLabels:
app: minio
strategy:
type: Recreate
template:
metadata:
labels:
# Label is used as selector in the service.
app: minio
spec:
# Refer to the PVC created earlier
volumes:
- name: storage
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: minio-pv-claim
containers:
- name: minio
# Pulls the default Minio image from Docker Hub
image: minio/minio:latest
args:
- server
- /storage
env:
# Minio access key and secret key
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
hostPort: 9000
# Mount the volume into the pod
volumeMounts:
- name: storage # must match the volume name, above
mountPath: "/home/username"
2. Create the Deployment.
kubectl create -f minio-deployment.yaml
Create a Service
1. Create a file for the service called minio-service.yaml
minio-service.yaml
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
2. Deploy the Minio service:
kubectl create -f minio-service.yaml
3. See a list of running services. Under the column PORT(S), you can see that the Minio service is running internally on port 9000, with 30593 exposed externally by the LoadBalancer.
kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP      10.233.0.1      <none>        443/TCP          1d
minio-service   LoadBalancer   10.233.28.163   <pending>     9000:30593/TCP   20m
4. In a browser, navigate to the public IP address of any of the Linodes in the cluster, at the exposed port (30593 in the example above):
Minio Login Screen
5. Minio has similar functionality to S3: file uploads, creating buckets, and storing other data.
Minio Browser
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
ISP's can now sell your internet activity without your consent! Use this to protect your Data
Last Wednesday the US Senate voted on whether to dismiss the broadband privacy rules that require Internet Service Providers (ISPs) to get permission before selling users' data.
The senators took it to a vote, with 50 Republicans voting for the broadband rules to be dismissed and 48 Democrats voting for the rules to be retained. These privacy rules were created by the FCC (U.S. Federal Communications Commission), but US Internet Service Providers argued that the existing FCC rules favored the likes of Facebook, Google and Bing while they (the ISPs) were restricted from the same privileges.
Also Read: Why you need a VPN protection for your online activities
What Does This mean?
On Monday, US President Donald Trump signed the repeal of the privacy rules, which means that ISPs won't need your permission to sell your sensitive data. I know some people would be like "but how does this affect me?" Below we look at a few ways.
1. Sharing of your personal Data with Government agencies:
When ISPs log your online activities, the data may be accessed by outside organisations like the Police department, CIA etc when they provide a subpoena, legally telling your ISP to provide any information they have on you.
2. Worse than a Spyware:
Some ISPs are even worse than spyware. Let's take Verizon, an American ISP, for example. Verizon inserted "supercookies" into their customers' mobile browsers so as to keep tracking them even when they were not using the internet provided by Verizon. Thanks to the FCC, Verizon was fined and has since agreed to notify users about the cookies and give them the option to opt out of Verizon tracking their data.
3. Data Breach:
Data breaches these days have become so common that they are an everyday occurrence. Think of the outcome when one of these breaches happens to your ISP. The blow would be massive: your browser history, bank login credentials used on the web, saved passwords and logins and a whole lot more would be at stake. Others might even clone your profile to scam your friends; the risks are endless.
4. Marketing Purposes:
Selling of users data is one other thing that ISPs are notoriously known for. ISPs log users web activities and then sell them off to advertising firms and companies for marketing purposes.
Also Read: How to send a mail anonymously without revealing your email address and identity
5. Feeling of insecurity:
No one likes to be monitored. Would you want your boss to know how actively you are looking for a new job online? Or would you want your family to know about the sites you visit in secret which they would never approve of? The feeling is different when you know that there is someone in the dark who is constantly monitoring your online activities and can do whatever they like with your data.
How to Fix this?
The easiest way to fix this is to get the service of a VPN (Virtual Private Network) to shield you from your ISP and cybercriminals. You can click here for a comprehensive list of VPNs that would protect you and your computer from the hands of your ISP.
Your data should be your own private stuff and not another persons weapon against you or a tool for making money.
2 comments:
1. That's true, good post, thanks for the notice. When are you going to drop the Cell C trick for subscription of WhatsApp?
2. Yeah, a VPN, for example https://www.purevpn.com/, is the ideal solution. It can protect our Internet connection, make us anonymous online and hence protect our privacy.
WRT the problem of the abstraction bar being set wrong in the Office object model, here are two good “bad” examples, and a good “good” example of where the abstraction bar could be moved to.
Currently, to create a table in Word you have to write this code:
Dim myApp As New Word.Application
myApp.Visible = True
Dim myDoc As Word.Document
Set myDoc = myApp.Documents.Add()
Dim myRange As Word.Range
Set myRange = myDoc.Range(Start:=0, End:=0)
myDoc.Tables.Add Range:=myRange, NumRows:=3, NumColumns:=4
Then you have to write a bunch of tedious code to populate the table by walking a dataset. Ironically, the way we tell customers to create tables from data is not to create a table but to create an empty range and then get a dataset and convert it to a string and add some heading text with tabs embedded and, well you get the picture--far from the programmer’s conceptual model of creating a table. For more see http://support.microsoft.com/default.aspx?scid=kb;EN-US;q261999
Excel isn’t much better when it comes to the abstraction level. For example, imagine a programmer wants to write code that runs against an Excel Workbook containing an expense report that detects when a cell containing the employee’s e-mail address is edited. When the e-mail address is edited, the programmer wants to automatically lookup the employee’s employee ID and put it into another cell in the worksheet. The code a developer would write using the generic API of Excel might look like this.
Public Sub Workbook_OnChange(changedRange As Range)
If (changedRange.Address = "$A$4") Then
Document.Range("$B$1").Value = GetIDFromEMailAlias(changedRange.Value)
End If
End Sub
What you'd really like to do is code in terms of the domain of the problem. For example, the code above, written in the language of the domain of the problem being solved rather than the domain of Excel object model trivia, would look like this:
Public Sub EmployeeEmail_OnChange()
Employee.ID = GetIDFromEMailAlias(Employee.Email)
End Sub
13
$\begingroup$
I am reading a 1991 conference paper by Geyer which is linked below. In it he seems to allude to a method that can use MCMC for MLE parameter estimation.
This excites me since, I have coded BFGS algorithms, GAs and all sorts of these horrible hand wavy lucky-dip methods of finding global minima necessary to extract the estimation of parameters from MLEs.
The reason it excites me is that if we can guarantee convergence of the MCMC to a fixed point (e.g. a sufficient criterion would be satisfying detailed balance) then we can obtain parameters without minimising an MLE.
Conclusion is, therefore, that this provides a generic method to obtain the global minima, modulo constraints imposed above and in the paper. There are a number of algorithms for MCMC e.g. HMC that are well mapped for high dimensional MCMC problems and I would assume that they would outperform traditional gradient descent methods.
Question
1. Am I correct that this paper provides a theoretical basis for the usage of MCMC to obtain parameter estimates from MLEs?
2. Can one use an MCMC algorithm in certain circumstances, as outlined in the paper, to extract parameters from the MLE bypassing the needs for methods like Genetic Algorithms and BFGS etc.
Paper
Geyer, C. J. (1991). Markov chain Monte Carlo maximum likelihood. Computing Science and Statistics: Proc. 23rd Symp. Interface, 156–163.
Abstract
Markov chain Monte Carlo (e.g., the Metropolis algorithm and Gibbs sampler) is a general tool for simulation of complex stochastic processes useful in many types of statistical inference. The basics of Markov chain Monte Carlo are reviewed, including choice of algorithms and variance estimation, and some new methods are introduced. The use of Markov chain Monte Carlo for maximum likelihood estimation is explained and its performance is compared with maximum pseudo likelihood estimation.
Note: Sections 1-6 are boring and you probably know them already if you got this far. In Section 7 he gets to the interesting bit of what he terms "Monte Carlo Maximum Likelihood".
More resources
control+f for “Geyer”
$\endgroup$
11
• 1
$\begingroup$ For your reference, R package glmm here uses Monte Carlo to approximate the likelihood in GLMMs. The package is written by Geyer's student. In addition `R' package 'mcemGLM' here estimates MLE for GLMMs using Monte Carlo EM. The package is written by a student in the same department as Geyer. $\endgroup$ Commented Apr 26, 2016 at 16:23
• $\begingroup$ This is very promising. I have always felt that this area of statistics sucked. I mean it seems so backwards that some of the brightest minds in the world are dropping imaginary lemmings to walk to various minima (i.e. Monte Carlo GAs) to solve these problems $\endgroup$ Commented Apr 26, 2016 at 16:26
• 1
$\begingroup$ This paper by Booth and Hobert is considered seminal in the field. Also see this. Not directly related to your question, but still in the same neighborhood. $\endgroup$ Commented Apr 26, 2016 at 16:30
• 1
$\begingroup$ Just for curiosity, if your goal is to optimize a function, why don't you look at modern methods for global, nonconvex stochastic optimization as opposed to a MCMC paper from 1991? $\endgroup$
– lacerbi
Commented Apr 26, 2016 at 16:30
• $\begingroup$ @lacerbi because I'm a theoretical physics postgrad and I didn't even know that entire field existed (thanks!) and secondly because my problem at hand required distribution fitting. I know MCMC really well and I know MLE really well and I just had a feeling they might have a crossover that could be useful hence the paper I discovered $\endgroup$ Commented Apr 26, 2016 at 16:33
2 Answers
7
$\begingroup$
If I understand correctly, you are excited about MCMC in the case of multi-modal target functions. Your reasoning is that MCMC methods search the global parameter space, rather than just shooting up the closest mode and stopping.
While theoretically true, in practice MCMC often behaves somewhat similarly to hill climbing methods: once it finds a local mode, it often stays around that mode. Unlike hill climbing methods, there is a positive probability that the sampler will leave the mode, so theoretically it will explore the global space if left to run long enough. However, for most samplers, this probability is so extremely small that it is unreasonable to run the chain long enough to have any assurance that the sampler will properly explore the global space.
Of course, there are samplers that try to remedy this by attempting to take occasional outlier steps (i.e. see if it can escape the local mode). But I don't think these samplers are going to be at all competitive, in regards to optimization, with standard optimization methods for exploring multi-modal surfaces (i.e. particle swarm, etc.).
$\endgroup$
2
• $\begingroup$ w.r.t. escaping local minima, there are a family of MCMC routines (e.g this) based on Hamiltonian principles (from Physics) that seem reasonably competent at navigating these multi-modal spaces. Looking at your profile, appreciate this is your area of research and in fact my question comes in a similar light to your social "ramblings". I am not familiar with the methods but as an expert do you think that the MCMC method described above would have any merit at all? $\endgroup$ Commented Apr 29, 2016 at 19:01
• $\begingroup$ @AlexanderMcFarlane: not sure I would call myself an "expert" on MCMC, but rather have had some professional exposure (see r-nimble.org, a project I worked on for awhile). So take my advice with a grain of a salt. That said, I wouldn't use generic MCMC methods, such as MH random walks, for what you want. Samplers that aggressively try to explore the limits of the probability space may have more luck (paywall for your link, so no comment for whether it meets the criteria). $\endgroup$
– Cliff AB
Commented Apr 29, 2016 at 19:15
-1
$\begingroup$
MCMC doesn't generally converge to a fixed point. Convergence is to the stationary distribution of the Markov chain: the individual draws keep changing, but, loosely speaking, the distribution they are drawn from becomes fixed.
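In symbols (a standard statement of ergodic convergence, added for clarity rather than quoted from the answer): for a suitably ergodic chain with transition kernel $P$ and stationary distribution $\pi$,

$$\lVert P^n(x_0, \cdot) - \pi \rVert_{\mathrm{TV}} \;\longrightarrow\; 0 \quad \text{as } n \to \infty,$$

even though the individual draws $X_n$ never settle down to a single value.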
MCMC methods generally suffer from issues similar to those of other optimisation methods. For example, it is easy to design chains which rarely escape from local minima. There is a whole literature of tricks to solve such problems for various models.
That said, and in response to your second question, here's a quick and dirty way MCMC can be used for parameter estimation (a rough code sketch follows the steps below):
1. Run the chain, generating parameter samples.
2. Get the likelihood under each sample of the parameters.
3. Compare the MCMC samples' likelihoods to the likelihood at your favourite MLE.
4. If any of the MCMC samples do better, it wasn't really a global MLE.
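For concreteness, here is a minimal sketch of steps 2-4 in Python. It assumes you already have an array of MCMC parameter draws and a log-likelihood function; the names log_lik, data, mcmc_samples and theta_mle are hypothetical placeholders, not tied to any particular package.

    import numpy as np

    def mle_sanity_check(log_lik, data, mcmc_samples, theta_mle):
        # log_lik, data, mcmc_samples and theta_mle are placeholders for the
        # pieces of your own model; nothing here comes from a specific library.
        ll_mle = log_lik(theta_mle, data)                                      # step 2 for the candidate
        ll_draws = np.array([log_lik(theta, data) for theta in mcmc_samples])  # step 2 for the draws
        if np.any(ll_draws > ll_mle):                                          # steps 3-4
            best = mcmc_samples[int(np.argmax(ll_draws))]
            return False, best   # some draw beats the candidate: it was not a global MLE
        return True, theta_mle   # no draw beat it (reassuring, but not a proof)

Note that this check can only falsify a claimed global MLE; a clean pass proves nothing, since the chain may simply never have visited a better region.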
$\endgroup$
Contents of /slime/swank.lisp
Revision 1.252
Tue Oct 19 05:57:29 2004 UTC by heller
Branch: MAIN
Changes since 1.251: +4 -3 lines
(define-printer-variables): NIL is not a valid docstring. Reported by
Alain Picard.
1 ;;; -*- Mode: lisp; outline-regexp: ";;;;;*"; indent-tabs-mode: nil -*-;;;
2 ;;;
3 ;;; This code has been placed in the Public Domain. All warranties
4 ;;; are disclaimed.
5 ;;;
6 ;;;; swank.lisp
7 ;;;
8 ;;; This file defines the "Swank" TCP server for Emacs to talk to. The
9 ;;; code in this file is purely portable Common Lisp. We do require a
10 ;;; smattering of non-portable functions in order to write the server,
11 ;;; so we have defined them in `swank-backend.lisp' and implemented
12 ;;; them separately for each Lisp implementation. These extensions are
13 ;;; available to us here via the `SWANK-BACKEND' package.
14
15 (defpackage :swank
16 (:use :common-lisp :swank-backend)
17 (:export #:startup-multiprocessing
18 #:start-server
19 #:create-swank-server
20 #:create-server
21 #:ed-in-emacs
22 #:print-indentation-lossage
23 #:swank-debugger-hook
24 ;; These are user-configurable variables:
25 #:*communication-style*
26 #:*log-events*
27 #:*use-dedicated-output-stream*
28 #:*configure-emacs-indentation*
29 #:*readtable-alist*
30 #:*globally-redirect-io*
31 #:*global-debugger*
32 ;; These are re-exported directly from the backend:
33 #:buffer-first-change
34 #:frame-source-location-for-emacs
35 #:restart-frame
36 #:sldb-step
37 #:sldb-break
38 #:sldb-break-on-return
39 #:profiled-functions
40 #:profile-report
41 #:profile-reset
42 #:unprofile-all
43 #:profile-package
44 #:default-directory
45 #:set-default-directory
46 #:quit-lisp
47 ))
48
49 (in-package #:swank)
50
51 ;;;; Top-level variables, constants, macros
52
53 (defconstant cl-package (find-package :cl)
54 "The COMMON-LISP package.")
55
56 (defconstant keyword-package (find-package :keyword)
57 "The KEYWORD package.")
58
59 (defvar *canonical-packge-nicknames*
60 '(("COMMON-LISP-USER" . "CL-USER"))
61 "Canonical package names to use instead of shortest name/nickname.")
62
63 (defvar *auto-abbreviate-dotted-packages* t
64 "Automatically abbreviate dotted package names to their last component when T.")
65
66 (defvar *swank-io-package*
67 (let ((package (make-package :swank-io-package :use '())))
68 (import '(nil t quote) package)
69 package))
70
71 (defconstant default-server-port 4005
72 "The default TCP port for the server (when started manually).")
73
74 (defvar *swank-debug-p* t
75 "When true, print extra debugging information.")
76
77 ;;; The `DEFSLIMEFUN' macro defines a function that Emacs can call via
78 ;;; RPC.
79
80 (defmacro defslimefun (name arglist &body rest)
81 "A DEFUN for functions that Emacs can call by RPC."
82 `(progn
83 (defun ,name ,arglist ,@rest)
84 ;; see <http://www.franz.com/support/documentation/6.2/doc/pages/variables/compiler/s_cltl1-compile-file-toplevel-compatibility-p_s.htm>
85 (eval-when (:compile-toplevel :load-toplevel :execute)
86 (export ',name :swank))))
87
88 (declaim (ftype (function () nil) missing-arg))
89 (defun missing-arg ()
90 "A function that the compiler knows will never to return a value.
91 You can use (MISSING-ARG) as the initform for defstruct slots that
92 must always be supplied. This way the :TYPE slot option need not
93 include some arbitrary initial value like NIL."
94 (error "A required &KEY or &OPTIONAL argument was not supplied."))
95
96 ;;;; Hooks
97 ;;;
98 ;;; We use Emacs-like `add-hook' and `run-hook' utilities to support
99 ;;; simple indirection. The interface is more CLish than the Emacs
100 ;;; Lisp one.
101
102 (defmacro add-hook (place function)
103 "Add FUNCTION to the list of values on PLACE."
104 `(pushnew ,function ,place))
105
106 (defun run-hook (functions &rest arguments)
107 "Call each of FUNCTIONS with ARGUMENTS."
108 (dolist (function functions)
109 (apply function arguments)))
110
111 (defvar *new-connection-hook* '()
112 "This hook is run each time a connection is established.
113 The connection structure is given as the argument.
114 Backend code should treat the connection structure as opaque.")
115
116 (defvar *connection-closed-hook* '()
117 "This hook is run when a connection is closed.
118 The connection is passed as an argument.
119 Backend code should treat the connection structure as opaque.")
120
121 (defvar *pre-reply-hook* '()
122 "Hook run (without arguments) immediately before replying to an RPC.")
123
124 ;;;; Connections
125 ;;;
126 ;;; Connection structures represent the network connections between
127 ;;; Emacs and Lisp. Each has a socket stream, a set of user I/O
128 ;;; streams that redirect to Emacs, and optionally a second socket
129 ;;; used solely to pipe user-output to Emacs (an optimization).
130 ;;;
131
132 (defstruct (connection
133 (:conc-name connection.)
134 (:print-function print-connection))
135 ;; Raw I/O stream of socket connection.
136 (socket-io (missing-arg) :type stream :read-only t)
137 ;; Optional dedicated output socket (backending `user-output' slot).
138 ;; Has a slot so that it can be closed with the connection.
139 (dedicated-output nil :type (or stream null))
140 ;; Streams that can be used for user interaction, with requests
141 ;; redirected to Emacs.
142 (user-input nil :type (or stream null))
143 (user-output nil :type (or stream null))
144 (user-io nil :type (or stream null))
145 ;; In multithreaded systems we delegate certain tasks to specific
146 ;; threads. The `reader-thread' is responsible for reading network
147 ;; requests from Emacs and sending them to the `control-thread'; the
148 ;; `control-thread' is responsible for dispatching requests to the
149 ;; threads that should handle them; the `repl-thread' is the one
150 ;; that evaluates REPL expressions. The control thread dispatches
151 ;; all REPL evaluations to the REPL thread and for other requests it
152 ;; spawns new threads.
153 reader-thread
154 control-thread
155 repl-thread
156 ;; Callback functions:
157 ;; (SERVE-REQUESTS <this-connection>) serves all pending requests
158 ;; from Emacs.
159 (serve-requests (missing-arg) :type function)
160 ;; (READ) is called to read and return one message from Emacs.
161 (read (missing-arg) :type function)
162 ;; (SEND OBJECT) is called to send one message to Emacs.
163 (send (missing-arg) :type function)
164 ;; (CLEANUP <this-connection>) is called when the connection is
165 ;; closed.
166 (cleanup nil :type (or null function))
167 ;; Cache of macro-indentation information that has been sent to Emacs.
168 ;; This is used for preparing deltas to update Emacs's knowledge.
169 ;; Maps: symbol -> indentation-specification
170 (indentation-cache (make-hash-table :test 'eq) :type hash-table)
171 ;; The list of packages represented in the cache:
172 (indentation-cache-packages '()))
173
174 (defun print-connection (conn stream depth)
175 (declare (ignore depth))
176 (print-unreadable-object (conn stream :type t :identity t)))
177
178 (defvar *connections* '()
179 "List of all active connections, with the most recent at the front.")
180
181 (defvar *emacs-connection* nil
182 "The connection to Emacs currently in use.")
183
184 (defvar *swank-state-stack* '()
185 "A list of symbols describing the current state. Used for debugging
186 and to detect situations where interrupts can be ignored.")
187
188 (defun default-connection ()
189 "Return the 'default' Emacs connection.
190 This connection can be used to talk with Emacs when no specific
191 connection is in use, i.e. *EMACS-CONNECTION* is NIL.
192
193 The default connection is defined (quite arbitrarily) as the most
194 recently established one."
195 (first *connections*))
196
197 (defslimefun state-stack ()
198 "Return the value of *SWANK-STATE-STACK*."
199 *swank-state-stack*)
200
201 (define-condition slime-protocol-error (error)
202 ((condition :initarg :condition :reader slime-protocol-error.condition))
203 (:report (lambda (condition stream)
204 (format stream "~A" (slime-protocol-error.condition condition)))))
205
206 (add-hook *new-connection-hook* 'notify-backend-of-connection)
207 (defun notify-backend-of-connection (connection)
208 (emacs-connected (connection.user-io connection)))
209
210 ;;;; Helper macros
211
212 (defmacro with-io-redirection ((connection) &body body)
213 "Execute BODY I/O redirection to CONNECTION.
214 If *REDIRECT-IO* is true then all standard I/O streams are redirected."
215 `(if *redirect-io*
216 (call-with-redirected-io ,connection (lambda () ,@body))
217 (progn ,@body)))
218
219 (defmacro with-connection ((connection) &body body)
220 "Execute BODY in the context of CONNECTION."
221 `(let ((*emacs-connection* ,connection))
222 (catch 'slime-toplevel
223 (with-io-redirection (*emacs-connection*)
224 (let ((*debugger-hook* #'swank-debugger-hook))
225 ,@body)))))
226
227 (defmacro without-interrupts (&body body)
228 `(call-without-interrupts (lambda () ,@body)))
229
230 (defmacro destructure-case (value &rest patterns)
231 "Dispatch VALUE to one of PATTERNS.
232 A cross between `case' and `destructuring-bind'.
233 The pattern syntax is:
234 ((HEAD . ARGS) . BODY)
235 The list of patterns is searched for a HEAD `eq' to the car of
236 VALUE. If one is found, the BODY is executed with ARGS bound to the
237 corresponding values in the CDR of VALUE."
238 (let ((operator (gensym "op-"))
239 (operands (gensym "rand-"))
240 (tmp (gensym "tmp-")))
241 `(let* ((,tmp ,value)
242 (,operator (car ,tmp))
243 (,operands (cdr ,tmp)))
244 (case ,operator
245 ,@(loop for (pattern . body) in patterns collect
246 (if (eq pattern t)
247 `(t ,@body)
248 (destructuring-bind (op &rest rands) pattern
249 `(,op (destructuring-bind ,rands ,operands
250 ,@body)))))
251 ,@(if (eq (caar (last patterns)) t)
252 '()
253 `((t (error "destructure-case failed: ~S" ,tmp))))))))
254
255 (defmacro with-temp-package (var &body body)
256 "Execute BODY with VAR bound to a temporary package.
257 The package is deleted before returning."
258 `(let ((,var (make-package (gensym "TEMP-PACKAGE-"))))
259 (unwind-protect (progn ,@body)
260 (delete-package ,var))))
261
262 ;;;; TCP Server
263
264 (defparameter *redirect-io* t
265 "When non-nil redirect Lisp standard I/O to Emacs.
266 Redirection is done while Lisp is processing a request for Emacs.")
267
268 (defvar *use-dedicated-output-stream* t)
269 (defvar *communication-style* (preferred-communication-style))
270 (defvar *log-events* nil)
271
272 (defun start-server (port-file &optional (style *communication-style*)
273 dont-close)
274 "Start the server and write the listen port number to PORT-FILE.
275 This is the entry point for Emacs."
276 (setup-server 0 (lambda (port) (announce-server-port port-file port))
277 style dont-close))
278
279 (defun create-server (&key (port default-server-port)
280 (style *communication-style*)
281 dont-close)
282 "Start a SWANK server on PORT running in STYLE.
283 If DONT-CLOSE is true then the listen socket will accept multiple
284 connections, otherwise it will be closed after the first."
285 (setup-server port #'simple-announce-function style dont-close))
286
287 (defun create-swank-server (&optional (port default-server-port)
288 (style *communication-style*)
289 (announce-fn #'simple-announce-function)
290 dont-close)
291 (setup-server port announce-fn style dont-close))
292
293 (defparameter *loopback-interface* "127.0.0.1")
294
295 (defun setup-server (port announce-fn style dont-close)
296 (declare (type function announce-fn))
297 (let* ((socket (create-socket *loopback-interface* port))
298 (port (local-port socket)))
299 (funcall announce-fn port)
300 (ecase style
301 (:spawn
302 (spawn (lambda ()
303 (loop do (serve-connection socket :spawn dont-close)
304 while dont-close))
305 :name "Swank"))
306 ((:fd-handler :sigio)
307 (add-fd-handler socket
308 (lambda ()
309 (serve-connection socket style dont-close))))
310 ((nil)
311 (unwind-protect
312 (loop do (serve-connection socket style dont-close)
313 while dont-close)
314 (close-socket socket))))
315 port))
316
317 (defun serve-connection (socket style dont-close)
318 (let ((client (accept-connection socket)))
319 (unless dont-close
320 (close-socket socket))
321 (let ((connection (create-connection client style)))
322 (run-hook *new-connection-hook* connection)
323 (push connection *connections*)
324 (serve-requests connection))))
325
326 (defun serve-requests (connection)
327 "Read and process all requests on connections."
328 (funcall (connection.serve-requests connection) connection))
329
330 (defun announce-server-port (file port)
331 (with-open-file (s file
332 :direction :output
333 :if-exists :overwrite
334 :if-does-not-exist :create)
335 (format s "~S~%" port))
336 (simple-announce-function port))
337
338 (defun simple-announce-function (port)
339 (when *swank-debug-p*
340 (format *debug-io* "~&;; Swank started at port: ~D.~%" port)))
341
342 (defun open-streams (connection)
343 "Return the 4 streams for IO redirection:
344 DEDICATED-OUTPUT INPUT OUTPUT IO"
345 (multiple-value-bind (output-fn dedicated-output)
346 (make-output-function connection)
347 (let ((input-fn
348 (lambda ()
349 (with-connection (connection)
350 (with-simple-restart (abort-read
351 "Abort reading input from Emacs.")
352 (read-user-input-from-emacs))))))
353 (multiple-value-bind (in out) (make-fn-streams input-fn output-fn)
354 (let ((out (or dedicated-output out)))
355 (let ((io (make-two-way-stream in out)))
356 (mapc #'make-stream-interactive (list in out io))
357 (values dedicated-output in out io)))))))
358
359 (defun make-output-function (connection)
360 "Create function to send user output to Emacs.
361 This function may open a dedicated socket to send output. It
362 returns two values: the output function, and the dedicated
363 stream (or NIL if none was created)."
364 (if *use-dedicated-output-stream*
365 (let ((stream (open-dedicated-output-stream
366 (connection.socket-io connection))))
367 (values (lambda (string)
368 (write-string string stream)
369 (force-output stream))
370 stream))
371 (values (lambda (string)
372 (with-connection (connection)
373 (with-simple-restart
374 (abort "Abort sending output to Emacs.")
375 (send-to-emacs `(:read-output ,string)))))
376 nil)))
377
378 (defun open-dedicated-output-stream (socket-io)
379 "Open a dedicated output connection to the Emacs on SOCKET-IO.
380 Return an output stream suitable for writing program output.
381
382 This is an optimized way for Lisp to deliver output to Emacs."
383 (let* ((socket (create-socket *loopback-interface* 0))
384 (port (local-port socket)))
385 (encode-message `(:open-dedicated-output-stream ,port) socket-io)
386 (accept-connection socket)))
387
388 (defun handle-request (connection)
389 "Read and process one request. The processing is done in the extend
390 of the toplevel restart."
391 (assert (null *swank-state-stack*))
392 (let ((*swank-state-stack* '(:handle-request))
393 (*debugger-hook* nil))
394 (with-connection (connection)
395 (with-simple-restart (abort "Abort handling SLIME request.")
396 (read-from-emacs)))))
397
398 (defun current-socket-io ()
399 (connection.socket-io *emacs-connection*))
400
401 (defun close-connection (c &optional condition)
402 (let ((cleanup (connection.cleanup c)))
403 (when cleanup
404 (funcall cleanup c)))
405 (close (connection.socket-io c))
406 (when (connection.dedicated-output c)
407 (close (connection.dedicated-output c)))
408 (setf *connections* (remove c *connections*))
409 (run-hook *connection-closed-hook* c)
410 (when condition
411 (format *debug-io* "~&;; Connection to Emacs lost.~%;; [~A]~%" condition)))
412
413 (defmacro with-reader-error-handler ((connection) &body body)
414 `(handler-case (progn ,@body)
415 (slime-protocol-error (e)
416 (close-connection ,connection e))))
417
418 (defun simple-break ()
419 (with-simple-restart (continue "Continue from interrupt.")
420 (let ((*debugger-hook* #'swank-debugger-hook))
421 (invoke-debugger
422 (make-condition 'simple-error
423 :format-control "Interrupt from Emacs")))))
424
425 ;;;;;; Thread based communication
426
427 (defvar *active-threads* '())
428
429 (defun read-loop (control-thread input-stream connection)
430 (with-reader-error-handler (connection)
431 (loop (send control-thread (decode-message input-stream)))))
432
433 (defun dispatch-loop (socket-io connection)
434 (let ((*emacs-connection* connection))
435 (loop (with-simple-restart (abort "Restart dispatch loop.")
436 (loop (dispatch-event (receive) socket-io))))))
437
438 (defun repl-thread (connection)
439 (let ((thread (connection.repl-thread connection)))
440 (if (thread-alive-p thread)
441 thread
442 (setf (connection.repl-thread connection)
443 (spawn (lambda () (repl-loop connection))
444 :name "new-repl-thread")))))
445
446 (defun find-worker-thread (id)
447 (etypecase id
448 ((member t)
449 (car *active-threads*))
450 ((member :repl-thread)
451 (repl-thread *emacs-connection*))
452 (fixnum
453 (find-thread id))))
454
455 (defun interrupt-worker-thread (id)
456 (let ((thread (or (find-worker-thread id)
457 (repl-thread *emacs-connection*))))
458 (interrupt-thread thread #'simple-break)))
459
460 (defun thread-for-evaluation (id)
461 "Find or create a thread to evaluate the next request."
462 (let ((c *emacs-connection*))
463 (etypecase id
464 ((member t)
465 (spawn (lambda () (handle-request c)) :name "worker"))
466 ((member :repl-thread)
467 (repl-thread c))
468 (fixnum
469 (find-thread id)))))
470
471 (defun dispatch-event (event socket-io)
472 "Handle an event triggered either by Emacs or within Lisp."
473 (log-event "DISPATCHING: ~S~%" event)
474 (destructure-case event
475 ((:emacs-rex form package thread-id id)
476 (let ((thread (thread-for-evaluation thread-id)))
477 (push thread *active-threads*)
478 (send thread `(eval-for-emacs ,form ,package ,id))))
479 ((:return thread &rest args)
480 (let ((tail (member thread *active-threads*)))
481 (setq *active-threads* (nconc (ldiff *active-threads* tail)
482 (cdr tail))))
483 (encode-message `(:return ,@args) socket-io))
484 ((:emacs-interrupt thread-id)
485 (interrupt-worker-thread thread-id))
486 (((:debug :debug-condition :debug-activate :debug-return)
487 thread &rest args)
488 (encode-message `(,(car event) ,(thread-id thread) ,@args) socket-io))
489 ((:read-string thread &rest args)
490 (encode-message `(:read-string ,(thread-id thread) ,@args) socket-io))
491 ((:read-aborted thread &rest args)
492 (encode-message `(:read-aborted ,(thread-id thread) ,@args) socket-io))
493 ((:emacs-return-string thread-id tag string)
494 (send (find-thread thread-id) `(take-input ,tag ,string)))
495 (((:read-output :new-package :new-features :ed :%apply :indentation-update)
496 &rest _)
497 (declare (ignore _))
498 (encode-message event socket-io))
499 ))
500
501 (defun spawn-threads-for-connection (connection)
502 (let* ((socket-io (connection.socket-io connection))
503 (control-thread (spawn (lambda ()
504 (let ((*debugger-hook* nil))
505 (dispatch-loop socket-io connection)))
506 :name "control-thread")))
507 (setf (connection.control-thread connection) control-thread)
508 (let ((reader-thread (spawn (lambda ()
509 (let ((*debugger-hook* nil))
510 (read-loop control-thread socket-io
511 connection)))
512 :name "reader-thread"))
513 (repl-thread (spawn (lambda () (repl-loop connection))
514 :name "repl-thread")))
515 (setf (connection.reader-thread connection) reader-thread)
516 (setf (connection.repl-thread connection) repl-thread)
517 connection)))
518
519 (defun cleanup-connection-threads (connection)
520 (kill-thread (connection.control-thread connection))
521 (kill-thread (connection.repl-thread connection)))
522
523 (defun repl-loop (connection)
524 (with-connection (connection)
525 (loop (handle-request connection))))
526
527 (defun process-available-input (stream fn)
528 (loop while (and (open-stream-p stream)
529 (listen stream))
530 do (funcall fn)))
531
532 ;;;;;; Signal driven IO
533
534 (defun install-sigio-handler (connection)
535 (let ((client (connection.socket-io connection)))
536 (flet ((handler ()
537 (cond ((null *swank-state-stack*)
538 (with-reader-error-handler (connection)
539 (process-available-input
540 client (lambda () (handle-request connection)))))
541 ((eq (car *swank-state-stack*) :read-next-form))
542 (t (process-available-input client #'read-from-emacs)))))
543 (add-sigio-handler client #'handler)
544 (handler))))
545
546 (defun deinstall-sigio-handler (connection)
547 (remove-sigio-handlers (connection.socket-io connection)))
548
549 ;;;;;; SERVE-EVENT based IO
550
551 (defun install-fd-handler (connection)
552 (let ((client (connection.socket-io connection)))
553 (flet ((handler ()
554 (cond ((null *swank-state-stack*)
555 (with-reader-error-handler (connection)
556 (process-available-input
557 client (lambda () (handle-request connection)))))
558 ((eq (car *swank-state-stack*) :read-next-form))
559 (t (process-available-input client #'read-from-emacs)))))
560 (encode-message '(:use-sigint-for-interrupt) client)
561 (setq *debugger-hook*
562 (lambda (c h)
563 (with-reader-error-handler (connection)
564 (block debugger
565 (with-connection (connection)
566 (swank-debugger-hook c h)
567 (return-from debugger))
568 (abort)))))
569 (add-fd-handler client #'handler)
570 (handler))))
571
572 (defun deinstall-fd-handler (connection)
573 (remove-fd-handlers (connection.socket-io connection)))
574
575 ;;;;;; Simple sequential IO
576
577 (defun simple-serve-requests (connection)
578 (let ((socket-io (connection.socket-io connection)))
579 (encode-message '(:use-sigint-for-interrupt) socket-io)
580 (with-reader-error-handler (connection)
581 (loop (handle-request connection)))))
582
583 (defun read-from-socket-io ()
584 (let ((event (decode-message (current-socket-io))))
585 (log-event "DISPATCHING: ~S~%" event)
586 (destructure-case event
587 ((:emacs-rex form package thread id)
588 (declare (ignore thread))
589 `(eval-for-emacs ,form ,package ,id))
590 ((:emacs-interrupt thread)
591 (declare (ignore thread))
592 '(simple-break))
593 ((:emacs-return-string thread tag string)
594 (declare (ignore thread))
595 `(take-input ,tag ,string)))))
596
597 (defun send-to-socket-io (event)
598 (log-event "DISPATCHING: ~S~%" event)
599 (flet ((send (o) (without-interrupts
600 (encode-message o (current-socket-io)))))
601 (destructure-case event
602 (((:debug-activate :debug :debug-return :read-string :read-aborted)
603 thread &rest args)
604 (declare (ignore thread))
605 (send `(,(car event) 0 ,@args)))
606 ((:return thread &rest args)
607 (declare (ignore thread))
608 (send `(:return ,@args)))
609 (((:read-output :new-package :new-features :debug-condition
610 :indentation-update :ed :%apply)
611 &rest _)
612 (declare (ignore _))
613 (send event)))))
614
615 (defun initialize-streams-for-connection (connection)
616 (multiple-value-bind (dedicated in out io) (open-streams connection)
617 (setf (connection.dedicated-output connection) dedicated
618 (connection.user-io connection) io
619 (connection.user-output connection) out
620 (connection.user-input connection) in)
621 connection))
622
623 (defun create-connection (socket-io style)
624 (initialize-streams-for-connection
625 (ecase style
626 (:spawn
627 (make-connection :socket-io socket-io
628 :read #'read-from-control-thread
629 :send #'send-to-control-thread
630 :serve-requests #'spawn-threads-for-connection
631 :cleanup #'cleanup-connection-threads))
632 (:sigio
633 (make-connection :socket-io socket-io
634 :read #'read-from-socket-io
635 :send #'send-to-socket-io
636 :serve-requests #'install-sigio-handler
637 :cleanup #'deinstall-sigio-handler))
638 (:fd-handler
639 (make-connection :socket-io socket-io
640 :read #'read-from-socket-io
641 :send #'send-to-socket-io
642 :serve-requests #'install-fd-handler
643 :cleanup #'deinstall-fd-handler))
644 ((nil)
645 (make-connection :socket-io socket-io
646 :read #'read-from-socket-io
647 :send #'send-to-socket-io
648 :serve-requests #'simple-serve-requests)))))
649
650
651 ;;;; IO to Emacs
652 ;;;
653 ;;; This code handles redirection of the standard I/O streams
654 ;;; (`*standard-output*', etc) into Emacs. The `connection' structure
655 ;;; contains the appropriate streams, so all we have to do is make the
656 ;;; right bindings.
657
658 ;;;;; Global I/O redirection framework
659 ;;;
660 ;;; Optionally, the top-level global bindings of the standard streams
661 ;;; can be assigned to be redirected to Emacs. When Emacs connects we
662 ;;; redirect the streams into the connection, and they keep going into
663 ;;; that connection even if more are established. If the connection
664 ;;; handling the streams closes then another is chosen, or if there
665 ;;; are no connections then we revert to the original (real) streams.
666 ;;;
667 ;;; It is slightly tricky to assign the global values of standard
668 ;;; streams because they are often shadowed by dynamic bindings. We
669 ;;; solve this problem by introducing an extra indirection via synonym
670 ;;; streams, so that *STANDARD-INPUT* is a synonym stream to
671 ;;; *CURRENT-STANDARD-INPUT*, etc. We never shadow the "current"
672 ;;; variables, so they can always be assigned to affect a global
673 ;;; change.
674
675 (defvar *globally-redirect-io* nil
676 "When non-nil globally redirect all standard streams to Emacs.")
677
678 (defmacro setup-stream-indirection (stream-var)
679 "Setup redirection scaffolding for a global stream variable.
680 Supposing (for example) STREAM-VAR is *STANDARD-INPUT*, this macro:
681
682 1. Saves the value of *STANDARD-INPUT* in a variable called
683 *REAL-STANDARD-INPUT*.
684
685 2. Creates *CURRENT-STANDARD-INPUT*, initially with the same value as
686 *STANDARD-INPUT*.
687
688 3. Assigns *STANDARD-INPUT* to a synonym stream pointing to
689 *CURRENT-STANDARD-INPUT*.
690
691 This has the effect of making *CURRENT-STANDARD-INPUT* contain the
692 effective global value for *STANDARD-INPUT*. This way we can assign
693 the effective global value even when *STANDARD-INPUT* is shadowed by a
694 dynamic binding."
695 (let ((real-stream-var (prefixed-var "REAL" stream-var))
696 (current-stream-var (prefixed-var "CURRENT" stream-var)))
697 `(progn
698 ;; Save the real stream value for the future.
699 (defvar ,real-stream-var ,stream-var)
700 ;; Define a new variable for the effective stream.
701 ;; This can be reassigned.
702 (defvar ,current-stream-var ,stream-var)
703 ;; Assign the real binding as a synonym for the current one.
704 (setq ,stream-var (make-synonym-stream ',current-stream-var)))))
705
706 (eval-when (:compile-toplevel :load-toplevel :execute)
707 (defun prefixed-var (prefix variable-symbol)
708 "(PREFIXED-VAR \"FOO\" '*BAR*) => SWANK::*FOO-BAR*"
709 (let ((basename (subseq (symbol-name variable-symbol) 1)))
710 (intern (format nil "*~A-~A" prefix basename) :swank))))
711
712 ;;;;; Global redirection setup
713
714 (setup-stream-indirection *standard-output*)
715 (setup-stream-indirection *error-output*)
716 (setup-stream-indirection *trace-output*)
717 (setup-stream-indirection *standard-input*)
718 (setup-stream-indirection *debug-io*)
719 (setup-stream-indirection *query-io*)
720 (setup-stream-indirection *terminal-io*)
721
722 (defparameter *standard-output-streams*
723 '(*standard-output* *error-output* *trace-output*)
724 "The symbols naming standard output streams.")
725
726 (defparameter *standard-input-streams*
727 '(*standard-input*)
728 "The symbols naming standard input streams.")
729
730 (defparameter *standard-io-streams*
731 '(*debug-io* *query-io* *terminal-io*)
732 "The symbols naming standard io streams.")
733
734 (defun globally-redirect-io-to-connection (connection)
735 "Set the standard I/O streams to redirect to CONNECTION.
736 Assigns *CURRENT-<STREAM>* for all standard streams."
737 (dolist (o *standard-output-streams*)
738 (set (prefixed-var "CURRENT" o)
739 (connection.user-output connection)))
740 ;; FIXME: If we redirect standard input to Emacs then we get the
741 ;; regular Lisp top-level trying to read from our REPL.
742 ;;
743 ;; Perhaps the ideal would be for the real top-level to run in a
744 ;; thread with local bindings for all the standard streams. Failing
745 ;; that we probably would like to inhibit it from reading while
746 ;; Emacs is connected.
747 ;;
748 ;; Meanwhile we just leave *standard-input* alone.
749 #+NIL
750 (dolist (i *standard-input-streams*)
751 (set (prefixed-var "CURRENT" i)
752 (connection.user-input connection)))
753 (dolist (io *standard-io-streams*)
754 (set (prefixed-var "CURRENT" io)
755 (connection.user-io connection))))
756
757 (defun revert-global-io-redirection ()
758 "Set *CURRENT-<STREAM>* to *REAL-<STREAM>* for all standard streams."
759 (dolist (stream-var (append *standard-output-streams*
760 *standard-input-streams*
761 *standard-io-streams*))
762 (set (prefixed-var "CURRENT" stream-var)
763 (symbol-value (prefixed-var "REAL" stream-var)))))
764
765 ;;;;; Global redirection hooks
766
767 (defvar *global-stdio-connection* nil
768 "The connection to which standard I/O streams are globally redirected.
769 NIL if streams are not globally redirected.")
770
771 (defun maybe-redirect-global-io (connection)
772 "Consider globally redirecting to a newly-established CONNECTION."
773 (when (and *globally-redirect-io* (null *global-stdio-connection*))
774 (setq *global-stdio-connection* connection)
775 (globally-redirect-io-to-connection connection)))
776
777 (defun update-redirection-after-close (closed-connection)
778 "Update redirection after a connection closes."
779 (when (eq *global-stdio-connection* closed-connection)
780 (if (and (default-connection) *globally-redirect-io*)
781 ;; Redirect to another connection.
782 (globally-redirect-io-to-connection (default-connection))
783 ;; No more connections, revert to the real streams.
784 (progn (revert-global-io-redirection)
785 (setq *global-stdio-connection* nil)))))
786
787 (add-hook *new-connection-hook* 'maybe-redirect-global-io)
788 (add-hook *connection-closed-hook* 'update-redirection-after-close)
789
790 ;;;;; Redirection during requests
791 ;;;
792 ;;; We always redirect the standard streams to Emacs while evaluating
793 ;;; an RPC. This is done with simple dynamic bindings.
794
795 (defun call-with-redirected-io (connection function)
796 "Call FUNCTION with I/O streams redirected via CONNECTION."
797 (declare (type function function))
798 (let* ((io (connection.user-io connection))
799 (in (connection.user-input connection))
800 (out (connection.user-output connection))
801 (*standard-output* out)
802 (*error-output* out)
803 (*trace-output* out)
804 (*debug-io* io)
805 (*query-io* io)
806 (*standard-input* in)
807 (*terminal-io* io))
808 (funcall function)))
809
810 (defvar *log-io* *terminal-io*)
811
812 (defun log-event (format-string &rest args)
813 "Write a message to *terminal-io* when *log-events* is non-nil.
814 Useful for low level debugging."
815 (when *log-events*
816 (apply #'format *log-io* format-string args)))
817
818 (defun read-from-emacs ()
819 "Read and process a request from Emacs."
820 (apply #'funcall (funcall (connection.read *emacs-connection*))))
821
822 (defun read-from-control-thread ()
823 (receive))
824
825 (defun decode-message (stream)
826 "Read an S-expression from STREAM using the SLIME protocol.
827 If a protocol error occurs then a SLIME-PROTOCOL-ERROR is signalled."
828 (let ((*swank-state-stack* (cons :read-next-form *swank-state-stack*)))
829 (flet ((next-byte () (char-code (read-char stream t))))
830 (handler-case
831 (let* ((length (logior (ash (next-byte) 16)
832 (ash (next-byte) 8)
833 (next-byte)))
834 (string (make-string length))
835 (pos (read-sequence string stream)))
836 (assert (= pos length) ()
837 "Short read: length=~D pos=~D" length pos)
838 (let ((form (read-form string)))
839 (log-event "READ: ~A~%" string)
840 form))
841 (serious-condition (c)
842 (error (make-condition 'slime-protocol-error :condition c)))))))
843
844 (defun read-form (string)
845 (with-standard-io-syntax
846 (let ((*package* *swank-io-package*))
847 (read-from-string string))))
848
849 (defvar *slime-features* nil
850 "The feature list that has been sent to Emacs.")
851
852 (defun send-to-emacs (object)
853 "Send OBJECT to Emacs."
854 (funcall (connection.send *emacs-connection*) object))
855
856 (defun send-oob-to-emacs (object)
857 (send-to-emacs object))
858
859 (defun send-to-control-thread (object)
860 (send (connection.control-thread *emacs-connection*) object))
861
862 (defun encode-message (message stream)
863 (let* ((string (prin1-to-string-for-emacs message))
864 (length (1+ (length string))))
865 (log-event "WRITE: ~A~%" string)
866 (loop for position from 16 downto 0 by 8
867 do (write-char (code-char (ldb (byte 8 position) length))
868 stream))
869 (write-string string stream)
870 (terpri stream)
871 (force-output stream)))
872
873 (defun prin1-to-string-for-emacs (object)
874 (with-standard-io-syntax
875 (let ((*print-case* :downcase)
876 (*print-readably* nil)
877 (*print-pretty* nil)
878 (*package* *swank-io-package*))
879 (prin1-to-string object))))
880
881 (defun force-user-output ()
882 (force-output (connection.user-io *emacs-connection*))
883 (force-output (connection.user-output *emacs-connection*)))
884
885 (defun clear-user-input ()
886 (clear-input (connection.user-input *emacs-connection*)))
887
888 (defvar *read-input-catch-tag* 0)
889
890 (defun intern-catch-tag (tag)
891 ;; fixnums aren't eq in ABCL, so we use intern to create tags
892 (intern (format nil "~D" tag) :swank))
893
894 (defun read-user-input-from-emacs ()
895 (let ((*read-input-catch-tag* (1+ *read-input-catch-tag*)))
896 (force-output)
897 (send-to-emacs `(:read-string ,(current-thread) ,*read-input-catch-tag*))
898 (let ((ok nil))
899 (unwind-protect
900 (prog1 (catch (intern-catch-tag *read-input-catch-tag*)
901 (loop (read-from-emacs)))
902 (setq ok t))
903 (unless ok
904 (send-to-emacs `(:read-aborted ,(current-thread)
905 ,*read-input-catch-tag*)))))))
906
907 (defslimefun take-input (tag input)
908 "Return the string INPUT to the continuation TAG."
909 (throw (intern-catch-tag tag) input))
910
911 (defslimefun connection-info ()
912 "Return a list of the form:
913 \(PID IMPLEMENTATION-TYPE IMPLEMENTATION-NAME FEATURES)."
914 (list (getpid)
915 (lisp-implementation-type)
916 (lisp-implementation-type-name)
917 (setq *slime-features* *features*)))
918
919
920 ;;;; Reading and printing
921
922 (defmacro define-special (name doc)
923 "Define a special variable NAME with doc string DOC.
924 This is like defvar, but NAME will not be initialized."
925 `(progn
926 (defvar ,name)
927 (setf (documentation ',name 'variable) ,doc)))
928
929 (define-special *buffer-package*
930 "Package corresponding to slime-buffer-package.
931
932 EVAL-FOR-EMACS binds *buffer-package*. Strings originating from a slime
933 buffer are best read in this package. See also FROM-STRING and TO-STRING.")
934
935 (define-special *buffer-readtable*
936 "Readtable associated with the current buffer")
937
938 (defmacro with-buffer-syntax ((&rest _) &body body)
939 "Execute BODY with appropriate *package* and *readtable* bindings.
940
941 This should be used for code that is conceptionally executed in an
942 Emacs buffer."
943 (destructuring-bind () _
944 `(let ((*package* *buffer-package*))
945 ;; Don't shadow *readtable* unnecessarily because that prevents
946 ;; the user from assigning to it.
947 (if (eq *readtable* *buffer-readtable*)
948 #1=(call-with-syntax-hooks (lambda () ,@body))
949 (let ((*readtable* *buffer-readtable*))
950 #1#)))))
951
952 (defun from-string (string)
953 "Read string in the *BUFFER-PACKAGE*"
954 (with-buffer-syntax ()
955 (let ((*read-suppress* nil))
956 (read-from-string string))))
957
958 ;; FIXME: deal with #\| etc. hard to do portably.
959 (defun tokenize-symbol (string)
960 (let ((package (let ((pos (position #\: string)))
961 (if pos (subseq string 0 pos) nil)))
962 (symbol (let ((pos (position #\: string :from-end t)))
963 (if pos (subseq string (1+ pos)) string)))
964 (internp (search "::" string)))
965 (values symbol package internp)))
966
967 ;; FIXME: Escape chars are ignored
968 (defun casify (string)
969 "Convert string accoring to readtable-case."
970 (ecase (readtable-case *readtable*)
971 (:preserve
972 string)
973 (:upcase
974 (string-upcase string))
975 (:downcase
976 (string-downcase string))
977 (:invert
978 (multiple-value-bind (lower upper) (determine-case string)
979 (cond ((and lower upper) string)
980 (lower (string-upcase string))
981 (upper (string-downcase string))
982 (t string))))))
983
984 (defun parse-symbol (string &optional (package *package*))
985 "Find the symbol named STRING.
986 Return the symbol and a flag indicating whether the symbol was found."
987 (multiple-value-bind (sname pname) (tokenize-symbol string)
988 (find-symbol (casify sname)
989 (cond ((string= pname "") "KEYWORD")
990 (pname (casify pname))
991 (t package)))))
992
993 (defun parse-symbol-or-lose (string &optional (package *package*))
994 (multiple-value-bind (symbol status) (parse-symbol string package)
995 (if status
996 (values symbol status)
997 (error "Unknown symbol: ~A [in ~A]" string package))))
998
999 ;; FIXME: interns the name
1000 (defun parse-package (string)
1001 "Find the package named STRING.
1002 Return the package or nil."
1003 (multiple-value-bind (name pos)
1004 (if (zerop (length string))
1005 (values :|| 0)
1006 (let ((*package* keyword-package))
1007 (ignore-errors (read-from-string string))))
1008 (if (and (or (keywordp name) (stringp name))
1009 (= (length string) pos))
1010 (find-package name))))
1011
1012 (defun to-string (string)
1013 "Write string in the *BUFFER-PACKAGE*."
1014 (with-buffer-syntax ()
1015 (prin1-to-string string)))
1016
1017 (defun guess-package-from-string (name &optional (default-package *package*))
1018 (or (and name
1019 (or (parse-package name)
1020 (find-package (string-upcase name))
1021 (parse-package (substitute #\- #\! name))))
1022 default-package))
1023
1024 (defvar *readtable-alist* (default-readtable-alist)
1025 "An alist mapping package names to readtables.")
1026
1027 (defun guess-buffer-readtable (package-name &optional (default *readtable*))
1028 (let ((package (guess-package-from-string package-name)))
1029 (if package
1030 (or (cdr (assoc (package-name package) *readtable-alist*
1031 :test #'string=))
1032 default)
1033 default)))
1034
1035 (defun valid-operator-name-p (string)
1036 "Test if STRING names a function, macro, or special-operator."
1037 (let ((symbol (parse-symbol string)))
1038 (or (fboundp symbol)
1039 (macro-function symbol)
1040 (special-operator-p symbol))))
1041
1042 (defslimefun arglist-for-echo-area (names)
1043 "Return the arglist for the first function, macro, or special-op in NAMES."
1044 (handler-case
1045 (with-buffer-syntax ()
1046 (let ((name (find-if #'valid-operator-name-p names)))
1047 (if name (format-arglist-for-echo-area (parse-symbol name) name))))
1048 (error (cond)
1049 (format nil "ARGLIST: ~A" cond))))
1050
1051 (defun format-arglist-for-echo-area (symbol name)
1052 "Return SYMBOL's arglist as string for display in the echo area.
1053 Use the string NAME as operator name."
1054 (let ((arglist (arglist symbol)))
1055 (etypecase arglist
1056 ((member :not-available)
1057 nil)
1058 (list
1059 (arglist-to-string (cons name arglist)
1060 (symbol-package symbol))))))
1061
1062 (defun arglist-to-string (arglist package)
1063 "Print the list ARGLIST for display in the echo area.
1064 The argument names are printed without package qualifiers and
1065 pretty printing of (function foo) as #'foo is suppressed."
1066 (etypecase arglist
1067 (null "()")
1068 (cons
1069 (with-output-to-string (*standard-output*)
1070 (with-standard-io-syntax
1071 (let ((*package* package)
1072 (*print-case* :downcase)
1073 (*print-pretty* t)
1074 (*print-circle* nil)
1075 (*print-readably* nil)
1076 (*print-level* 10)
1077 (*print-length* 20))
1078 (pprint-logical-block (nil nil :prefix "(" :suffix ")")
1079 (loop
1080 (let ((arg (pop arglist)))
1081 (etypecase arg
1082 (symbol (princ arg))
1083 (string (princ arg))
1084 (cons (pprint-logical-block (nil nil :prefix "(" :suffix ")")
1085 (princ (car arg))
1086 (write-char #\space)
1087 (pprint-fill *standard-output* (cdr arg) nil))))
1088 (when (null arglist) (return))
1089 (write-char #\space)
1090 (pprint-newline :fill))))))))))
1091
1092 (defun test-print-arglist (list string)
1093 (string= (arglist-to-string list (find-package :swank)) string))
1094
1095 ;; Should work:
1096 (assert (test-print-arglist '(function cons) "(function cons)"))
1097 (assert (test-print-arglist '(quote cons) "(quote cons)"))
1098 (assert (test-print-arglist '(&key (function #'+)) "(&key (function #'+))"))
1099 ;; Expected failure:
1100 ;; (assert (test-print-arglist '(&key ((function f))) "(&key ((function f)))"))
1101
1102 (defslimefun variable-desc-for-echo-area (variable-name)
1103 "Return a short description of VARIABLE-NAME, or NIL."
1104 (with-buffer-syntax ()
1105 (let ((sym (parse-symbol variable-name)))
1106 (if (and sym (boundp sym))
1107 (let ((*print-pretty* nil) (*print-level* 4)
1108 (*print-length* 10) (*print-circle* t))
1109 (format nil "~A => ~A" sym (symbol-value sym)))))))
1110
1111 (defslimefun arglist-for-insertion (name)
1112 (with-buffer-syntax ()
1113 (cond ((valid-operator-name-p name)
1114 (let ((arglist (arglist (parse-symbol name))))
1115 (etypecase arglist
1116 ((member :not-available)
1117 " <not available>")
1118 (list
1119 (arglist-to-string arglist *buffer-package*)))))
1120 (t
1121 " <not available>"))))
1122
1123
1124 ;;;; Evaluation
1125
1126 (defun eval-in-emacs (form)
1127 "Execute FORM in Emacs."
1128 (destructuring-bind (fn &rest args) form
1129 (send-to-emacs `(:%apply ,(string-downcase (string fn)) ,args))))
1130
1131 (defun guess-buffer-package (string)
1132 "Return a package for STRING.
1133 Fall back to the current package if no such package exists."
1134 (or (guess-package-from-string string nil)
1135 *package*))
1136
1137 (defun eval-for-emacs (form buffer-package id)
1138 "Bind *BUFFER-PACKAGE* BUFFER-PACKAGE and evaluate FORM.
1139 Return the result to the continuation ID.
1140 Errors are trapped and invoke our debugger."
1141 (let ((*debugger-hook* #'swank-debugger-hook))
1142 (let (ok result)
1143 (unwind-protect
1144 (let ((*buffer-package* (guess-buffer-package buffer-package))
1145 (*buffer-readtable* (guess-buffer-readtable buffer-package)))
1146 (assert (packagep *buffer-package*))
1147 (assert (readtablep *buffer-readtable*))
1148 (setq result (eval form))
1149 (force-output)
1150 (run-hook *pre-reply-hook*)
1151 (setq ok t))
1152 (force-user-output)
1153 (send-to-emacs `(:return ,(current-thread)
1154 ,(if ok `(:ok ,result) '(:abort))
1155 ,id))))))
1156
1157 (defun format-values-for-echo-area (values)
1158 (with-buffer-syntax ()
1159 (let ((*print-readably* nil))
1160 (cond ((null values) "; No value")
1161 ((and (null (cdr values)) (integerp (car values)))
1162 (let ((i (car values)))
1163 (format nil "~D (#x~X, #o~O, #b~B)" i i i i)))
1164 (t (format nil "~{~S~^, ~}" values))))))
1165
1166 (defslimefun interactive-eval (string)
1167 (with-buffer-syntax ()
1168 (let ((values (multiple-value-list (eval (read-from-string string)))))
1169 (fresh-line)
1170 (force-output)
1171 (format-values-for-echo-area values))))
1172
1173 (defun eval-region (string &optional package-update-p)
1174 "Evaluate STRING and return the result.
1175 If PACKAGE-UPDATE-P is non-nil, and evaluation causes a package
1176 change, then send Emacs an update."
1177 (let (- values)
1178 (unwind-protect
1179 (with-input-from-string (stream string)
1180 (loop for form = (read stream nil stream)
1181 until (eq form stream)
1182 do (progn
1183 (setq - form)
1184 (setq values (multiple-value-list (eval form)))
1185 (force-output))
1186 finally (progn
1187 (fresh-line)
1188 (force-output)
1189 (return (values values -)))))
1190 (when (and package-update-p (not (eq *package* *buffer-package*)))
1191 (send-to-emacs
1192 (list :new-package (package-name *package*) (package-string-for-prompt *package*)))))))
1193
1194 (defun package-string-for-prompt (package)
1195 "Return the shortest nickname (or canonical name) of PACKAGE."
1196 (or (canonical-package-nickname package)
1197 (auto-abbreviated-package-name package)
1198 (shortest-package-nickname package)))
1199
1200 (defun canonical-package-nickname (package)
1201 "Return the canonical package nickname, if any, of PACKAGE."
1202 (cdr (assoc (package-name package) *canonical-packge-nicknames* :test #'string=)))
1203
1204 (defun auto-abbreviated-package-name (package)
1205 "Return an abbreviated 'name' for PACKAGE. N.B. this is not an actual package name or nickname."
1206 (when *auto-abbreviate-dotted-packages*
1207 (let ((last-dot (position #\. (package-name package) :from-end t)))
1208 (when last-dot (subseq (package-name package) (1+ last-dot))))))
1209
1210 (defun shortest-package-nickname (package)
1211 "Return the shortest nickname (or canonical name) of PACKAGE."
1212 (loop for name in (cons (package-name package) (package-nicknames package))
1213 for shortest = name then (if (< (length name) (length shortest))
1214 name
1215 shortest)
1216 finally (return shortest)))
1217
1218
1219 (defslimefun interactive-eval-region (string)
1220 (with-buffer-syntax ()
1221 (format-values-for-echo-area (eval-region string))))
1222
1223 (defslimefun re-evaluate-defvar (form)
1224 (with-buffer-syntax ()
1225 (let ((form (read-from-string form)))
1226 (destructuring-bind (dv name &optional value doc) form
1227 (declare (ignore value doc))
1228 (assert (eq dv 'defvar))
1229 (makunbound name)
1230 (prin1-to-string (eval form))))))
1231
1232 (defun foo (&key ((:x a)) ((y b)))
1233 (cons a b))
1234
1235 (foo 'y 10)
1236
1237
1238 (defmacro define-printer-variables (prefix &body vars)
1239 "Define a group of printer variables.
1240
1241 The elements of VARS can have the form: NAME or (NAME INIT). NAME
1242 must be one of the symbols (pretty circle case escape right-margin
1243 level length). PREFIX and NAME are concatenated, like *PREFIX-NAME*,
1244 to form the names of the actual variable. The new variable is
1245 initialized with INIT or, if INIT was not specified, with the value of
1246 the corresponding printer variable.
1247
1248 At macroexpansion time the names of the created symbols are stored in
1249 the 'printer-variables property of PREFIX."
1250 (let ((valid-names '(level length circle readably pretty
1251 case escape right-margin)))
1252 (labels ((symconc (prefix suffix)
1253 (intern (format nil "*~A-~A*" (string prefix) (string suffix))
1254 :swank))
1255 (parse (var)
1256 (destructuring-bind (name init &optional doc)
1257 (if (consp var) var (list var (symconc 'print var)))
1258 (unless (member name valid-names)
1259 (error "Not a printer variable: ~S" var))
1260 (list name init doc))))
1261 (let* ((bindings (mapcar #'parse vars)))
1262 (setf (get prefix 'printer-variables)
1263 (loop for (name) in bindings
1264 collect `(,(symconc 'print name) ,(symconc prefix name))))
1265 `(progn
1266 ,@(loop for (name init doc) in bindings
1267 collect `(defvar ,(symconc prefix name) ,init
1268 ,@(if doc doc))))))))
1269
1270 (define-printer-variables swank-pprint
1271 circle level length case right-margin escape)
1272
1273 (defmacro with-printer-settings (group &body body)
1274 "Rebind the pringer variables in GROUP and execute body.
1275 See `define-printer-variables'."
1276 (let ((bindings (get group 'printer-variables)))
1277 (when (not bindings) (warn "No printer variables for: ~S" group))
1278 `(let ,bindings ,@body)))
1279
1280 (defun swank-pprint (list)
1281 "Bind some printer variables and pretty print each object in LIST."
1282 (with-buffer-syntax ()
1283 (with-printer-settings swank-pprint
1284 (let ((*print-pretty* t))
1285 (cond ((null list) "; No value")
1286 (t (with-output-to-string (*standard-output*)
1287 (dolist (o list)
1288 (pprint o)
1289 (terpri)))))))))
1290
1291 (defslimefun pprint-eval (string)
1292 (with-buffer-syntax ()
1293 (swank-pprint (multiple-value-list (eval (read-from-string string))))))
1294
1295 (defslimefun set-package (package)
1296 "Set *package* to PACKAGE.
1297 Return its name and the string to use in the prompt."
1298 (let ((p (setq *package* (guess-package-from-string package))))
1299 (list (package-name p) (package-string-for-prompt p))))
1300
1301 (defslimefun listener-eval (string)
1302 (clear-user-input)
1303 (with-buffer-syntax ()
1304 (multiple-value-bind (values last-form) (eval-region string t)
1305 (setq +++ ++ ++ + + last-form
1306 *** ** ** * * (car values)
1307 /// // // / / values)
1308 (cond ((null values) "; No value")
1309 (t
1310 (format nil "~{~S~^~%~}" values))))))
1311
1312 (defslimefun ed-in-emacs (&optional what)
1313 "Edit WHAT in Emacs.
1314
1315 WHAT can be:
1316 A filename (string),
1317 A list (FILENAME LINE [COLUMN]),
1318 A function name (symbol),
1319 nil."
1320 (let ((target
1321 (cond ((and (listp what) (pathnamep (first what)))
1322 (cons (canonicalize-filename (car what)) (cdr what)))
1323 ((pathnamep what)
1324 (canonicalize-filename what))
1325 (t what))))
1326 (send-oob-to-emacs `(:ed ,target))))
1327
1328
1329 ;;;; Debugger
1330
1331 (defun swank-debugger-hook (condition hook)
1332 "Debugger function for binding *DEBUGGER-HOOK*.
1333 Sends a message to Emacs declaring that the debugger has been entered,
1334 then waits to handle further requests from Emacs. Eventually returns
1335 after Emacs causes a restart to be invoked."
1336 (declare (ignore hook))
1337 (flet ((debug-it () (debug-in-emacs condition)))
1338 (cond (*emacs-connection*
1339 (debug-it))
1340 ((default-connection)
1341 (with-connection ((default-connection))
1342 (debug-in-emacs condition))))))
1343
1344 (defvar *global-debugger* t
1345 "Non-nil means the Swank debugger hook will be installed globally.")
1346
1347 (add-hook *new-connection-hook* 'install-debugger)
1348 (defun install-debugger (connection)
1349 (declare (ignore connection))
1350 (when *global-debugger*
1351 (setq *debugger-hook* #'swank-debugger-hook)))
1352
1353 ;;;;; Debugger loop
1354 ;;;
1355 ;;; These variables are dynamically bound during debugging.
1356 ;;;
1357 (defvar *swank-debugger-condition* nil
1358 "The condition being debugged.")
1359
1360 (defvar *sldb-level* 0
1361 "The current level of recursive debugging.")
1362
1363 (defvar *sldb-initial-frames* 20
1364 "The initial number of backtrace frames to send to Emacs.")
1365
1366 (defvar *sldb-restarts* nil
1367 "The list of currenlty active restarts.")
1368
1369 ;; A set of printer variables used in the debugger.
1370 (define-printer-variables sldb-print
1371 (pretty nil)
1372 (level 4)
1373 (length 10)
1374 (circle t)
1375 (readably nil))
1376
1377 (defun debug-in-emacs (condition)
1378 (let ((*swank-debugger-condition* condition)
1379 (*sldb-restarts* (compute-restarts condition))
1380 (*package* (or (and (boundp '*buffer-package*)
1381 (symbol-value '*buffer-package*))
1382 *package*))
1383 (*sldb-level* (1+ *sldb-level*))
1384 (*swank-state-stack* (cons :swank-debugger-hook *swank-state-stack*)))
1385 (force-user-output)
1386 (with-printer-settings sldb-print
1387 (call-with-debugging-environment
1388 (lambda () (sldb-loop *sldb-level*))))))
1389
1390 (defun sldb-loop (level)
1391 (unwind-protect
1392 (catch 'sldb-enter-default-debugger
1393 (send-to-emacs
1394 (list* :debug (current-thread) *sldb-level*
1395 (debugger-info-for-emacs 0 *sldb-initial-frames*)))
1396 (loop (catch 'sldb-loop-catcher
1397 (with-simple-restart (abort "Return to sldb level ~D." level)
1398 (send-to-emacs (list :debug-activate (current-thread)
1399 *sldb-level*))
1400 (handler-bind ((sldb-condition #'handle-sldb-condition))
1401 (read-from-emacs))))))
1402 (send-to-emacs `(:debug-return ,(current-thread) ,level))))
1403
1404 (defun handle-sldb-condition (condition)
1405 "Handle an internal debugger condition.
1406 Rather than recursively debug the debugger (a dangerous idea!), these
1407 conditions are simply reported."
1408 (let ((real-condition (original-condition condition)))
1409 (send-to-emacs `(:debug-condition ,(current-thread)
1410 ,(princ-to-string real-condition))))
1411 (throw 'sldb-loop-catcher nil))
1412
1413 (defun safe-condition-message (condition)
1414 "Safely print condition to a string, handling any errors during
1415 printing."
1416 (let ((*print-pretty* t))
1417 (handler-case
1418 (format-sldb-condition condition)
1419 (error (cond)
1420 ;; Beware of recursive errors in printing, so only use the condition
1421 ;; if it is printable itself:
1422 (format nil "Unable to display error condition~@[: ~A~]"
1423 (ignore-errors (princ-to-string cond)))))))
1424
1425 (defun debugger-condition-for-emacs ()
1426 (list (safe-condition-message *swank-debugger-condition*)
1427 (format nil " [Condition of type ~S]"
1428 (type-of *swank-debugger-condition*))
1429 (condition-references *swank-debugger-condition*)
1430 (condition-extras *swank-debugger-condition*)))
1431
1432 (defun format-restarts-for-emacs ()
1433 "Return a list of restarts for *swank-debugger-condition* in a
1434 format suitable for Emacs."
1435 (loop for restart in *sldb-restarts*
1436 collect (list (princ-to-string (restart-name restart))
1437 (princ-to-string restart))))
1438
1439 (defun frame-for-emacs (n frame)
1440 (let* ((label (format nil " ~D: " n))
1441 (string (with-output-to-string (stream)
1442 (princ label stream)
1443 (print-frame frame stream))))
1444 (subseq string (length label))))
1445
1446 ;;;;; SLDB entry points
1447
1448 (defslimefun sldb-break-with-default-debugger ()
1449 "Invoke the default debugger by returning from our debugger-loop."
1450 (throw 'sldb-enter-default-debugger nil))
1451
1452 (defslimefun backtrace (start end)
1453 "Return a list ((I FRAME) ...) of frames from START to END.
1454 I is the frame's integer index and FRAME a string describing it."
1455 (loop for frame in (compute-backtrace start end)
1456 for i from start
1457 collect (list i (frame-for-emacs i frame))))
1458
1459 (defslimefun debugger-info-for-emacs (start end)
1460 "Return debugger state, with stack frames from START to END.
1461 The result is a list:
1462 (condition ({restart}*) ({stack-frame}*)
1463 where
1464 condition ::= (description type [extra])
1465 restart ::= (name description)
1466 stack-frame ::= (number description)
1467 extra ::= (:references
1468 condition---a pair of strings: message, and type. If show-source is
1469 not nil it is a frame number for which the source should be displayed.
1470
1471 restart---a pair of strings: restart name, and description.
1472
1473 stack-frame---a number from zero (the top), and a printed
1474 representation of the frame's call.
1475
1476 Below is an example return value. In this case the condition was a
1477 division by zero (multi-line description), and only one frame is being
1478 fetched (start=0, end=1).
1479
1480 ((\"Arithmetic error DIVISION-BY-ZERO signalled.
1481 Operation was KERNEL::DIVISION, operands (1 0).\"
1482 \"[Condition of type DIVISION-BY-ZERO]\")
1483 ((\"ABORT\" \"Return to Slime toplevel.\")
1484 (\"ABORT\" \"Return to Top-Level.\"))
1485 ((0 \"(KERNEL::INTEGER-/-INTEGER 1 0)\")))"
1486 (list (debugger-condition-for-emacs)
1487 (format-restarts-for-emacs)
1488 (backtrace start end)))
1489
1490 (defun nth-restart (index)
1491 (nth index *sldb-restarts*))
1492
1493 (defslimefun invoke-nth-restart (index)
1494 (invoke-restart-interactively (nth-restart index)))
1495
1496 (defslimefun sldb-abort ()
1497 (invoke-restart (find 'abort *sldb-restarts* :key #'restart-name)))
1498
1499 (defslimefun sldb-continue ()
1500 (continue))
1501
1502 (defslimefun throw-to-toplevel ()
1503 "Use THROW to abort an RPC from Emacs.
1504 If we are not evaluating an RPC then ABORT instead."
1505 (ignore-errors (throw 'slime-toplevel nil))
1506 ;; If we get here then there was no catch. Try aborting as a fallback.
1507 ;; That makes the 'q' command in SLDB safer to use with threads.
1508 (abort))
1509
1510 (defslimefun invoke-nth-restart-for-emacs (sldb-level n)
1511 "Invoke the Nth available restart.
1512 SLDB-LEVEL is the debug level when the request was made. If this
1513 has changed, ignore the request."
1514 (when (= sldb-level *sldb-level*)
1515 (invoke-nth-restart n)))
1516
1517 (defslimefun eval-string-in-frame (string index)
1518 (to-string (eval-in-frame (from-string string) index)))
1519
1520 (defslimefun pprint-eval-string-in-frame (string index)
1521 (swank-pprint
1522 (multiple-value-list
1523 (eval-in-frame (from-string string) index))))
1524
1525 (defslimefun frame-locals-for-emacs (index)
1526   "Return a list of property lists ((&key NAME ID VALUE) ...)
1527 describing the local variables in the frame INDEX."
1528 (let* ((*package* (or (frame-package index) *package*)))
1529 (mapcar (lambda (frame-locals)
1530 (destructuring-bind (&key name id value) frame-locals
1531 (list :name (prin1-to-string name) :id id
1532 :value (to-string value))))
1533 (frame-locals index))))
1534
1535 (defslimefun frame-catch-tags-for-emacs (frame-index)
1536 (mapcar #'to-string (frame-catch-tags frame-index)))
1537
1538 (defslimefun sldb-disassemble (index)
1539 (with-output-to-string (*standard-output*)
1540 (disassemble-frame index)))
1541
1542 (defslimefun sldb-return-from-frame (index string)
1543 (let ((form (from-string string)))
1544 (to-string (multiple-value-list (return-from-frame index form)))))
1545
1546 (defslimefun sldb-break (name)
1547 (with-buffer-syntax ()
1548 (sldb-break-at-start (read-from-string name))))
1549
1550
1551 ;;;; Compilation Commands.
1552
1553 (defvar *compiler-notes* '()
1554 "List of compiler notes for the last compilation unit.")
1555
1556 (defun clear-compiler-notes ()
1557 (setf *compiler-notes* '()))
1558
1559 (defun canonicalize-filename (filename)
1560 (namestring (truename filename)))
1561
1562 (defslimefun compiler-notes-for-emacs ()
1563 "Return the list of compiler notes for the last compilation unit."
1564 (reverse *compiler-notes*))
1565
1566 (defun measure-time-interval (fn)
1567 "Call FN and return the first return value and the elapsed time.
1568 The time is measured in microseconds."
1569 (declare (type function fn))
1570 (let ((before (get-internal-real-time)))
1571 (values
1572 (funcall fn)
1573 (* (- (get-internal-real-time) before)
1574 (/ 1000000 internal-time-units-per-second)))))
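;; Usage sketch; the first value is FN's first return value, the second
;; the elapsed wall-clock time in microseconds (it will vary, of course):
;;   (measure-time-interval (lambda () (+ 1 2)))
;;   ;; => 3, 17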
1575
1576 (defun record-note-for-condition (condition)
1577 "Record a note for a compiler-condition."
1578 (push (make-compiler-note condition) *compiler-notes*))
1579
1580 (defun make-compiler-note (condition)
1581 "Make a compiler note data structure from a compiler-condition."
1582 (declare (type compiler-condition condition))
1583 (list* :message (message condition)
1584 :severity (severity condition)
1585 :location (location condition)
1586 :references (references condition)
1587 (let ((s (short-message condition)))
1588 (if s (list :short-message s)))))
1589
1590 (defun swank-compiler (function)
1591 (clear-compiler-notes)
1592 (with-simple-restart (abort "Abort SLIME compilation.")
1593 (multiple-value-bind (result usecs)
1594 (handler-bind ((compiler-condition #'record-note-for-condition))
1595 (measure-time-interval function))
1596 (list (to-string result)
1597 (format nil "~,2F" (/ usecs 1000000.0))))))
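;; The list sent back to Emacs thus contains two strings: the printed
;; result of FUNCTION and the elapsed compilation time in seconds,
;; e.g. ("T" "0.32") (illustrative values).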
1598
1599 (defslimefun compile-file-for-emacs (filename load-p)
1600 "Compile FILENAME and, when LOAD-P, load the result.
1601 Record compiler notes signalled as `compiler-condition's."
1602 (with-buffer-syntax ()
1603 (swank-compiler (lambda () (swank-compile-file filename load-p)))))
1604
1605 (defslimefun compile-string-for-emacs (string buffer position directory)
1606   "Compile STRING (excerpted from BUFFER at POSITION).
1607 Record compiler notes signalled as `compiler-condition's."
1608 (with-buffer-syntax ()
1609 (swank-compiler
1610 (lambda ()
1611 (swank-compile-string string :buffer buffer :position position
1612 :directory directory)))))
1613
1614 (defslimefun operate-on-system-for-emacs (system-name operation &rest keywords)
1615 "Compile and load SYSTEM using ASDF.
1616 Record compiler notes signalled as `compiler-condition's."
1617 (swank-compiler
1618 (lambda ()
1619 (apply #'operate-on-system system-name operation keywords))))
1620
1621 (defun asdf-central-registry ()
1622 (when (find-package :asdf)
1623 (symbol-value (find-symbol (string :*central-registry*) :asdf))))
1624
1625 (defslimefun list-all-systems-in-central-registry ()
1626 "Returns a list of all systems in ASDF's central registry."
1627 (loop for dir in (asdf-central-registry)
1628 for defaults = (eval dir)
1629 when defaults
1630 nconc (mapcar #'file-namestring
1631 (directory
1632 (make-pathname :defaults defaults
1633 :version :newest
1634 :type "asd"
1635 :name :wild
1636 :case :local)))))
1637
1638 (defun file-newer-p (new-file old-file)
1639 "Returns true if NEW-FILE is newer than OLD-FILE."
1640 (> (file-write-date new-file) (file-write-date old-file)))
1641
1642 (defun requires-compile-p (source-file)
1643 (let ((fasl-file (probe-file (compile-file-pathname source-file))))
1644 (or (not fasl-file)
1645 (file-newer-p source-file fasl-file))))
1646
1647 (defslimefun compile-file-if-needed (filename loadp)
1648 (cond ((requires-compile-p filename)
1649 (compile-file-for-emacs filename loadp))
1650 (loadp
1651 (load (compile-file-pathname filename))
1652 nil)))
1653
1654
1655 ;;;; Loading
1656
1657 (defslimefun load-file (filename)
1658 (to-string (load filename)))
1659
1660 (defslimefun load-file-set-package (filename &optional package)
1661 (load-file filename)
1662 (if package
1663 (set-package package)))
1664
1665
1666 ;;;; Macroexpansion
1667
1668 (defun apply-macro-expander (expander string)
1669 (declare (type function expander))
1670 (with-buffer-syntax ()
1671 (swank-pprint (list (funcall expander (from-string string))))))
1672
1673 (defslimefun swank-macroexpand-1 (string)
1674 (apply-macro-expander #'macroexpand-1 string))
1675
1676 (defslimefun swank-macroexpand (string)
1677 (apply-macro-expander #'macroexpand string))
1678
1679 (defslimefun swank-macroexpand-all (string)
1680 (apply-macro-expander #'macroexpand-all string))
1681
1682 (defslimefun disassemble-symbol (name)
1683 (with-buffer-syntax ()
1684 (with-output-to-string (*standard-output*)
1685 (let ((*print-readably* nil))
1686 (disassemble (fdefinition (from-string name)))))))
1687
1688
1689 ;;;; Basic completion
1690
1691 (defslimefun completions (string default-package-name)
1692 "Return a list of completions for a symbol designator STRING.
1693
1694 The result is the list (COMPLETION-SET
1695 COMPLETED-PREFIX). COMPLETION-SET is the list of all matching
1696 completions, and COMPLETED-PREFIX is the best (partial)
1697 completion of the input string.
1698
1699 If STRING is package qualified the result list will also be
1700 qualified. If string is non-qualified the result strings are
1701 also not qualified and are considered relative to
1702 DEFAULT-PACKAGE-NAME.
1703
1704 The way symbols are matched depends on the symbol designator's
1705 format. The cases are as follows:
1706 FOO - Symbols with matching prefix and accessible in the buffer package.
1707 PKG:FOO - Symbols with matching prefix and external in package PKG.
1708 PKG::FOO - Symbols with matching prefix and accessible in package PKG."
1709 (let ((completion-set (completion-set string default-package-name
1710 #'compound-prefix-match)))
1711 (list completion-set (longest-completion completion-set))))
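;; Illustrative example; the completion set depends on the packages
;; present in the image and on the readtable case:
;;   (completions "m-v-b" "COMMON-LISP-USER")
;;   ;; => (("multiple-value-bind") "multiple-value-bind")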
1712
1713 (defslimefun simple-completions (string default-package-name)
1714 "Return a list of completions for a symbol designator STRING."
1715 (let ((completion-set (completion-set string default-package-name
1716 #'prefix-match-p)))
1717 (list completion-set (longest-common-prefix completion-set))))
1718
1719 ;;;;; Find completion set
1720
1721 (defun completion-set (string default-package-name matchp)
1722 "Return the set of completion-candidates as strings."
1723 (multiple-value-bind (name package-name package internal-p)
1724 (parse-completion-arguments string default-package-name)
1725 (let* ((symbols (and package
1726 (find-matching-symbols name
1727 package
1728 (and (not internal-p)
1729 package-name)
1730 matchp)))
1731 (packs (and (not package-name)
1732 (find-matching-packages name matchp)))
1733 (converter (output-case-converter name))
1734 (strings
1735 (mapcar converter
1736 (nconc (mapcar #'symbol-name symbols) packs))))
1737 (format-completion-set strings internal-p package-name))))
1738
1739 (defun find-matching-symbols (string package external test)
1740 "Return a list of symbols in PACKAGE matching STRING.
1741 TEST is called with two strings. If EXTERNAL is true, only external
1742 symbols are returned."
1743 (let ((completions '())
1744 (converter (output-case-converter string)))
1745 (flet ((symbol-matches-p (symbol)
1746 (and (or (not external)
1747 (symbol-external-p symbol package))
1748 (funcall test string
1749 (funcall converter (symbol-name symbol))))))
1750 (do-symbols (symbol package)
1751 (when (symbol-matches-p symbol)
1752 (push symbol completions))))
1753 (remove-duplicates completions)))
1754
1755 (defun symbol-external-p (symbol &optional (package (symbol-package symbol)))
1756 "True if SYMBOL is external in PACKAGE.
1757 If PACKAGE is not specified, the home package of SYMBOL is used."
1758 (multiple-value-bind (_ status)
1759 (find-symbol (symbol-name symbol) (or package (symbol-package symbol)))
1760 (declare (ignore _))
1761 (eq status :external)))
1762
1763 (defun find-matching-packages (name matcher)
1764 "Return a list of package names matching NAME with MATCHER.
1765 MATCHER is a two-argument predicate."
1766 (let ((to-match (string-upcase name)))
1767 (remove-if-not (lambda (x) (funcall matcher to-match x))
1768 (mapcar (lambda (pkgname)
1769 (concatenate 'string pkgname ":"))
1770 (mapcar #'package-name (list-all-packages))))))
1771
1772 (defun parse-completion-arguments (string default-package-name)
1773 "Parse STRING as a symbol designator.
1774 Return these values:
1775 SYMBOL-NAME
1776 PACKAGE-NAME, or nil if the designator does not include an explicit package.
1777 PACKAGE, the package to complete in
1778 INTERNAL-P, if the symbol is qualified with `::'."
1779 (multiple-value-bind (name package-name internal-p)
1780 (tokenize-symbol string)
1781 (let ((package (carefully-find-package package-name default-package-name)))
1782 (values name package-name package internal-p))))
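;; Illustrative example, assuming a SWANK package is present:
;;   (parse-completion-arguments "swank::foo" "COMMON-LISP-USER")
;;   ;; => "foo", "swank", #<PACKAGE "SWANK">, T
;;   (parse-completion-arguments "foo" "COMMON-LISP-USER")
;;   ;; => "foo", NIL, #<PACKAGE "COMMON-LISP-USER">, NIL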
1783
1784 (defun carefully-find-package (name default-package-name)
1785 "Find the package with name NAME, or DEFAULT-PACKAGE-NAME, or the
1786 *buffer-package*. NAME and DEFAULT-PACKAGE-NAME can be nil."
1787 (let ((string (cond ((equal name "") "KEYWORD")
1788 (t (or name default-package-name)))))
1789 (if string
1790 (guess-package-from-string string nil)
1791 *buffer-package*)))
1792
1793 ;;;;; Format completion results
1794 ;;;
1795 ;;; We try to format results in the case as inputs. If you complete
1796 ;;; `FOO' then your result should include `FOOBAR' rather than
1797 ;;; `foobar'.
1798
1799 (defun format-completion-set (strings internal-p package-name)
1800 "Format a set of completion strings.
1801 Returns a list of completions with package qualifiers if needed."
1802 (mapcar (lambda (string)
1803 (format-completion-result string internal-p package-name))
1804 (sort strings #'string<)))
1805
1806 (defun format-completion-result (string internal-p package-name)
1807 (let ((prefix (cond (internal-p (format nil "~A::" package-name))
1808 (package-name (format nil "~A:" package-name))
1809 (t ""))))
1810 (values (concatenate 'string prefix string)
1811 (length prefix))))
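;; Illustrative examples:
;;   (format-completion-result "foo" t "swank")    ; => "swank::foo", 7
;;   (format-completion-result "foo" nil "swank")  ; => "swank:foo", 6
;;   (format-completion-result "foo" nil nil)      ; => "foo", 0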
1812
1813 (defun output-case-converter (input)
1814 "Return a function to case convert strings for output.
1815 INPUT is used to guess the preferred case."
1816 (ecase (readtable-case *readtable*)
1817 (:upcase (if (some #'lower-case-p input) #'string-downcase #'identity))
1818 (:invert (lambda (output)
1819 (multiple-value-bind (lower upper) (determine-case output)
1820 (cond ((and lower upper) output)
1821 (lower (string-upcase output))
1822 (upper (string-downcase output))
1823 (t output)))))
1824 (:downcase (if (some #'upper-case-p input) #'string-upcase #'identity))
1825 (:preserve #'identity)))
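;; Illustrative examples, assuming the standard :UPCASE readtable:
;;   (funcall (output-case-converter "foo") "CAR")  ; => "car"
;;   (funcall (output-case-converter "FOO") "CAR")  ; => "CAR"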
1826
1827 (defun determine-case (string)
1828 "Return two booleans LOWER and UPPER indicating whether STRING
1829 contains lower or upper case characters."
1830 (values (some #'lower-case-p string)
1831 (some #'upper-case-p string)))
1832
1833
1834 ;;;;; Compound-prefix matching
1835
1836 (defun compound-prefix-match (prefix target)
1837 "Return true if PREFIX is a compound-prefix of TARGET.
1838 Viewing each of PREFIX and TARGET as a series of substrings delimited
1839 by hyphens, if each substring of PREFIX is a prefix of the
1840 corresponding substring in TARGET then we call PREFIX a
1841 compound-prefix of TARGET.
1842
1843 Examples:
1844 \(compound-prefix-match \"foo\" \"foobar\") => t
1845 \(compound-prefix-match \"m--b\" \"multiple-value-bind\") => t
1846 \(compound-prefix-match \"m-v-c\" \"multiple-value-bind\") => NIL"
1847 (declare (type simple-string prefix target))
1848 (loop for ch across prefix
1849 with tpos = 0
1850 always (and (< tpos (length target))
1851 (if (char= ch #\-)
1852 (setf tpos (position #\- target :start tpos))
1853 (char= ch (aref target tpos))))
1854 do (incf tpos)))
1855
1856 (defun prefix-match-p (prefix string)
1857 "Return true if PREFIX is a prefix of STRING."
1858 (not (mismatch prefix string :end2 (min (length string) (length prefix)))))
1859
1860
1861 ;;;;; Extending the input string by completion
1862
1863 (defun longest-completion (completions)
1864 "Return the longest prefix for all COMPLETIONS.
1865 COMPLETIONS is a list of strings."
1866 (untokenize-completion
1867 (mapcar #'longest-common-prefix
1868 (transpose-lists (mapcar #'tokenize-completion completions)))))
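;; Illustrative example:
;;   (longest-completion '("multiple-value-bind" "multiple-value-list"))
;;   ;; => "multiple-value-"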
1869
1870 (defun tokenize-completion (string)
1871 "Return all substrings of STRING delimited by #\-."
1872 (loop with end
1873 for start = 0 then (1+ end)
1874 until (> start (length string))
1875 do (setq end (or (position #\- string :start start) (length string)))
1876 collect (subseq string start end)))
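;; Illustrative examples:
;;   (tokenize-completion "m-v-b")  ; => ("m" "v" "b")
;;   (tokenize-completion "foo-")   ; => ("foo" "")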
1877
1878 (defun untokenize-completion (tokens)
1879 (format nil "~{~A~^-~}" tokens))
1880
1881 (defun longest-common-prefix (strings)
1882 "Return the longest string that is a common prefix of STRINGS."
1883 (if (null strings)
1884 ""
1885 (flet ((common-prefix (s1 s2)
1886 (let ((diff-pos (mismatch s1 s2)))
1887 (if diff-pos (subseq s1 0 diff-pos) s1))))
1888 (reduce #'common-prefix strings))))
1889
1890 (defun transpose-lists (lists)
1891 "Turn a list-of-lists on its side.
1892 If the rows are of unequal length, truncate uniformly to the shortest.
1893
1894 For example:
1895 \(transpose-lists '((ONE TWO THREE) (1 2)))
1896 => ((ONE 1) (TWO 2))"
1897 ;; A cute function from PAIP p.574
1898 (if lists (apply #'mapcar #'list lists)))
1899
1900
1901 ;;;;; Completion Tests
1902
1903 (defpackage :swank-completion-test
1904 (:use))
1905
1906 (let ((*readtable* (copy-readtable *readtable*))
1907 (p (find-package :swank-completion-test)))
1908 (intern "foo" p)
1909 (intern "Foo" p)
1910 (intern "FOO" p)
1911 (setf (readtable-case *readtable*) :invert)
1912 (flet ((names (prefix)
1913 (sort (mapcar #'symbol-name
1914 (find-matching-symbols prefix p nil #'prefix-match-p))
1915 #'string<)))
1916 (assert (equal '("FOO") (names "f")))
1917 (assert (equal '("Foo" "foo") (names "F")))
1918 (assert (equal '("Foo") (names "Fo")))
1919 (assert (equal '("foo") (names "FO")))))
1920
1921 ;;;; Fuzzy completion
1922
1923 (defslimefun fuzzy-completions (string default-package-name &optional limit)
1924   "Return a list of fuzzy completions for a symbol designator
1925 STRING, optionally limited to the LIMIT best results. The list
1926 will be sorted by score, most likely match first.
1927
1928 The result is a list of completion objects, where a completion
1929 object is:
1930 (COMPLETED-STRING SCORE (&rest CHUNKS))
1931 where a CHUNK is a description of a matched string of characters:
1932 (OFFSET STRING)
1933 For example, the top result for completing \"mulvb\" in a package
1934 that uses COMMON-LISP would be something like:
1935 (\"multiple-value-bind\" 42.391666 ((0 \"mul\") (9 \"v\") (15 \"b\")))
1936
1937 If STRING is package qualified the result list will also be
1938 qualified. If string is non-qualified the result strings are
1939 also not qualified and are considered relative to
1940 DEFAULT-PACKAGE-NAME.
1941
1942 Which symbols are candidates for matching depends on the symbol
1943 designator's format. The cases are as follows:
1944 FOO - Symbols accessible in the buffer package.
1945 PKG:FOO - Symbols external in package PKG.
1946 PKG::FOO - Symbols accessible in package PKG."
1947 (fuzzy-completion-set string default-package-name limit))
1948
1949 (defun fuzzy-completion-set (string default-package-name &optional limit)
1950 "Prepares list of completion objects, sorted by SCORE, of fuzzy
1951 completions of STRING in DEFAULT-PACKAGE-NAME. If LIMIT is set,
1952 only the top LIMIT results will be returned."
1953 (multiple-value-bind (name package-name package internal-p)
1954 (parse-completion-arguments string default-package-name)
1955 (let* ((symbols (and package
1956 (fuzzy-find-matching-symbols name
1957 package
1958 (and (not internal-p)
1959 package-name))))
1960 (packs (and (not package-name)
1961 (fuzzy-find-matching-packages name)))
1962 (converter (output-case-converter name))
1963 (results
1964 (sort (mapcar
1965 #'(lambda (result)
1966 (destructuring-bind (symbol-or-name score chunks) result
1967 (multiple-value-bind (name added-length)
1968 (format-completion-result
1969 (funcall converter
1970 (if (symbolp symbol-or-name)
1971 (symbol-name symbol-or-name)
1972 symbol-or-name))
1973 internal-p package-name)
1974 (list name score
1975 (mapcar
1976 #'(lambda (chunk)
1977 ;; fix up chunk positions to
1978 ;; account for possible added
1979 ;; package identifier
1980 (list (+ added-length (first chunk))
1981 (second chunk)))
1982 chunks)))))
1983 (nconc symbols packs))
1984 #'> :key #'second)))
1985 (when (and limit
1986 (> limit 0)
1987 (< limit (length results)))
1988 (setf (cdr (nthcdr (1- limit) results)) nil))
1989 results)))
1990
1991 (defun fuzzy-find-matching-symbols (string package external)
1992 "Return a list of symbols in PACKAGE matching STRING using the
1993 fuzzy completion algorithm. If EXTERNAL is true, only external
1994 symbols are returned."
1995 (let ((completions '())
1996 (converter (output-case-converter string)))
1997 (flet ((symbol-match (symbol)
1998 (and (or (not external)
1999 (symbol-external-p symbol package))
2000 (compute-highest-scoring-completion
2001 string (funcall converter (symbol-name symbol)) #'char=))))
2002 (do-symbols (symbol package)
2003 (multiple-value-bind (result score) (symbol-match symbol)
2004 (when result
2005 (push (list symbol score result) completions)))))
2006 (remove-duplicates completions :key #'first)))
2007
2008 (defun fuzzy-find-matching-packages (name)
2009 "Return a list of package names matching NAME using the fuzzy
2010 completion algorithm."
2011 (let ((converter (output-case-converter name)))
2012 (loop for package in (list-all-packages)
2013 for package-name = (concatenate 'string
2014 (funcall converter
2015 (package-name package))
2016 ":")
2017 for (result score) = (multiple-value-list
2018 (compute-highest-scoring-completion
2019 name package-name #'char=))
2020 if result collect (list package-name score result))))
2021
2022 (defslimefun fuzzy-completion-selected (original-string completion)
2023 "This function is called by Slime when a fuzzy completion is
2024 selected by the user. It is for future expansion to make
2025 testing, say, a machine learning algorithm for completion scoring
2026 easier.
2027
2028 ORIGINAL-STRING is the string the user completed from, and
2029 COMPLETION is the completion object (see docstring for
2030 SWANK:FUZZY-COMPLETIONS) corresponding to the completion that the
2031 user selected."
2032 (declare (ignore original-string completion))
2033 nil)
2034
2035 ;;;;; Fuzzy completion core
2036
2037 (defparameter *fuzzy-recursion-soft-limit* 30
2038 "This is a soft limit for recursion in
2039 RECURSIVELY-COMPUTE-MOST-COMPLETIONS. Without this limit,
2040 completing a string such as \"ZZZZZZ\" with a symbol named
2041 \"ZZZZZZZZZZZZZZZZZZZZZZZ\" will result in explosive recursion to
2042 find all the ways it can match.
2043
2044 Most natural language searches and symbols do not have this
2045 problem -- this is only here as a safeguard.")
2046
2047 (defun compute-highest-scoring-completion (short full test)
2048 "Finds the highest scoring way to complete the abbreviation
2049 SHORT onto the string FULL, using TEST as an equality function for
2050 letters. Returns two values: The first being the completion
2051 chunks of the high scorer, and the second being the score."
2052 (let* ((scored-results
2053 (mapcar #'(lambda (result)
2054 (cons (score-completion result short full) result))
2055 (compute-most-completions short full test)))
2056 (winner (first (sort scored-results #'> :key #'first))))
2057 (values (rest winner) (first winner))))
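;; Illustrative example (score rounded; see SCORE-COMPLETION below for
;; how it is computed):
;;   (compute-highest-scoring-completion "mvb" "multiple-value-bind" #'char=)
;;   ;; => ((0 "m") (9 "v") (15 "b")), ~26.59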
2058
2059 (defun compute-most-completions (short full test)
2060 "Finds most possible ways to complete FULL with the letters in SHORT.
2061 Calls RECURSIVELY-COMPUTE-MOST-COMPLETIONS recursively. Returns
2062 a list of (&rest CHUNKS), where each CHUNKS is a description of
2063 how a completion matches."
2064 (let ((*all-chunks* nil))
2065 (declare (special *all-chunks*))
2066 (recursively-compute-most-completions short full test 0 0 nil nil nil t)
2067 *all-chunks*))
2068
2069 (defun recursively-compute-most-completions
2070 (short full test
2071 short-index initial-full-index
2072 chunks current-chunk current-chunk-pos
2073 recurse-p)
2074 "Recursively (if RECURSE-P is true) find /most/ possible ways
2075 to fuzzily map the letters in SHORT onto FULL, with TEST being a
2076 function to determine if two letters match.
2077
2078 A chunk is a list of elements that have matched consecutively.
2079 When consecutive matches stop, it is coerced into a string,
2080 paired with the starting position of the chunk, and pushed onto
2081 CHUNKS.
2082
2083 Whenever a letter matches, if RECURSE-P is true,
2084 RECURSIVELY-COMPUTE-MOST-COMPLETIONS calls itself with a position
2085 one index ahead, to find other possibly higher scoring
2086 possibilities. If there are fewer than
2087 *FUZZY-RECURSION-SOFT-LIMIT* results in *ALL-CHUNKS* currently,
2088 this call will also recurse.
2089
2090 Once a word has been completely matched, the chunks are pushed
2091 onto the special variable *ALL-CHUNKS* and the function returns."
2092 (declare (special *all-chunks*))
2093 (flet ((short-cur ()
2094 "Returns the next letter from the abbreviation, or NIL
2095 if all have been used."
2096 (if (= short-index (length short))
2097 nil
2098 (aref short short-index)))
2099 (add-to-chunk (char pos)
2100 "Adds the CHAR at POS in FULL to the current chunk,
2101 marking the start position if it is empty."
2102 (unless current-chunk
2103 (setf current-chunk-pos pos))
2104 (push char current-chunk))
2105 (collect-chunk ()
2106 "Collects the current chunk to CHUNKS and prepares for
2107 a new chunk."
2108 (when current-chunk
2109 (push (list current-chunk-pos
2110 (coerce (reverse current-chunk) 'string)) chunks)
2111 (setf current-chunk nil
2112 current-chunk-pos nil))))
2113 ;; If there's an outstanding chunk coming in collect it. Since
2114 ;; we're recursively called on skipping an input character, the
2115 ;; chunk can't possibly continue on.
2116 (when current-chunk (collect-chunk))
2117 (do ((pos initial-full-index (1+ pos)))
2118 ((= pos (length full)))
2119 (let ((cur-char (aref full pos)))
2120 (if (and (short-cur)
2121 (funcall test cur-char (short-cur)))
2122 (progn
2123 (when recurse-p
2124 ;; Try other possibilities, limiting insanely deep
2125 ;; recursion somewhat.
2126 (recursively-compute-most-completions
2127 short full test short-index (1+ pos)
2128 chunks current-chunk current-chunk-pos
2129 (not (> (length *all-chunks*)
2130 *fuzzy-recursion-soft-limit*))))
2131 (incf short-index)
2132 (add-to-chunk cur-char pos))
2133 (collect-chunk))))
2134 (collect-chunk)
2135 ;; If we've exhausted the short characters we have a match.
2136 (if (short-cur)
2137 nil
2138 (let ((rev-chunks (reverse chunks)))
2139 (push rev-chunks *all-chunks*)
2140 rev-chunks))))
2141
2142 ;;; XXX Debugging tool? Not called anywhere. -luke (11/Jul/2004)
2143 (defun compute-completion (short full test)
2144 "Finds the first way to complete FULL with the letters in SHORT.
2145 Calls RECURSIVELY-COMPUTE-MOST-COMPLETIONS non-recursively.
2146 Returns a list of one (&rest CHUNKS), where CHUNKS is a
2147 description of how the completion matched."
2148 (let ((*all-chunks* nil))
2149 (declare (special *all-chunks*))
2150 (recursively-compute-most-completions short full test 0 0 nil nil nil nil)
2151 *all-chunks*))
2152
2153 ;;;;; Fuzzy completion scoring
2154
2155 (defparameter *fuzzy-completion-symbol-prefixes* "*+-%&?<"
2156 "Letters that are likely to be at the beginning of a symbol.
2157 Letters found after one of these prefixes will be scored as if
2158 they were at the beginning of the symbol.")
2159 (defparameter *fuzzy-completion-symbol-suffixes* "*+->"
2160 "Letters that are likely to be at the end of a symbol.
2161 Letters found before one of these suffixes will be scored as if
2162 they were at the end of the symbol.")
2163 (defparameter *fuzzy-completion-word-separators* "-/."
2164 "Letters that separate different words in symbols. Letters
2165 after one of these separators will be scored more highly than other
2166 letters.")
2167
2168 (defun score-completion (completion short full)
2169 "Scores the completion chunks COMPLETION as a completion from
2170 the abbreviation SHORT to the full string FULL. COMPLETION is a
2171 list like:
2172 ((0 \"mul\") (9 \"v\") (15 \"b\"))
2173 Which, if SHORT were \"mulvb\" and full were \"multiple-value-bind\",
2174 would indicate that it completed as such (completed letters
2175 capitalized):
2176 MULtiple-Value-Bind
2177
2178 Letters are given scores based on their position in the string.
2179 Letters at the beginning of a string or after a prefix letter at
2180 the beginning of a string are scored highest. Letters after a
2181 word separator such as #\- are scored next highest. Letters at
2182 the end of a string or before a suffix letter at the end of a
2183 string are scored medium, and letters anywhere else are scored
2184 low.
2185
2186 If a letter is directly after another matched letter, and its
2187 intrinsic value in that position is less than a percentage of the
2188 previous letter's value, it will use that percentage instead.
2189
2190 Finally, a small scaling factor is applied to favor shorter
2191 matches, all other things being equal."
2192 (flet ((score-chunk (chunk)
2193 (let ((initial-pos (first chunk))
2194 (str (second chunk)))
2195 (labels ((at-beginning-p (pos)
2196 (= pos 0))
2197 (after-prefix-p (pos)
2198 (and (= pos 1)
2199 (find (aref full 0)
2200 *fuzzy-completion-symbol-prefixes*)))
2201 (word-separator-p (pos)
2202 (find (aref full pos)
2203 *fuzzy-completion-word-separators*))
2204 (after-word-separator-p (pos)
2205 (find (aref full (1- pos))
2206 *fuzzy-completion-word-separators*))
2207 (at-end-p (pos)
2208 (= pos (1- (length full))))
2209 (before-suffix-p (pos)
2210 (and (= pos (- (length full) 2))
2211 (find (aref full (1- (length full)))
2212 *fuzzy-completion-symbol-suffixes*)))
2213 (score-or-percentage-of-previous
2214 (base-score pos chunk-pos)
2215 (if (zerop chunk-pos)
2216 base-score
2217 (max base-score
2218 (* (score-char (1- pos) (1- chunk-pos))
2219 0.85))))
2220 (score-char (pos chunk-pos)
2221 (score-or-percentage-of-previous
2222 (cond ((at-beginning-p pos) 10)
2223 ((after-prefix-p pos) 10)
2224 ((word-separator-p pos) 1)
2225 ((after-word-separator-p pos) 8)
2226 ((at-end-p pos) 6)
2227 ((before-suffix-p pos) 6)
2228 (t 1))
2229 pos chunk-pos)))
2230 (loop for chunk-pos below (length str)
2231 for pos from initial-pos
2232 summing (score-char pos chunk-pos))))))
2233 (let* ((chunk-scores (mapcar #'score-chunk completion))
2234 (length-score
2235 (/ 10 (coerce (1+ (- (length full) (length short)))
2236 'single-float))))
2237 (values
2238 (+ (apply #'+ chunk-scores) length-score)
2239 (list (mapcar #'list chunk-scores completion) length-score)))))
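;; Worked example for the docstring above, with SHORT = "mulvb" and
;; FULL = "multiple-value-bind" (values rounded):
;;   chunk (0 "mul")  : 10 + 8.5 + 7.23 = 25.73  ; start of string, then 85% decay
;;   chunk (9 "v")    : 8                         ; follows the word separator #\-
;;   chunk (15 "b")   : 8                         ; follows the word separator #\-
;;   length factor    : 10 / (1 + (19 - 5)) = 0.67
;;   total            : ~42.39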
2240
2241 (defun highlight-completion (completion full)
2242 "Given a chunk definition COMPLETION and the string FULL,
2243 HIGHLIGHT-COMPLETION will create a string that demonstrates where
2244 the completion matched in the string. Matches will be
2245 capitalized, while the rest of the string will be lower-case."
2246 (let ((highlit (string-downcase full)))
2247 (dolist (chunk completion)
2248 (setf highlit (string-upcase highlit
2249 :start (first chunk)
2250 :end (+ (first chunk)
2251 (length (second chunk))))))
2252 highlit))
2253
2254 (defun format-fuzzy-completions (winners)
2255   "Given a list of completion objects such as one returned by
2256 FUZZY-COMPLETIONS, format the list into user-readable output."
2257 (let ((max-len
2258 (loop for winner in winners maximizing (length (first winner)))))
2259 (loop for (sym score result) in winners do
2260 (format t "~&~VA score ~8,2F ~A"
2261 max-len (highlight-completion result sym) score result))))
2262
2263
2264 ;;;; Documentation
2265
2266 (defslimefun apropos-list-for-emacs (name &optional external-only
2267 case-sensitive package)
2268 "Make an apropos search for Emacs.
2269 The result is a list of property lists."
2270 (let ((package (if package
2271 (or (find-package (string-to-package-designator package))
2272 (error "No such package: ~S" package)))))
2273 (mapcan (listify #'briefly-describe-symbol-for-emacs)
2274 (sort (remove-duplicates
2275 (apropos-symbols name external-only case-sensitive package))
2276 #'present-symbol-before-p))))
2277
2278 (defun string-to-package-designator (string)
2279 "Return a package designator made from STRING.
2280 Uses READ to case-convert STRING."
2281 (let ((*package* *swank-io-package*))
2282 (read-from-string string)))
2283
2284 (defun briefly-describe-symbol-for-emacs (symbol)
2285 "Return a property list describing SYMBOL.
2286 Like `describe-symbol-for-emacs' but with at most one line per item."
2287 (flet ((first-line (string)
2288 (let ((pos (position #\newline string)))
2289 (if (null pos) string (subseq string 0 pos)))))
2290 (let ((desc (map-if #'stringp #'first-line
2291 (describe-symbol-for-emacs symbol))))
2292 (if desc
2293 (list* :designator (to-string symbol) desc)))))
2294
2295 (defun map-if (test fn &rest lists)
2296 "Like (mapcar FN . LISTS) but only call FN on objects satisfying TEST.
2297 Example:
2298 \(map-if #'oddp #'- '(1 2 3 4 5)) => (-1 2 -3 4 -5)"
2299 (declare (type function test fn))
2300 (apply #'mapcar
2301 (lambda (x) (if (funcall test x) (funcall fn x) x))
2302 lists))
2303
2304 (defun listify (f)
2305 "Return a function like F, but which returns any non-null value
2306 wrapped in a list."
2307 (declare (type function f))
2308 (lambda (x)
2309 (let ((y (funcall f x)))
2310 (and y (list y)))))
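;; Illustrative example (this mirrors how APROPOS-LIST-FOR-EMACS uses it):
;;   (mapcan (listify #'cdr) '((1 2) (3) (4 5)))
;;   ;; => ((2) (5))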
2311
2312 (defun present-symbol-before-p (a b)
2313 "Return true if A belongs before B in a printed summary of symbols.
2314 Sorted alphabetically by package name and then symbol name, except
2315 that symbols accessible in the current package go first."
2316 (flet ((accessible (s)
2317 (find-symbol (symbol-name s) *buffer-package*)))
2318 (cond ((and (accessible a) (accessible b))
2319 (string< (symbol-name a) (symbol-name b)))
2320 ((accessible a) t)
2321 ((accessible b) nil)
2322 (t
2323 (string< (package-name (symbol-package a))
2324 (package-name (symbol-package b)))))))
2325
2326 (let ((regex-hash (make-hash-table :test #'equal)))
2327 (defun compiled-regex (regex-string)
2328 (or (gethash regex-string regex-hash)
2329 (setf (gethash regex-string regex-hash)
2330 (if (zerop (length regex-string))
2331 (lambda (s) (check-type s string) t)
2332 (compile nil (nregex:regex-compile regex-string)))))))
2333
2334 (defun apropos-matcher (string case-sensitive package external-only)
2335 (let* ((case-modifier (if case-sensitive #'string #'string-upcase))
2336 (regex (compiled-regex (funcall case-modifier string))))
2337 (lambda (symbol)
2338 (and (not (keywordp symbol))
2339 (if package (eq (symbol-package symbol) package) t)
2340 (if external-only (symbol-external-p symbol) t)
2341 (funcall regex (funcall case-modifier symbol))))))
2342
2343 (defun apropos-symbols (string external-only case-sensitive package)
2344 (let ((result '())
2345 (matchp (apropos-matcher string case-sensitive package external-only)))
2346 (with-package-iterator (next (or package (list-all-packages))
2347 :external :internal)
2348 (loop
2349 (multiple-value-bind (morep symbol) (next)
2350 (cond ((not morep)
2351 (return))
2352 ((funcall matchp symbol)
2353 (push symbol result))))))
2354 result))
2355
2356 (defun call-with-describe-settings (fn)
2357 (let ((*print-readably* nil))
2358 (funcall fn)))
2359
2360 (defmacro with-describe-settings ((&rest _) &body body)
2361 (declare (ignore _))
2362 `(call-with-describe-settings (lambda () ,@body)))
2363
2364 (defun describe-to-string (object)
2365 (with-describe-settings ()
2366 (with-output-to-string (*standard-output*)
2367 (describe object))))
2368
2369 (defslimefun describe-symbol (symbol-name)
2370 (with-buffer-syntax ()
2371 (describe-to-string (parse-symbol-or-lose symbol-name))))
2372
2373 (defslimefun describe-function (name)
2374 (with-buffer-syntax ()
2375 (let ((symbol (parse-symbol-or-lose name)))
2376 (describe-to-string (or (macro-function symbol)
2377 (symbol-function symbol))))))
2378
2379 (defslimefun describe-definition-for-emacs (name kind)
2380 (with-buffer-syntax ()
2381 (with-describe-settings ()
2382 (with-output-to-string (*standard-output*)
2383 (describe-definition (parse-symbol-or-lose name) kind)))))
2384
2385 (defslimefun documentation-symbol (symbol-name &optional default)
2386 (with-buffer-syntax ()
2387 (multiple-value-bind (sym foundp) (parse-symbol symbol-name)
2388 (if foundp
2389 (let ((vdoc (documentation sym 'variable))
2390 (fdoc (documentation sym 'function)))
2391 (or (and (or vdoc fdoc)
2392 (concatenate 'string
2393 fdoc
2394 (and vdoc fdoc '(#\Newline #\Newline))
2395 vdoc))
2396 default))
2397 default))))
2398
2399
2400 ;;;; Package Commands
2401
2402 (defslimefun list-all-package-names (&optional include-nicknames)
2403 "Return a list of all package names.
2404 Include the nicknames if INCLUDE-NICKNAMES is true."
2405 (loop for package in (list-all-packages)
2406 collect (package-name package)
2407 when include-nicknames append (package-nicknames package)))
2408
2409
2410 ;;;; Tracing
2411
2412 ;; Use eval for the sake of portability...
2413 (defun tracedp (fspec)
2414 (member fspec (eval '(trace))))
2415
2416 (defslimefun toggle-trace-fdefinition (fname-string)
2417 (let ((fname (from-string fname-string)))
2418 (cond ((tracedp fname)
2419 (eval `(untrace ,fname))
2420 (format nil "~S is now untraced." fname))
2421 (t
2422 (eval `(trace ,fname))
2423 (format nil "~S is now traced." fname)))))
2424
2425 (defslimefun untrace-all ()
2426 (untrace))
2427
2428
2429 ;;;; Undefing
2430
2431 (defslimefun undefine-function (fname-string)
2432 (let ((fname (from-string fname-string)))
2433 (format nil "~S" (fmakunbound fname))))
2434
2435
2436 ;;;; Profiling
2437
2438 (defun profiledp (fspec)
2439 (member fspec (profiled-functions)))
2440
2441 (defslimefun toggle-profile-fdefinition (fname-string)
2442 (let ((fname (from-string fname-string)))
2443 (cond ((profiledp fname)
2444 (unprofile fname)
2445 (format nil "~S is now unprofiled." fname))
2446 (t
2447 (profile fname)
2448 (format nil "~S is now profiled." fname)))))
2449
2450
2451 ;;;; Source Locations
2452
2453 (defslimefun find-definitions-for-emacs (name)
2454 "Return a list ((DSPEC LOCATION) ...) of definitions for NAME.
2455 DSPEC is a string and LOCATION a source location. NAME is a string."
2456 (multiple-value-bind (sexp error)
2457 (ignore-errors (values (from-string name)))
2458 (cond (error '())
2459 (t (loop for (dspec loc) in (find-definitions sexp)
2460 collect (list (to-string dspec) loc))))))
2461
2462 (defun alistify (list key test)
2463 "Partition the elements of LIST into an alist. KEY extracts the key
2464 from an element and TEST is used to compare keys."
2465 (declare (type function key))
2466 (let ((alist '()))
2467 (dolist (e list)
2468 (let* ((k (funcall key e))
2469 (probe (assoc k alist :test test)))
2470 (if probe
2471 (push e (cdr probe))
2472 (push (cons k (list e)) alist))))
2473 alist))
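;; Illustrative example:
;;   (alistify '(1 2 3 4 5) #'evenp #'eql)
;;   ;; => ((T 4 2) (NIL 5 3 1))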
2474
2475 (defun location-position< (pos1 pos2)
2476 (cond ((and (position-p pos1) (position-p pos2))
2477 (< (position-pos pos1)
2478 (position-pos pos2)))
2479 (t nil)))
2480
2481 (defun partition (list test key)
2482 (declare (type function test key))
2483 (loop for e in list
2484 if (funcall test (funcall key e)) collect e into yes
2485 else collect e into no
2486 finally (return (values yes no))))
2487
2488 (defstruct (xref (:conc-name xref.)
2489 (:type list))
2490 dspec location)
2491
2492 (defun location-valid-p (location)
2493 (eq (car location) :location))
2494
2495 (defun xref-buffer (xref)
2496 (location-buffer (xref.location xref)))
2497
2498 (defun xref-position (xref)
2499 (location-buffer (xref.location xref)))
2500
2501 (defun group-xrefs (xrefs)
2502 "Group XREFS, a list of the form ((DSPEC LOCATION) ...) by location.
2503 The result is a list of the form ((LOCATION . ((DSPEC . LOCATION) ...)) ...)."
2504 (multiple-value-bind (resolved errors)
2505 (partition xrefs #'location-valid-p #'xref.location)
2506 (let ((alist (alistify resolved #'xref-buffer #'equal)))
2507 (append
2508 (loop for (buffer . list) in alist
2509 collect (cons (second buffer)
2510 (mapcar (lambda (xref)
2511 (cons (to-string (xref.dspec xref))
2512 (xref.location xref)))
2513 (sort list #'location-position<
2514 :key #'xref-position))))
2515 (if errors
2516 (list (cons "Unresolved"
2517 (mapcar (lambda (xref)
2518 (cons (to-string (xref.dspec xref))
2519 (xref.location xref)))
2520 errors))))))))
2521
2522 (defslimefun xref (type symbol-name)
2523 (let ((symbol (parse-symbol-or-lose symbol-name *buffer-package*)))
2524 (group-xrefs
2525 (ecase type
2526 (:calls (who-calls symbol))
2527 (:references (who-references symbol))
2528 (:binds (who-binds symbol))
2529 (:sets (who-sets symbol))
2530 (:macroexpands (who-macroexpands symbol))
2531 (:specializes (who-specializes symbol))
2532 (:callers (list-callers symbol))
2533 (:callees (list-callees symbol))))))
2534
2535
2536 ;;;; Inspecting
2537
2538 (defun common-seperated-spec (list &optional (callback (lambda (v) `(:value ,v))))
2539 (butlast
2540 (loop
2541 for i in list
2542 collect (funcall callback i)
2543 collect ", ")))
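;; Illustrative example:
;;   (common-seperated-spec '(1 2 3))
;;   ;; => ((:value 1) ", " (:value 2) ", " (:value 3))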
2544
2545 (defun inspector-princ (list)
2546 "Just like princ-to-string, but don't rewrite (function foo) as
2547 #'foo. Do NOT pass circular lists to this function."
2548 (with-output-to-string (as-string)
2549 (labels ((printer (object)
2550 (typecase object
2551 (null (princ nil as-string))
2552 (cons
2553 (write-char #\( as-string)
2554 (printer (car object))
2555 (loop
2556 for (head . tail) on (cdr object)
2557 do (write-char #\Space as-string)
2558 do (printer head)
2559 unless (listp tail)
2560 do (progn
2561 (write-string " . " as-string)
2562 (printer tail))
2563 and return t)
2564 (write-char #\) as-string))
2565 (t (princ object as-string)))))
2566 (printer list))))
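;; Illustrative example; many pretty printers would abbreviate this form
;; as "#'CAR", which is exactly what this function avoids:
;;   (inspector-princ '(function car))  ; => "(FUNCTION CAR)"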
2567
2568 (defmethod inspect-for-emacs ((object cons) (inspector t))
2569 (declare (ignore inspector))
2570 (if (consp (cdr object))
2571 (inspect-for-emacs-list object)
2572 (inspect-for-emacs-simple-cons object)))
2573
2574 (defun inspect-for-emacs-simple-cons (cons)
2575 (values "A cons cell."
2576 (label-value-line*
2577 ('car (car cons))
2578 ('cdr (cdr cons)))))
2579
2580 (defun inspect-for-emacs-list (list)
2581 (let ((maxlen 40))
2582 (multiple-value-bind (length tail) (safe-length list)
2583 (flet ((frob (title list &rest rest)
2584 (values title
2585 (append '("Elements:" (:newline))
2586 (loop for i from 0
2587 for e in list
2588 append (label-value-line i e))
2589 rest))))
2590 (cond ((not length) ; circular
2591 (frob "A circular list."
2592 (cons (car list)
2593 (ldiff (cdr list) list))))
2594 ((and (<= length maxlen) (not tail))
2595 (frob "A proper list." list))
2596 (tail
2597 (frob "An improper list."
2598 (subseq list 0 length)
2599 (list :value tail "tail")))
2600 (t
2601 (frob "A proper list."
2602 (subseq list 0 maxlen)
2603 (list :value (nthcdr maxlen list) "rest"))))))))
2604
2605 (defun safe-length (list)
2606 "Similar to `list-length', but avoid errors on improper lists.
2607 Return two values: the length of the list and the last cdr.
2608 NIL is returned if the list is circular."
2609 (do ((n 0 (+ n 2)) ;Counter.
2610 (fast list (cddr fast)) ;Fast pointer: leaps by 2.
2611 (slow list (cdr slow))) ;Slow pointer: leaps by 1.
2612 (nil)
2613 (cond ((null fast) (return (values n nil)))
2614 ((not (consp fast)) (return (values n fast)))
2615 ((null (cdr fast)) (return (values (1+ n) (cdr fast))))
2616 ((and (eq fast slow) (> n 0)) (return nil))
2617 ((not (consp (cdr fast))) (return (values (1+ n) (cdr fast)))))))
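;; Illustrative examples:
;;   (safe-length '(1 2 3))      ; => 3, NIL
;;   (safe-length '(1 2 . 3))    ; => 2, 3
;;   (safe-length '#1=(1 . #1#)) ; => NIL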
2618
2619 (defmethod inspect-for-emacs ((ht hash-table) (inspector t))
2620 (declare (ignore inspector))
2621 (values "A hash table."
2622 (append
2623 (label-value-line*
2624 ("Count" (hash-table-count ht))
2625 ("Size" (hash-table-size ht))
2626 ("Test" (hash-table-test ht))
2627 ("Rehash size" (hash-table-rehash-size ht))
2628 ("Rehash threshold" (hash-table-rehash-threshold ht)))
2629 '("Contents: " (:newline))
2630 (loop for key being the hash-keys of ht
2631 for value being the hash-values of ht
2632 append `((:value ,key) " = " (:value ,value) (:newline))))))
2633
2634 (defmethod inspect-for-emacs ((array array) (inspector t))
2635 (declare (ignore inspector))
2636 (values "An array."
2637 (append
2638 (label-value-line*
2639 ("Dimensions" (array-dimensions array))
2640 ("Its element type is" (array-element-type array))
2641 ("Total size" (array-total-size array))
2642          ("Fill pointer" (and (array-has-fill-pointer-p array) (fill-pointer array)))
2643 ("Adjustable" (adjustable-array-p array)))
2644 '("Contents:" (:newline))
2645 (let ((darray (make-array (array-total-size array)
2646 :displaced-to array
2647 :displaced-index-offset 0)))
2648 (loop for e across darray
2649 for i from 0
2650 collect (label-value-line i e))))))
2651
2652 (defmethod inspect-for-emacs ((char character) (inspector t))
2653 (declare (ignore inspector))
2654 (values "A character."
2655 (append
2656 (label-value-line*
2657 ("Char code" (char-code char))
2658 ("Lower cased" (char-downcase char))
2659 ("Upper cased" (char-upcase char)))
2660 (if (get-macro-character char)
2661 `("In the current readtable ("
2662 (:value ,*readtable*) ") it is a macro character: "
2663 (:value ,(get-macro-character char)))))))
2664
2665 ;; Shouldn't most of this stuff be done by describe-symbol-for-emacs? -- he
2666 (defmethod inspect-for-emacs ((symbol symbol) (inspector t))
2667 (declare (ignore inspector))
2668 (let ((internal-external (multiple-value-bind (symbol status)
2669 (intern (symbol-name symbol) (symbol-package symbol))
2670 (declare (ignore symbol))
2671 (ecase status
2672 ((:internal :inherited) :internal)
2673 (:external :external))))
2674 (package (when (find-package symbol)
2675 `("It names the package " (:value ,(find-package symbol)) (:newline))))
2676 (class (when (find-class symbol nil)
2677 `("It names the class " (:value ,(find-class symbol) ,(inspector-princ (class-name (find-class symbol))))
2678 " " (:action ,(format nil "[remove name ~S (does not affect class object)]" symbol)
2679 ,(lambda () (setf (find-class symbol) nil)))))))
2680 (values "A symbol."
2681 `("Its name is: " (:value ,(symbol-name symbol))
2682 (:newline)
2683 ;; check to see whether it is a global variable, a
2684 ;; constant, or a symbol macro.
2685 ,@(let ((documentation (when (documentation symbol 'variable)
2686 `((:newline)
2687 "Documentation:"
2688 (:newline)
2689 ,(documentation symbol 'variable)))))
2690 (cond
2691 ((constantp symbol)
2692 `("It is a constant of value: " (:value ,(symbol-value symbol)) ,@documentation))
2693 ((boundp symbol)
2694 `("It is a global variable bound to: " (:value ,(symbol-value symbol)) ,@documentation))
2695 ((nth-value 1 (macroexpand symbol))
2696 `("It is a symbol macro with expansion: " (:value ,(macroexpand symbol))))
2697 (t
2698 `("It is unbound."))))
2699 (:newline)
2700 ,@(if (fboundp symbol)
2701 (append
2702 (if (macro-function symbol)
2703                  `("It is a macro with macro-function: " (:value ,(macro-function symbol)))
2704 `("It is a function: " (:value ,(symbol-function symbol))))
2705 `(" " (:action "[make funbound]" ,(lambda () (fmakunbound symbol))))
2706 `((:newline))
2707 (when (documentation symbol 'function)
2708 `("Documentation:" (:newline) ,(documentation symbol 'function) (:newline)))
2709 (when (compiler-macro-function symbol)
2710 `("It also names the compiler macro: " (:value ,(compiler-macro-function symbol)) (:newline)))
2711 (when (documentation symbol 'compiler-macro)
2712 `("Documentation:" (:newline) ,(documentation symbol 'compiler-macro) (:newline))))
2713 `("It has no function value." (:newline)))
2714 "It is " ,(case internal-external
2715 (:internal "internal")
2716 (:external "external")) " to the package: " (:value ,(symbol-package symbol))
2717 ,@(when (eql :internal internal-external)
2718 `(" " (:action ,(with-output-to-string (export-label)
2719 (princ "[export from " export-label)
2720 (princ (package-name (symbol-package symbol)) export-label)
2721 (princ "]" export-label))
2722 ,(lambda () (export symbol (symbol-package symbol))))))
2723 (:newline)
2724 "Property list: " (:value ,(symbol-plist symbol))
2725 (:newline)
2726 ,@package
2727 ,@class))))
2728
2729 (defmethod inspect-for-emacs ((f function) (inspector t))
2730 (declare (ignore inspector))
2731 (values "A function."
2732 `("Name: " (:value ,(function-name f)) (:newline)
2733 "Its argument list is: " ,(inspector-princ (arglist f))
2734 (:newline)
2735 ,@(when (documentation f t)
2736 `("Documentation:" (:newline) ,(documentation f t) (:newline))))))
2737
2738 (defun method-specializers-for-inspect (method)
2739 "Return a \"pretty\" list of the method's specializers. Normal
2740 specializers are replaced by the name of the class, eql
2741 specializers are replaced by `(eql ,object)."
2742 (mapcar (lambda (spec)
2743 (typecase spec
2744 (swank-mop:eql-specializer
2745 `(eql ,(swank-mop:eql-specializer-object spec)))
2746 (t (swank-mop:class-name spec))))
2747 (swank-mop:method-specializers method)))
2748
2749 (defun method-for-inspect-value (method)
2750 "Returns a \"pretty\" list describing METHOD. The first element
2751 of the list is the name of the generic function the method is
2752 defined on, the second element is the method qualifiers,
2753 the rest of the list is the method's specializers (as per
2754 method-specializers-for-inspect)."
2755 (if (swank-mop:method-qualifiers method)
2756 (list*
2757 (swank-mop:generic-function-name (swank-mop:method-generic-function method))
2758 (let ((quals (swank-mop:method-qualifiers method)))
2759 (if (= 1 (length quals))
2760 (first quals)
2761 quals))
2762 (method-specializers-for-inspect method))
2763 (list*
2764 (swank-mop:generic-function-name (swank-mop:method-generic-function method))
2765 (method-specializers-for-inspect method))))
2766
2767 (defmethod inspect-for-emacs ((o standard-object) (inspector t))
2768 (declare (ignore inspector))
2769 (values "An object."
2770 `("Class: " (:value ,(class-of o))
2771 (:newline)
2772 "Slots:" (:newline)
2773 ,@(loop
2774 with direct-slots = (swank-mop:class-direct-slots (class-of o))
2775 for slot in (swank-mop:class-slots (class-of o))
2776 for slot-def = (or (find-if (lambda (a)
2777 ;; find the direct slot
2778 ;; with the same name
2779 ;; as SLOT (an
2780 ;; effective slot).
2781 (eql (swank-mop:slot-definition-name a)
2782 (swank-mop:slot-definition-name slot)))
2783 direct-slots)
2784 slot)
2785 collect `(:value ,slot-def ,(inspector-princ (swank-mop:slot-definition-name slot-def)))
2786 collect " = "
2787 if (slot-boundp o (swank-mop:slot-definition-name slot-def))
2788 collect `(:value ,(slot-value o (swank-mop:slot-definition-name slot-def)))
2789 else
2790 collect "#<unbound>"
2791 collect '(:newline)))))
2792
2793 (defmethod inspect-for-emacs ((gf standard-generic-function) (inspector t))
2794 (declare (ignore inspector))
2795 (values "A generic function."
2796 `("Name: " (:value ,(swank-mop:generic-function-name gf)) (:newline)
2797 "Its argument list is: " ,(inspector-princ (swank-mop:generic-function-lambda-list gf)) (:newline)
2798 "Documentation: " (:newline)
2799 ,(inspector-princ (documentation gf t)) (:newline)
2800 "Its method class is: " (:value ,(swank-mop:generic-function-method-class gf)) (:newline)
2801 "It uses " (:value ,(swank-mop:generic-function-method-combination gf)) " method combination." (:newline)
2802 "Methods: " (:newline)
2803 ,@(loop
2804 for method in (swank-mop:generic-function-methods gf)
2805 collect `(:value ,method ,(inspector-princ
2806 ;; drop the first element (the name of the generic function)
2807 (cdr (method-for-inspect-value method))))
2808 collect " "
2809 collect (let ((meth method))
2810 `(:action "[remove method]" ,(lambda () (remove-method gf meth))))
2811 collect '(:newline)))))
2812
2813 (defmethod inspect-for-emacs ((method standard-method) (inspector t))
2814 (declare (ignore inspector))
2815 (values "A method."
2816 `("Method defined on the generic function " (:value ,(swank-mop:method-generic-function method)
2817 ,(inspector-princ
2818 (swank-mop:generic-function-name
2819 (swank-mop:method-generic-function method))))
2820 (:newline)
2821 ,@(when (documentation method t)
2822 `("Documentation:" (:newline) ,(documentation method t) (:newline)))
2823 "Lambda List: " (:value ,(swank-mop:method-lambda-list method))
2824 (:newline)
2825 "Specializers: " (:value ,(swank-mop:method-specializers method)
2826 ,(inspector-princ (method-specializers-for-inspect method)))
2827 (:newline)
2828 "Qualifiers: " (:value ,(swank-mop:method-qualifiers method))
2829 (:newline)
2830 "Method function: " (:value ,(swank-mop:method-function method)))))
2831
2832 (defmethod inspect-for-emacs ((class standard-class) (inspector t))
2833 (declare (ignore inspector))
2834   (values "A standard class."
2835 `("Name: " (:value ,(class-name class))
2836 (:newline)
2837 "Super classes: " ,@(common-seperated-spec (swank-mop:class-direct-superclasses class))
2838 (:newline)
2839 "Direct Slots: " ,@(common-seperated-spec (swank-mop:class-direct-slots class)
2840 (lambda (slot)
2841 `(:value ,slot ,(inspector-princ
2842 (swank-mop:slot-definition-name slot)))))
2843 (:newline)
2844 "Effective Slots: " ,@(if (swank-mop:class-finalized-p class)
2845 (common-seperated-spec (swank-mop:class-slots class)
2846 (lambda (slot)
2847 `(:value ,slot ,(inspector-princ
2848 (swank-mop:slot-definition-name slot)))))
2849 '("#<N/A (class not finalized)>"))
2850 (:newline)
2851 ,@(when (documentation class t)
2852 `("Documentation:" (:newline)
2853 ,(documentation class t) (:newline)))
2854 "Sub classes: " ,@(common-seperated-spec (swank-mop:class-direct-subclasses class)
2855 (lambda (sub)
2856 `(:value ,sub ,(inspector-princ (class-name sub)))))
2857 (:newline)
2858 "Precedence List: " ,@(if (swank-mop:class-finalized-p class)
2859 (common-seperated-spec (swank-mop:class-precedence-list class)
2860 (lambda (class)
2861 `(:value ,class ,(inspector-princ (class-name class)))))
2862 '("#<N/A (class not finalized)>"))
2863 (:newline)
2864 ,@(when (swank-mop:specializer-direct-methods class)
2865 `("It is used as a direct specializer in the following methods:" (:newline)
2866 ,@(loop
2867 for method in (swank-mop:specializer-direct-methods class)
2868 collect `(:value ,method ,(inspector-princ (method-for-inspect-value method)))
2869 collect '(:newline))))
2870 "Prototype: " ,(if (swank-mop:class-finalized-p class)
2871 `(:value ,(swank-mop:class-prototype class))
2872 '"#<N/A (class not finalized)>"))))
2873
2874 (defmethod inspect-for-emacs ((slot swank-mop:standard-slot-definition) (inspector t))
2875 (declare (ignore inspector))
2876 (values "A slot."
2877 `("Name: " (:value ,(swank-mop:slot-definition-name slot))
2878 (:newline)
2879 ,@(when (swank-mop:slot-definition-documentation slot)
2880 `("Documentation:" (:newline)
2881 (:value ,(swank-mop:slot-definition-documentation slot))
2882 (:newline)))
2883 "Init args: " (:value ,(swank-mop:slot-definition-initargs slot)) (:newline)
2884 "Init form: " ,(if (swank-mop:slot-definition-initfunction slot)
2885 `(:value ,(swank-mop:slot-definition-initform slot))
2886 "#<unspecified>") (:newline)
2887 "Init function: " (:value ,(swank-mop:slot-definition-initfunction slot))
2888 (:newline))))
2889
2890 (defmethod inspect-for-emacs ((package package) (inspector t))
2891 (declare (ignore inspector))
2892 (let ((internal-symbols '())
2893 (external-symbols '()))
2894 (do-symbols (sym package)
2895 (when (eq package (symbol-package sym))
2896 (push sym internal-symbols)
2897 (multiple-value-bind (symbol status)
2898 (find-symbol (symbol-name sym) package)
2899 (declare (ignore symbol))
2900 (when (eql :external status)
2901 (push sym external-symbols)))))
2902 (setf internal-symbols (sort internal-symbols #'string-lessp)
2903 external-symbols (sort external-symbols #'string-lessp))
2904 (values "A package."
2905 `("Name: " (:value ,(package-name package))
2906 (:newline)
2907 "Nick names: " ,@(common-seperated-spec (sort (package-nicknames package) #'string-lessp))
2908 (:newline)
2909 ,@(when (documentation package t)
2910 `("Documentation:" (:newline)
2911 ,(documentation package t) (:newline)))
2912 "Use list: " ,@(common-seperated-spec (sort (package-use-list package) #'string-lessp :key #'package-name)
2913 (lambda (pack)
2914 `(:value ,pack ,(inspector-princ (package-name pack)))))
2915 (:newline)
2916 "Used by list: " ,@(common-seperated-spec (sort (package-used-by-list package) #'string-lessp :key #'package-name)
2917 (lambda (pack)
2918 `(:value ,pack ,(inspector-princ (package-name pack)))))
2919 (:newline)
2920 ,(if (null external-symbols)
2921 "0 external symbols."
2922 `(:value ,external-symbols ,(format nil "~D external symbol~:P." (length external-symbols))))
2923 (:newline)
2924 ,(if (null internal-symbols)
2925 "0 internal symbols."
2926 `(:value ,internal-symbols ,(format nil "~D internal symbol~:P." (length internal-symbols))))
2927 (:newline)
2928 ,(if (null (package-shadowing-symbols package))
2929 "0 shadowed symbols."
2930 `(:value ,(package-shadowing-symbols package)
2931 ,(format nil "~D shadowed symbol~:P." (length (package-shadowing-symbols package)))))))))
2932
2933 (defmethod inspect-for-emacs ((pathname pathname) (inspector t))
2934 (declare (ignore inspector))
2935 (values (if (wild-pathname-p pathname)
2936 "A wild pathname."
2937 "A pathname.")
2938 (append (label-value-line*
2939 ("Namestring" (namestring pathname))
2940 ("Host" (pathname-host pathname))
2941 ("Device" (pathname-device pathname))
2942 ("Directory" (pathname-directory pathname))
2943 ("Name" (pathname-name pathname))
2944 ("Type" (pathname-type pathname))
2945 ("Version" (pathname-version pathname)))
2946 (unless (or (wild-pathname-p pathname)
2947 (not (probe-file pathname)))
2948 (label-value-line "Truename" (truename pathname))))))
2949
2950 (defmethod inspect-for-emacs ((pathname logical-pathname) (inspector t))
2951 (declare (ignore inspector))
2952 (values "A logical pathname."
2953 (append
2954 (label-value-line*
2955 ("Namestring" (namestring pathname))
2956            ("Physical pathname" (translate-logical-pathname pathname)))
2957           `("Host: " (:value ,(pathname-host pathname))
2958             " (" (:value ,(logical-pathname-translations
2959                            (pathname-host pathname)))
2960             " other translations)"
2961 (:newline))
2962 (label-value-line*
2963 ("Directory" (pathname-directory pathname))
2964 ("Name" (pathname-name pathname))
2965 ("Type" (pathname-type pathname))
2966 ("Version" (pathname-version pathname))
2967 ("Truename" (if (not (wild-pathname-p pathname))
2968 (probe-file pathname)))))))
2969
2970 (defmethod inspect-for-emacs ((n number) (inspector t))
2971 (declare (ignore inspector))
2972 (values "A number." `("Value: " ,(princ-to-string n))))
2973
2974 (defmethod inspect-for-emacs ((i integer) (inspector t))
2975 (declare (ignore inspector))
2976 (values "A number."
2977 (append
2978 `(,(format nil "Value: ~D = #x~X = #o~O = #b~,,' ,8B = ~E"
2979 i i i i i)
2980 (:newline))
2981 (if (< -1 i char-code-limit)
2982 (label-value-line "Corresponding character" (code-char i)))
2983 (label-value-line "Length" (integer-length i))
2984 (list "As time: "
2985 (multiple-value-bind (sec min hour date month year)
2986 (decode-universal-time i)
2987 (format nil "~4,'0D-~2,'0D-~2,'0DT~2,'0D:~2,'0D:~2,'0DZ"
2988 year month date hour min sec))))))
2989
2990 (defmethod inspect-for-emacs ((c complex) (inspector t))
2991 (declare (ignore inspector))
2992 (values "A complex number."
2993 (label-value-line*
2994 ("Real part" (realpart c))
2995 ("Imaginary part" (imagpart c)))))
2996
2997 (defmethod inspect-for-emacs ((r ratio) (inspector t))
2998 (declare (ignore inspector))
2999 (values "A non-integer ratio."
3000 (label-value-line*
3001 ("Numerator" (numerator r))
3002 ("Denominator" (denominator r))
3003 ("As float" (float r)))))
3004
3005 (defmethod inspect-for-emacs ((f float) (inspector t))
3006 (declare (ignore inspector))
3007 (multiple-value-bind (significand exponent sign) (decode-float f)
3008 (values "A floating point number."
3009 (append
3010 `("Scientific: " ,(format nil "~E" f) (:newline)
3011 "Decoded: "
3012 (:value ,sign) " * "
3013 (:value ,significand) " * "
3014 (:value ,(float-radix f)) "^" (:value ,exponent) (:newline))
3015 (label-value-line "Digits" (float-digits f))
3016 (label-value-line "Precision" (float-precision f))))))
3017
3018 (defvar *inspectee*)
3019 (defvar *inspectee-parts* (make-array 10 :adjustable t :fill-pointer 0))
3020 (defvar *inspectee-actions* (make-array 10 :adjustable t :fill-pointer 0))
3021 (defvar *inspector-stack* '())
3022 (defvar *inspector-history* (make-array 10 :adjustable t :fill-pointer 0))
3023 (declaim (type vector *inspector-history*))
3024 (defvar *inspect-length* 30)
3025
3026 (defun reset-inspector ()
3027 (setq *inspectee* nil
3028 *inspector-stack* nil
3029 *inspectee-parts* (make-array 10 :adjustable t :fill-pointer 0)
3030 *inspectee-actions* (make-array 10 :adjustable t :fill-pointer 0)
3031 *inspector-history* (make-array 10 :adjustable t :fill-pointer 0)))
3032
3033 (defslimefun init-inspector (string)
3034 (with-buffer-syntax ()
3035 (reset-inspector)
3036 (inspect-object (eval (read-from-string string)))))
3037
3038 (defun print-part-to-string (value)
3039 (let ((string (to-string value))
3040 (pos (position value *inspector-history*)))
3041 (if pos
3042 (format nil "#~D=~A" pos string)
3043 string)))
3044
3045 (defun inspector-content-for-emacs (specs)
3046 (loop for part in specs collect
3047 (etypecase part
3048 (null ; XXX encourages sloppy programming
3049 nil)
3050 (string part)
3051 (cons (destructure-case part
3052 ((:newline)
3053 (string #\newline))
3054 ((:value obj &optional str)
3055 (value-part-for-emacs obj str))
3056 ((:action label lambda)
3057 (action-part-for-emacs label lambda)))))))
3058
3059 (defun assign-index (object vector)
3060 (or (position object vector)
3061 (progn (vector-push-extend object vector)
3062 (position object vector))))
3063
3064 (defun value-part-for-emacs (object string)
3065 (list :value
3066 (or string (print-part-to-string object))
3067 (assign-index object *inspectee-parts*)))
3068
3069 (defun action-part-for-emacs (label lambda)
3070 (list :action label (assign-index lambda *inspectee-actions*)))
3071
3072 (defun inspect-object (object &optional (inspector (make-default-inspector)))
3073 (push (setq *inspectee* object) *inspector-stack*)
3074 (unless (find object *inspector-history*)
3075 (vector-push-extend object *inspector-history*))
3076 (let ((*print-pretty* nil) ; print everything in the same line
3077 (*print-circle* t)
3078 (*print-readably* nil))
3079 (multiple-value-bind (title content)
3080 (inspect-for-emacs object inspector)
3081 (list :title title
3082 :type (to-string (type-of object))
3083 :content (inspector-content-for-emacs content)))))
3084
3085 (defslimefun inspector-nth-part (index)
3086 (aref *inspectee-parts* index))
3087
3088 (defslimefun inspect-nth-part (index)
3089 (with-buffer-syntax ()
3090 (inspect-object (inspector-nth-part index))))
3091
3092 (defslimefun inspector-call-nth-action (index &rest args)
3093 (apply (aref *inspectee-actions* index) args)
3094 (inspect-object (pop *inspector-stack*)))
3095
3096 (defslimefun inspector-pop ()
3097 "Drop the inspector stack and inspect the second element. Return
3098 nil if there's no second element."
3099 (with-buffer-syntax ()
3100 (cond ((cdr *inspector-stack*)
3101 (pop *inspector-stack*)
3102 (inspect-object (pop *inspector-stack*)))
3103 (t nil))))
3104
3105 (defslimefun inspector-next ()
3106 "Inspect the next element in the *inspector-history*."
3107 (with-buffer-syntax ()
3108 (let ((position (position *inspectee* *inspector-history*)))
3109 (cond ((= (1+ position) (length *inspector-history*))
3110 nil)
3111 (t (inspect-object (aref *inspector-history* (1+ position))))))))
3112
3113 (defslimefun quit-inspector ()
3114 (reset-inspector)
3115 nil)
3116
3117 (defslimefun describe-inspectee ()
3118 "Describe the currently inspected object."
3119 (with-buffer-syntax ()
3120 (describe-to-string *inspectee*)))
3121
3122 (defslimefun inspect-in-frame (string index)
3123 (with-buffer-syntax ()
3124 (reset-inspector)
3125 (inspect-object (eval-in-frame (from-string string) index))))
3126
3127 (defslimefun inspect-current-condition ()
3128 (with-buffer-syntax ()
3129 (reset-inspector)
3130 (inspect-object *swank-debugger-condition*)))
3131
3132 (defslimefun inspect-frame-var (frame var)
3133 (with-buffer-syntax ()
3134 (reset-inspector)
3135 (inspect-object (frame-var-value frame var))))
3136
3137
3138 ;;;; Thread listing
3139
3140 (defvar *thread-list* ()
3141 "List of threads displayed in Emacs. We don't care about
3142 synchronization issues (yet). There can only be one thread listing at
3143 a time.")
3144
3145 (defslimefun list-threads ()
3146 "Return a list ((NAME DESCRIPTION) ...) of all threads."
3147 (setq *thread-list* (all-threads))
3148 (loop for thread in *thread-list*
3149 collect (list (thread-name thread)
3150 (thread-status thread))))
3151
3152 (defslimefun quit-thread-browser ()
3153 (setq *thread-list* nil))
3154
3155 (defun nth-thread (index)
3156 (nth index *thread-list*))
3157
3158 (defslimefun debug-nth-thread (index)
3159 (let ((connection *emacs-connection*))
3160 (interrupt-thread (nth-thread index)
3161 (lambda ()
3162 (with-connection (connection)
3163 (simple-break))))))
3164
3165 (defslimefun kill-nth-thread (index)
3166 (kill-thread (nth-thread index)))
3167
3168 (defslimefun start-swank-server-in-thread (index port-file-name)
3169 "Interrupt the INDEXth thread and make it start a swank server.
3170 The server port is written to PORT-FILE-NAME."
3171 (interrupt-thread (nth-thread index)
3172 (lambda ()
3173 (start-server port-file-name nil))))
3174
3175 ;;;; Class browser
3176
3177 (defun mop-helper (class-name fn)
3178 (let ((class (find-class class-name nil)))
3179 (if class
3180 (mapcar (lambda (x) (to-string (class-name x)))
3181 (funcall fn class)))))
3182
3183 (defslimefun mop (type symbol-name)
3184 "Return info about classes using mop.
3185
3186 When type is:
3187 :subclasses - return the list of subclasses of class.
3188 :superclasses - return the list of superclasses of class."
3189 (let ((symbol (parse-symbol symbol-name *buffer-package*)))
3190 (ecase type
3191 (:subclasses
3192 (mop-helper symbol #'swank-mop:class-direct-subclasses))
3193 (:superclasses
3194 (mop-helper symbol #'swank-mop:class-direct-superclasses)))))
3195
3196
3197 ;;;; Automatically synchronized state
3198 ;;;
3199 ;;; Here we add hooks to push updates of relevant information to
3200 ;;; Emacs.
3201
3202 ;;;;; *FEATURES*
3203
3204 (defun sync-features-to-emacs ()
3205 "Update Emacs if any relevant Lisp state has changed."
3206 ;; FIXME: *slime-features* should be connection-local
3207 (unless (eq *slime-features* *features*)
3208 (setq *slime-features* *features*)
3209 (send-to-emacs (list :new-features (mapcar #'symbol-name *features*)))))
3210
3211 (add-hook *pre-reply-hook* 'sync-features-to-emacs)
3212
3213
3214 ;;;;; Indentation of macros
3215 ;;;
3216 ;;; This code decides how macros should be indented (based on their
3217 ;;; arglists) and tells Emacs. A per-connection cache is used to avoid
3218 ;;; sending redundant information to Emacs -- we just say what's
3219 ;;; changed since last time.
3220 ;;;
3221 ;;; The strategy is to scan all symbols, pick out the macros, and look
3222 ;;; for &body-arguments.
3223
3224 (defvar *configure-emacs-indentation* t
3225 "When true, automatically send indentation information to Emacs
3226 after each command.")
3227
3228 (defslimefun update-indentation-information ()
3229 (perform-indentation-update *emacs-connection* t))
3230
3231 ;; This function is for *PRE-REPLY-HOOK*.
3232 (defun sync-indentation-to-emacs ()
3233 "Send any indentation updates to Emacs via CONNECTION."
3234 (when *configure-emacs-indentation*
3235 (let ((fullp (need-full-indentation-update-p *emacs-connection*)))
3236 (perform-indentation-update *emacs-connection* fullp))))
3237
3238 (defun need-full-indentation-update-p (connection)
3239 "Return true if the whole indentation cache should be updated.
3240 This is a heuristic to avoid scanning all symbols all the time:
3241 instead, we only do a full scan if the set of packages has changed."
3242 (set-difference (list-all-packages)
3243 (connection.indentation-cache-packages connection)))
3244
3245 (defun perform-indentation-update (connection force)
3246 "Update the indentation cache in CONNECTION and update Emacs.
3247 If FORCE is true then start again without considering the old cache."
3248 (let ((cache (connection.indentation-cache connection)))
3249 (when force (clrhash cache))
3250 (let ((delta (update-indentation/delta-for-emacs cache force)))
3251 (setf (connection.indentation-cache-packages connection)
3252 (list-all-packages))
3253 (unless (null delta)
3254 (send-to-emacs (list :indentation-update delta))))))
3255
3256 (defun update-indentation/delta-for-emacs (cache &optional force)
3257 "Update the cache and return the changes in a (SYMBOL . INDENT) list.
3258 If FORCE is true then check all symbols, otherwise only check symbols
3259 belonging to the buffer package."
3260 (let ((alist '()))
3261 (flet ((consider (symbol)
3262 (let ((indent (symbol-indentation symbol)))
3263 (when indent
3264 (unless (equal (gethash symbol cache) indent)
3265 (setf (gethash symbol cache) indent)
3266 (push (cons (string-downcase symbol) indent) alist))))))
3267 (if force
3268 (do-all-symbols (symbol)
3269 (consider symbol))
3270 (do-symbols (symbol *buffer-package*)
3271 (when (eq (symbol-package symbol) *buffer-package*)
3272 (consider symbol)))))
3273 alist))
3274
3275 (defun package-names (package)
3276 "Return the name and all nicknames of PACKAGE in a list."
3277 (cons (package-name package) (package-nicknames package)))
3278
3279 (defun cl-symbol-p (symbol)
3280 "Is SYMBOL a symbol in the COMMON-LISP package?"
3281 (eq (symbol-package symbol) cl-package))
3282
3283 (defun known-to-emacs-p (symbol)
3284 "Return true if Emacs has special rules for indenting SYMBOL."
3285 (cl-symbol-p symbol))
3286
3287 (defun symbol-indentation (symbol)
3288 "Return a form describing the indentation of SYMBOL.
3289 The form is to be used as the `common-lisp-indent-function' property
3290 in Emacs."
3291 (if (and (macro-function symbol)
3292 (not (known-to-emacs-p symbol)))
3293 (let ((arglist (arglist symbol)))
3294 (etypecase arglist
3295 ((member :not-available)
3296 nil)
3297 (list
3298 (macro-indentation arglist))))
3299 nil))
3300
3301 (defun macro-indentation (arglist)
3302 (if (well-formed-list-p arglist)
3303 (position '&body (remove '&whole arglist))
3304 nil))
3305
3306 (defun well-formed-list-p (list)
3307 "Is LIST a proper list terminated by NIL?"
3308 (typecase list
3309 (null t)
3310 (cons (well-formed-list-p (cdr list)))
3311 (t nil)))
3312
3313 (defun print-indentation-lossage (&optional (stream *standard-output*))
3314 "Return the list of symbols whose indentation styles collide incompatibly.
3315 Collisions are caused because package information is ignored."
3316 (let ((table (make-hash-table :test 'equal)))
3317 (flet ((name (s) (string-downcase (symbol-name s))))
3318 (do-all-symbols (s)
3319 (setf (gethash (name s) table)
3320 (cons s (symbol-indentation s))))
3321 (let ((collisions '()))
3322 (do-all-symbols (s)
3323 (let* ((entry (gethash (name s) table))
3324 (owner (car entry))
3325 (indent (cdr entry)))
3326 (unless (or (eq s owner)
3327 (equal (symbol-indentation s) indent)
3328 (and (not (fboundp s))
3329 (null (macro-function s))))
3330 (pushnew owner collisions)
3331 (pushnew s collisions))))
3332 (if (null collisions)
3333 (format stream "~&No worries!~%")
3334 (format stream "~&Symbols with collisions:~%~{ ~S~%~}"
3335 collisions))))))
3336
3337 (add-hook *pre-reply-hook* 'sync-indentation-to-emacs)
3338
3339 ;;; Local Variables:
3340 ;;; eval: (font-lock-add-keywords 'lisp-mode '(("(\\(defslimefun\\)\\s +\\(\\(\\w\\|\\s_\\)+\\)" (1 font-lock-keyword-face) (2 font-lock-function-name-face))))
3341 ;;; End:
Conrad Buck conartist6
@conartist6
conartist6 / 01.formatted.log
Created April 14, 2024 13:22
CSTML input stream processing
conartist6@hq bablr-cli % echo '"eat()"' | node bin/index.js -l @bablr/language-bablr-vm-instruction -p Call -f
<!cstml 'https://github.com/bablr-lang/language-bablr-vm-instruction'>
<>
<Call>
callee:
<*Identifier>
'eat'
</>
arguments:
<Tuple>
@conartist6
conartist6 / 01.script.js
Last active April 9, 2024 15:53
Successful BABLR template tag embedding
import * as language from '@bablr/language-cstml';
import { buildTag } from 'bablr/enhanceable';
const cstml = buildTag(language, 'Expression');
const tree = cstml`<> ${cstml`<*Word>${cstml`'ok'`}</>`} </>`;
@conartist6
conartist6 / 01.input.cstml
Last active April 1, 2024 17:28
Successful BABLR VM run with language embedding
<> </>
Input: `<> </>`
@conartist6
conartist6 / 0.input.cstml
Last active March 14, 2024 03:56
BABLR two-VM parse log
<></>
@conartist6
conartist6 / index.md
Last active November 29, 2023 15:10
agAST
What is agAST?
agAST is a generalized AST format for javascript. It is the format created for and used by the BABLR VM. It is meant to specify basic aspects of how tools store programs, so that common tools can be used to work on programs written in any programming language.
agAST is meant to be the successor to the existing ESTree specification. Unlike ESTree, agAST has no language-specific opinions.
agAST trees are made up of nodes of the following shape:
let node = {
@conartist6
conartist6 / parse-trampoline.js
Created November 10, 2023 18:35
BABLR-VM minimal parse trampoline
import { buildExpression, effectsFor } from './utils/instruction.js';
import { getCooked } from './utils/token.js';
import { Match } from './match.js';
import { Context } from './context.js';
import { Source } from './source.js';
import { runSync } from './run.js';
import { dispatcher } from './dispatcher.js';
import { transformTokenMatcher } from './transforms.generated.js';
const defer = Symbol('defer');
@conartist6
conartist6 / isobench.js
Created November 3, 2023 16:24
Iterator benchmark
import { IsoBench } from 'iso-bench';
const range = max => ({
*[Symbol.iterator]() {
for (let i = 0; i < max; i++) yield i;
}
});
globalThis.i = 0;
@conartist6
conartist6 / index.js
Last active October 22, 2023 15:17
Private/readonly class properties with weak maps
// To keep the data truly private, just don't export these
const privateMaps = {
secret: new WeakMap(),
constant: new WeakMap(),
}
export class Test {
constructor(secret, constant) {
privateMaps.secret.set(this, secret);
privateMaps.constant.set(this, constant);
@conartist6
conartist6 / index.js
Last active October 13, 2023 16:01
Prettier agAST example
// Node objects are immutable
// Also immutable are: properties, attributes, children, terminals, and any arrays
// Immutable trees can be cached as valid with regard to a particular grammar!
let freeze = (node) => Object.freeze(Object.seal(node));
// Helpers for constructing agAST trees
let t = {
// Tokens are really just nodes with non-ref children
token: (type, str, attributes = {}) => t.node(type, [t.str([str])], {}, attributes),
node: (type, children = [], properties = {}, attributes = {}) =>
Generating IOTA addresses from a seed: API call Get_new_addresses()
# The snippet is a part of the IOTA Developer Essentials project. You can reach it at https://hribek25.github.io/IOTA101/
# Complete description and story behind the snippet is available at: https://hribek25.github.io/IOTA101/Allchapters_python.ipynb.html#0FF479CB6C0A
# Requirement: PyOTA library (!pip install pyota)
import iota
from pprint import pprint
# For this particular task the Node URL is not important as it will not be contacted at all
# However it has to be well-formatted URI
NodeURL = "https://nodes.thetangle.org:443"
MySeed = b"WKQDUZTGFKSSLACUCHHLZRKZBHSDSCEBHKUPDLKFBQALEBKDMFRPUQGZRXAADPG9TSRTZGGBZOFRJCFMM"
api=iota.Iota(NodeURL,
seed = MySeed) # if you do not specify a seed, PyOTA library randomly generates one for you under the hood
# Let's generate 3 addresses using default security level=2.
# It is deterministic function - it always generates same addresses as long as the Seed, Security Level and Index are the same
result = api.get_new_addresses(index=0,
count=3,
security_level=2)
pprint(result)
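# Because generation is deterministic, a follow-up call can simply continue where
# the previous one stopped. This is only a usage sketch reusing the same api object;
# starting at index=3 is an assumption based on the three addresses generated above.
more_addresses = api.get_new_addresses(index=3,
                                       count=2,
                                       security_level=2)
pprint(more_addresses)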
Data Stores
Use Pipedream Data Stores to manage state throughout your workflows.
Go to site
Integrate the Data Stores API with the Airtable API
Setup the Data Stores API trigger to run a workflow which integrates with the Airtable API. Pipedream's integration platform allows you to integrate Data Stores and Airtable remarkably fast. Free for developers.
Connect Data Stores
export default defineComponent({
props: {
myDataStore: {
type: "data_store",
},
},
async run({ steps, $ }) {
await this.myDataStore.set("key_here","Any serializable JSON as the value")
return await this.myDataStore.get("key_here")
},
})
Connect Airtable
module.exports = defineComponent({
props: {
airtable: {
type: "app",
app: "airtable",
},
baseId: {
type: "$.airtable.baseId",
appProp: "airtable",
},
tableId: {
type: "$.airtable.tableId",
baseIdProp: "baseId",
},
},
async run({steps, $}) {
const Airtable = require('airtable');
const base = new Airtable({apiKey: this.airtable.$auth.api_key}).base(this.baseId);
const data = []
await base(this.tableId).select({
// pass optional config parameters here
}).eachPage(function page(records, fetchNextPage) {
// This function (`page`) will get called for each page of records.
records.forEach(function(record) {
data.push(record._rawJson)
});
// To fetch the next page of records, call `fetchNextPage`.
// If there are more records, `page` will get called again.
// If there are no more records, `done` will get called.
fetchNextPage();
})
return data
},
})
Community Posts
Airtable Webhooks with Slack + Pipedream
Airtable Webhooks with Slack + Pipedream
We love Airtable here at Pipedream. But Airtable lacks one killer feature — webhooks. We took advantage of Airtable’s Slack notifications and built a way to send webhooks to an HTTP endpoint every time a record is created or updated in a base. It takes 10 minutes to setup and it’s free.
Webhooks workaround using Slack
Webhooks workaround using Slack
This has been an open item in the Airtable support community for a few years, see "Webhooks for Records 393" and here 39, & here 28, and we are excited to offer a free solution to the community. Here is the high level solution: Based on the transitive property (A=B and B=C, then A=C), Airtable supports Slack Notifications, and Slack supports webhooks, so Airtable supports webhooks! You just have to know how to set it up correctly. With a simple integration via Pipedream into Slack, all Airtable users can now get near real time notifications of all updated fields and values in Airtable.
Overcoming Airtable Scripting Limitations with Pipedream
Overcoming Airtable Scripting Limitations with Pipedream
How I used Pipedream — Created a workflow having an API endpoint to convert Airtable’s Attachment Image URL to Base64 encode image (to push it to another API). Similarly, there could be a million other use-cases.
PhpStorm 2024.1 Help
Code Inspection: Unused variable
Reports unused arguments, variables, or parameters.
Example (PostgreSQL):
CREATE FUNCTION foo(PARAMUSED INT, PARAMUNUSED INT) RETURNS INT AS $$ BEGIN RETURN PARAMUSED; END $$ LANGUAGE plpgsql;
The PARAMUNUSED parameter is not used in the function and might be deleted.
Suppress an inspection in the editor
1. Place the caret at the highlighted line and press Alt+Enter or click the Intention action icon.
2. Click the arrow next to the inspection you want to suppress and select the necessary suppress action.
Last modified: 11 February 2024
Quadratic equation example
How do you write a quadratic equation?
A quadratic function is a polynomial function of degree 2. The vertex form of the quadratic function is: F(x) = a(x – h)^2 + k , where a is a constant and h and k are the x and y values of the vertex (h,k).
What are real life examples of quadratic equations?
Balls, Arrows, Missiles and Stones. When you throw a ball (or shoot an arrow, fire a missile or throw a stone) it goes up into the air, slowing as it travels, then comes down again faster and faster and a Quadratic Equation tells you its position at all times!
What is quadratic equation used for?
As well as being a formula that yields the zeros of any parabola, the quadratic formula can also be used to identify the axis of symmetry of the parabola, and the number of real zeros the quadratic equation contains.
How do you solve a quadratic equation in standard form?
The Quadratic Formula can be used to solve any quadratic equation of the form ax² + bx + c = 0. The form ax² + bx + c = 0 is called the standard form of a quadratic equation.
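As a quick illustration of the formula (a minimal sketch added here, not part of the original article), the roots can be computed directly from a, b and c; cmath keeps it working even when the discriminant is negative:
import cmath

def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c                      # discriminant
    return ((-b + cmath.sqrt(d)) / (2 * a),
            (-b - cmath.sqrt(d)) / (2 * a))

print(solve_quadratic(2, -4, -2))              # roots of 2x² - 4x - 2 = 0, about 2.414 and -0.414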
What are the 5 examples of quadratic equation?
Examples of Quadratic Equation6x² + 11x – 35 = 0.2x² – 4x – 2 = 0.-4x² – 7x +12 = 0.20x² -15x – 10 = 0.x² -x – 3 = 0.5x² – 2x – 9 = 0.3x² + 4x + 2 = 0.-x² +6x + 18 = 0.
What is a quadratic equation graph?
The graph of a quadratic function is a U-shaped curve called a parabola. The sign on the coefficient a of the quadratic function affects whether the graph opens up or down. If a<0 , the graph makes a frown (opens down) and if a>0 then the graph makes a smile (opens up).
Who uses quadratic equations in real life?
Quadratic equations are actually used in everyday life, as when calculating areas, determining a product’s profit or formulating the speed of an object. Quadratic equations refer to equations with at least one squared variable, with the most standard form being ax² + bx + c = 0.
Why do quadratic equations equal zero?
We use the zero product property when we solve quadratic equations. You may have noticed that we always manipulate quadratic equations to ax² + bx + c = 0. This is because factoring the equation gives us two expressions that multiply to zero. We can set each factor equal to zero and solve for x.
How do nurses use the quadratic equation?
Nurses use quadratic equation for calculating dosage of the patients, calculating drip rates, conversion between the systems, drugs titration etc.
How do quadratic equations work?
While factoring may not always be successful, the Quadratic Formula can always find the solution. The Quadratic Formula uses the “a”, “b”, and “c” from “ax2 + bx + c”, where “a”, “b”, and “c” are just numbers; they are the “numerical coefficients” of the quadratic equation they’ve given you to solve.
How do you simplify quadratic equations?
Completing the squarePut the equation into the form ax 2 + bx = – c.Make sure that a = 1 (if a ≠ 1, multiply through the equation by. before proceeding).Using the value of b from this new equation, add. Find the square root of both sides of the equation.Solve the resulting equation.
You might be interested: Michaelis menten equation
What are the 4 ways to solve quadratic equations?
The four methods of solving a quadratic equation are factoring, using the square roots, completing the square and the quadratic formula.
Perl to compare value and to print certain line
eminempark
Novice
May 29, 2013, 6:44 PM
Post #1 of 6 (1194 views)
Perl to compare value and to print certain line
For example I have a number.txt
The data is
Code
1 0.058
2 0.029
3 0.053
4 0.047
5 0.012
6 0.064
7 0.088
8 0.220
9 0.083
and the threshold is 0.072.
And I would like only to print out the ID and value if
Code
(1) it is the $cur_val = 1
(2)$cur_val-2*last_val+pre_print_val exceeds threshold. Mathematically is:
$cur_val-2*last_val+pre_print_val > th
(3) it is the previous and second previous value of the printed value.
For example: if I printed the value of 8, i need to print the value of 7 and 6.
The idea of the last printed value is that:
Code
1. if the last printed value == last value, then last printed value == 2nd last printed value.
2. if the last printed value != last value, then last printed value == last printed value.
So, the process are:
Code
1 0.058 (process: $ID = 1, thus print)
2 0.029 (process: abs(0.029-2*0.058+0)=0.087, thus print)
3 0.053 (process: abs(0.053-2*0.029+0.058)=0.053, thus not print)
4 0.047 (process: abs(0.047-2*0.053+0.029)=0.030, thus not print)
5 0.012 (process: abs(0.012-2*0.047+0.029)=0.053, thus not print)
6 0.064 (process: abs(0.064-2*0.012+0.029)=0.069, thus not print. But because 8 is printed out, 6 will be printed out as well)
7 0.088 (process: abs(0.088-2*0.064+0.029)=0.011, thus not print. But because 8 is printed out, 7 will be printed out as well)
8 0.220 (process: abs(0.220-2*0.088+0.029)=0.073, thus print)
9 0.083 (process: abs(0.083-2*0.220+0.029)=0.328, thus print)
Thus, the output will be:
Code
1 0.058
2 0.029
6 0.064
7 0.088
8 0.220
9 0.083
Any help is appreciated. I upload the excel file if what I write is not clear.Laugh
eminempark
Novice
May 30, 2013, 12:33 AM
Post #2 of 6 (1188 views)
Re: [eminempark] Perl to compare value and to print certain line [In reply to]
I have did some code
Code
#! /usr/bin/perl -w
#
# compare.pl
#
# Usage: compare.pl burst1.txt
#
#
use strict;
my $pre_x = 0;
my $pre_pri_x = 0;
my $pre_pre_pri_x = 0;
while(<>){
my( $ID, $x) = split;
if ($ID == 1)
{
printf ("%d %.3f %.3f %.3f\n", $ID, $x, $pre_x, $pre_pri_x);
$pre_pre_pri_x = $pre_pri_x;
$pre_pri_x = $pre_x;
$pre_x = $x;
}
elsif ($ID != 1)
{
my $vector = abs($x-2*$pre_x+$pre_pri_x);
if ($vector >= 0.072)
{
if ($pre_pri_x == $pre_x)
{
printf ("%d %.3f %.3f %.3f\n", $ID, $x, $pre_x, $pre_pri_x);
$pre_pri_x = $pre_pre_pri_x;
$pre_x=$x;
}
else
{
printf ("%d %.3f %.3f %.3f\n", $ID, $x, $pre_x, $pre_pri_x);
$pre_pri_x = $pre_pri_x;
$pre_x=$x;
}
}
else
{
$pre_pri_x = $pre_pri_x;
$pre_x = $x;
}
}
}
But the pre_pre_pri_x has give me wrong value.
It give me output
Code
1 0.058 0.000 0.000
2 0.029 0.058 0.000
5 0.012 0.047 0.000
9 0.083 0.220 0.000
(This post was edited by eminempark on May 30, 2013, 12:39 AM)
Laurent_R
Veteran / Moderator
May 30, 2013, 8:50 AM
Post #3 of 6 (1175 views)
Re: [eminempark] Perl to compare value and to print certain line [In reply to]
Hi,
I posted something yesterday, answering a similar question from you on another post, but no longer find it today. Hmm, strange...
Anyway, below is what I wrote yesterday
____
Hi,
I had a little bit of trouble understanding how you update the pre-pre-value, I think I have it right now. At least, it produces the same output as your example.
As I understand, the rules have changed quite a bit compared to the previous version:
- pre-value is always the value from the previous line
- the other one seems to be the previous match or the match before, depending on whether another line has been processed.
Code
use strict;
my $pre_value = 0;
my $pre_match = 0;
my $pre_pre_match = 0;
my ($antepenultimate, $previous_line) = ("", "");
while(<DATA>){
my( $ID, $cur_value) = split;
if ($ID == 1) {
print;
$pre_value = $cur_value;
$pre_match = $cur_value;
next;
}
if( abs($cur_value - 2 * $pre_value + $pre_pre_match) >= 0.072 ){
if ($previous_line) {
print $antepenultimate if defined $antepenultimate;
print $previous_line;
# $pre_pre_value = $pre_value; # this line is useless but keeping it conveys the logic better
# $pre_value = (split / /, $previous_line)[1];
}
print $_;
$pre_match = $cur_value;
$previous_line = "";
$antepenultimate = "";
} else {
$antepenultimate = $previous_line;
$previous_line = $_;
}
$pre_value = $cur_value;
$pre_pre_match = $pre_match;
}
__DATA__
1 0.058
2 0.029
3 0.053
4 0.047
5 0.012
6 0.064
7 0.088
8 0.220
9 0.083
The following is the output:
Code
$perl PREVALUE.PL
1 0.058
2 0.029
6 0.064
7 0.088
8 0.220
9 0.083
eminempark
Novice
May 30, 2013, 8:33 PM
Post #4 of 6 (1169 views)
Re: [Laurent_R] Perl to compare value and to print certain line [In reply to]
Hi, Laurent, first of all, thank you very much.
I would like to ask about the pre_match and pre_pre_match. Are the pre_match and pre_pre_match is the previous printed value and the previous previous printed value?
Laurent_R
Veteran / Moderator
May 31, 2013, 6:07 AM
Post #5 of 6 (1156 views)
Re: [eminempark] Perl to compare value and to print certain line [In reply to]
Yes, but only if it was printed because the calculated result satisfied the condition (not if it was printed because another later line is printed). This is the way I understood your example.
eminempark
Novice
Jun 2, 2013, 10:10 AM
Post #6 of 6 (1141 views)
Re: [Laurent_R] Perl to compare value and to print certain line [In reply to]
Dear Laurent_R, how if:
I would like only to print out the ID and value if
Code
(1) it is the $cur_val = 1
(2)$cur_val-2*last_val+pre_pri_val exceeds threshold. Mathematically is:
$cur_val-2*last_val+pre_pri_val > th
(3) it is the previous and second previous value of the printed value.
For example: if I printed the value of 8, i need to print the value of 7 and 6.
The idea of the last printed value is that:
Code
1. if the $last_ID - $ID == 1, then $pre_pri_val == last value.
2. if the $last_ID - $ID != 1, then $pre_pri_val == $pre_pri_val.
So, the process are:
Code
1 0.058 (process: $ID = 1, thus print)
2 0.029 (process: abs(0.029-2*0.058+0)=0.087, thus print)
3 0.053 (process: abs(0.053-2*0.029+0.058)=0.053, thus not print)
4 0.047 (process: abs(0.047-2*0.053+0.058)=0.001, thus not print. But be cause $ID = 6 is printed, $ID = 4 will be printed as well.)
5 0.012 (process: abs(0.012-2*0.047+0.058)=0.024, thus not print. But because $ID = 6 is printed, $ID = 5 will be printed as well.)
6 0.064 (process: abs(0.064-2*0.012+0.058)=0.098, thus print.)
7 0.088 (process: abs(0.088-2*0.064+0.012)=0.028, thus not print. But because $ID = 9 is printed, $ID = 7 will be printed as well.)
8 0.220 (process: abs(0.220-2*0.088+0.012)=0.056, thus not print. But because $ID = 9 is printed, $ID = 8 will be printed as well.)
9 0.083 (process: abs(0.083-2*0.220+0.012)=0.345, thus print)
Thus, the output will be:
Code
1 0.058
2 0.029
4 0.047
5 0.012
6 0.064
7 0.088
8 0.220
9 0.083
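To make sure the modified rules are read the same way, here is a small sketch (in Python rather than Perl, and only one interpretation of the update rule) that reproduces the worked example above; it assumes the pre_pri value is refreshed only when a row is printed by rule (1) or (2):
Code
TH = 0.072

def pick_rows(rows):                      # rows: list of (id, value) pairs
    printed = set()
    pre_pri = 0.0                         # "pre-previous" term in the formula
    last = 0.0                            # value of the immediately preceding row
    for idx, (rid, val) in enumerate(rows):
        hit = rid == 1 or abs(val - 2 * last + pre_pri) >= TH
        if hit:
            printed.add(idx)
            pre_pri = last                # assumed: refreshed only on a rule match
        last = val
    for idx in sorted(printed):           # rule (3): keep the two rows before each match
        printed |= {idx - 1, idx - 2}
    return [rows[i] for i in sorted(i for i in printed if i >= 0)]

data = [(1, 0.058), (2, 0.029), (3, 0.053), (4, 0.047), (5, 0.012),
        (6, 0.064), (7, 0.088), (8, 0.220), (9, 0.083)]
for rid, val in pick_rows(data):
    print(rid, format(val, ".3f"))        # prints IDs 1 2 4 5 6 7 8 9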
Implement Role Hierarchy with In-Memory Authentication
In order to configure a role hierarchy, you need to:
1. define a RoleHierarchy bean
2. define an expression handler that reads the role hierarchy
package com.ekiras.ss.config;
import org.springframework.context.annotation.Bean;
import org.springframework.security.access.expression.SecurityExpressionHandler;
import org.springframework.security.access.hierarchicalroles.RoleHierarchy;
import org.springframework.security.access.hierarchicalroles.RoleHierarchyImpl;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.web.FilterInvocation;
import org.springframework.security.web.access.expression.DefaultWebSecurityExpressionHandler;
/**
* @author ekiras
*/
@EnableWebSecurity
public class SpringSecurityConfigurer extends WebSecurityConfigurerAdapter{
private SecurityExpressionHandler<FilterInvocation> webExpressionHandler() {
DefaultWebSecurityExpressionHandler defaultWebSecurityExpressionHandler = new DefaultWebSecurityExpressionHandler();
defaultWebSecurityExpressionHandler.setRoleHierarchy(roleHierarchy());
return defaultWebSecurityExpressionHandler;
}
@Bean
public RoleHierarchy roleHierarchy(){
RoleHierarchyImpl roleHierarchy = new RoleHierarchyImpl();
roleHierarchy.setHierarchy("ADMIN > USER");
return roleHierarchy;
}
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
auth.inMemoryAuthentication()
.withUser("ekansh")
.password("password")
.authorities("USER", "ROLE");
auth.inMemoryAuthentication()
.withUser("admin")
.password("admin")
.authorities("ADMIN");
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.expressionHandler(webExpressionHandler())
.antMatchers("/admin/**").hasAuthority("ADMIN")
.antMatchers("/user/**").hasAuthority("USER")
.anyRequest().authenticated()
.and()
.formLogin()
.and()
.logout()
;
}
}
In the example above, we have made a role hierarchy where
1. ADMIN can access MODERATOR and USER roles,
2. MODERATOR can access USER roles.
3. USER can neither access MODERATOR nor ADMIN roles.
@Bean
public RoleHierarchy roleHierarchy(){
RoleHierarchyImpl roleHierarchy = new RoleHierarchyImpl();
roleHierarchy.setHierarchy("ADMIN > MODERATOR > USER");
return roleHierarchy;
}
This is an easy way to configure and manage roles and role permissions for making security groups.
MATLAB Answers
John
Calculating a weighted average of multiple matrices
Asked by John
on 24 Feb 2012
Latest activity Edited by Mona MK
on 24 Oct 2013
Hello,
I was hoping that perhaps somebody could help me out with a problem that I have. It is a relatively simple operation (a weighted average), just difficult to code.
Say I import 2 3x3 matrices into a cell array using the code below.
C = cell(2,1);
for ii = 1:2
C{ii} = importdata(['matrix' num2str(ii) '.txt']) ;
end
These matrices are transitional probability matrices.
I would like to calculate a weighted 'average' matrix of these 2 matrices. Unfortunately I cannot simply just calculate the average; it needs to be a weighted average.
Say for example the 2 matrices are:
0.0 0.5 0.5 2
A = 0 0 1 4
0 0.6 0.4 7
0.2 0.0 0.8 5
B = 0 0 0 0
0.3 0.0 0.7 3
The 4th column contains the number of counts for that particular row.
The weighted average formula would be
P(x) = P_A(x) x [1-B_N/(A_N+B_N)] + P_B(x) x [1-A_N/(A_N+B_N)]
Where,
P_A(x) represents the probability value in the same row at cell x for matrix A.
P_B(x) is the same thing for matrix B.
A_N is the total number of counts for the same row we are dealing with in matrix A
B_N is the total number of counts for the same row we are dealing with in matrix B
Using the numbers, the first cell would be:
= 0 x [1-5/(2+5)] + 0.2 x [1-2/(2+5)] = 0.14
So the full weighted average matrix would be
0.14 0.14 0.71
C= 0 0 1
0.1 0.4 0.5
I would really appreciate any help to code this as I'm fairly inexperienced with matlab. In practice I have about 100 matrices but I just presented 2 as an example.
Sincere Thanks
John
0 Comments
1 Answer
Answer by Andrei Bobrov
on 24 Feb 2012
Accepted answer
A =[ 0 0.5 0.5 2
0 0 1 4
0 0.6 0.4 7];
B =[ 0.2 0 0.8 5
0 0 0 0
0.3 0 0.7 3];
C1 = cat(3,A,B);
p1 = bsxfun(@rdivide,C1(:,end,:),sum(C1(:,end,:),3));
C = sum(bsxfun(@times,C1(:,1:end-1,:),p1),3)
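For readers outside MATLAB, the same row-count weighting generalises to any number of matrices; the sketch below is in Python/NumPy purely for comparison (variable names are mine, and it assumes no row has a zero total count across all matrices):
import numpy as np

A = np.array([[0.0, 0.5, 0.5, 2],
              [0.0, 0.0, 1.0, 4],
              [0.0, 0.6, 0.4, 7]])
B = np.array([[0.2, 0.0, 0.8, 5],
              [0.0, 0.0, 0.0, 0],
              [0.3, 0.0, 0.7, 3]])

stack = np.stack([A, B])                    # works for any list of matrices
counts = stack[:, :, -1]                    # last column: per-row counts
weights = counts / counts.sum(axis=0)       # row counts normalised across matrices
C = (stack[:, :, :-1] * weights[:, :, None]).sum(axis=0)
print(np.round(C, 2))                       # [[0.14 0.14 0.71] [0. 0. 1.] [0.09 0.42 0.49]]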
7 Comments
John
on 28 Feb 2012
Hello Andrei,
Thank you for your help with my weighted average problem.
I'm just wondering if the code for pl and C needs to be changed if I change the size of the matrices say from 3x3 to 9x9?
It seems to be working fine, but I thought I should double check with you.
Many thanks
John
Andrei Bobrov
on 28 Feb 2012
No, do not change
John
on 28 Feb 2012
Thanks
Text To Speech
Isaac SDK Text to Speech generates life-like speech from any text using deep learning networks. It can be used to develop voice UI applications. This pipeline runs natively on desktop systems and Jetson Xavier platforms and delivers low latency streaming performance. This allows applications to generate arbitrary audio messages on-the-fly instead of playing back pre-recorded audio files.
The Text to Speech pipeline is built with 3 codelets: Text To Mel, Torch Inference, and Tensor To Audio Decoder. The first component Text To Mel accepts a text string. The last component Tensor To Audio Decoder publishes audio samples which can be sent to Audio Playback.
Text To Mel
The Text to Mel codelet receives text as input and generates a corresponding Mel spectrogram as output. It uses the NVIDIA implementation of the Tacotron-2 Deep Learning network. The model maps a sequence of characters to a sequence of mel spectrums.
This codelet runs the model in streaming mode. Instead of publishing the full Mel spectrogram of the input, it publishes partial Mel spectrograms as they are generated. This helps reduce the overall latency as the partial data can be used by other components while the next set of partial data is being generated by TextToMel. This design effectively works as a pipeline and can allow real-time speech synthesis.
Limitations
Isaac SDK Text to Speech has the following limitations:
• The codelet supports input text as alphabets and ARPAbets (for well defined pronunciation).
Example 1 (alphabets): The quick brown fox jumps over the lazy dog.
Example 2 (ARPAbets): {HH AH0 L OW1} {AY1} {W AA1 N T} {T UW1} {G OW1} {T UW1} {S AE1 N T AH0}
{K L AE1 R AH0} {AH0 N D} {SH AA1 P} {TH IH1 NG Z} {AE1 T} {EH2 L AH0 M EH1 N T AH0 L}.
• Upper-case and lower-case alphabets, period (.), comma (,) and space are the only supported characters. Numbers and special characters are not supported.
• In ARPAbet, 39 phonemes and 3 lexical stress markers (0,1,2) for vowels are supported. This is the same phoneme set used in the CMU Pronouncing Dictionary. ARPAbet notation follows phonemes separated by space and words encased within curly brackets ({, }).
• Sentences should be grammatically correct including punctuations for more accurate pronunciation.
• Each input message can contain only one sentence. More sentences per message can lead to faulty pronunciations.
• Like every other deep learning model, a warmup iteration is required. The warmup iteration (first input message) initializes the GPU kernels used by the model and hence takes significantly longer time to process.
• Pronunciation of long words, such as claustrophobia and antidisestablishmentarianism, can go wrong towards the end of the word due to the model being optimized to run on device. It is recommended to avoid such words or use ARPAbet notation for them.
• Acronyms should be spelled as individual characters using ARPAbet. For example, "BMW" has to be spelled "{B IY} {EH M} {D AH B AH L Y UW}". A small helper for this is sketched after this list.
• Proper nouns might be spoken differently than their spelling. Hence it is recommended to use ARPAbet notation for proper nouns and uncommon words. For example, Nvidia can be written as {EH0 N V IH1 D IY0 AH0}.
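The following is a minimal, self-contained sketch (not part of the Isaac SDK API) showing how an acronym could be expanded letter by letter into ARPAbet before being sent to the Text to Mel codelet. The letter table is partial and hand-written, seeded from the "BMW" example above:
# Partial letter-to-ARPAbet table; extend as needed for other letters.
LETTER_ARPABET = {
    "A": "{EY}",
    "B": "{B IY}",
    "D": "{D IY}",
    "M": "{EH M}",
    "N": "{EH N}",
    "V": "{V IY}",
    "W": "{D AH B AH L Y UW}",
}

def spell_acronym(acronym):
    """Spell an acronym as space-separated ARPAbet words."""
    return " ".join(LETTER_ARPABET[letter] for letter in acronym.upper())

print(spell_acronym("BMW"))   # {B IY} {EH M} {D AH B AH L Y UW}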
Configuration Parameters
Parameter: session_timeout_value (type: double, default: 25.0)
Description: The session timeout value determines the maximum allowed time for a streaming session within a tick before being terminated. Termination results in the remaining part of the mel spectrogram not getting generated. The next input text message is processed normally in the next tick with a new session. This value indirectly constrains the maximum length of sentence that can be converted to speech. A larger value should be used for longer sentences; a smaller value generates only partial speech from longer sentences.
Messages
Message Proto Type Name
Input ChatMessageProto text
Output TensorListProto mel_spectrogram
Platforms: Desktop, Jetson Xavier
WaveGlow Vocoder (Torch Inference)
Vocoder converts the Mel spectrograms (frequency domain) into corresponding naturally sounding audio samples in time domain. The NVIDIA WaveGlow network is used as a vocoder to synthesize speech from mel spectrograms. The pre-trained WaveGlow model is loaded into Torch Inference codelet for execution.
Tensor To Audio Decoder
The Tensor to Audio Decoder codelet repackages the audio samples from TensorListProto to AudioDataProto. The audio data is passed on from the input to output without any modification. Only the metadata required in AudioDataProto is added from the configuration parameters.
Configuration Parameters
Parameter Description Default Type
sample_rate Sample rate of the audio received 22050 int
num_channels Number of channels of the audio received 1 int
Messages
Message Proto Type Name
Input TensorListProto tensors
Output AudioDataProto audio
The text_to_speech Sample Application
The text_to_speech sample application demonstrates the end-to-end pipeline of the text to speech feature.
The application has Send Text, Text to Mel, Vocoder (Torch inference), Tensor to Audio, and Audio Playback components connected in order. The Send Text component publishes a configured list of sentences in a loop with a 6-second interval between sentences. The Text to Mel component generates partial Mel spectrograms for these input sentences configured with a timeout of 25 seconds. The Torch Inference component with WaveGlow model converts these partial Mel spectrograms into audio samples. This component is configured to process unread buffered messages that are up to 8 seconds old. The audio samples generated by the WaveGlow Torch inference are converted from TensorListProto to AudioDataProto by the Tensor to Audio Decoder component. The AudioDataProto messages are played on the speaker by the Audio Playback component.
To use the application, connect a speaker or headphones to the host or device and set it as default audio output device in system settings. The configured text list can be heard as synthesized speech played through the audio output device.
Since this application runs two heavy deep learning networks, it is recommended to set Jetson Xavier to MaxN mode and max clock speed (as shown below) before launching the application on it.
ubuntu@Jetson_Xavier:~$ sudo nvpmodel -m 0
ubuntu@Jetson_Xavier:~$ sudo jetson_clocks
Platforms: Desktop, Jetson Xavier
Hardware: Any speaker/headphones
1
$\begingroup$
[figure: 24-byte generation method]
Is it possible to generate a 24-byte CMAC using AES-192 or AES-256, and if so, how?
According to AN10922 (the NXP application note, PDF), it looks like we can generate 24-byte and 32-byte keys using div constants?
$\endgroup$
1
• $\begingroup$ I didn't recall seeing that PDF in your initial revision of the question. Now that we know the semiconductor manufacturer NXP provide this sort of guidance, could you share with us, what environment are you developing on/for? $\endgroup$
– DannyNiu
Commented May 23 at 7:04
1 Answer 1
3
$\begingroup$
You can't. AES only has 16-byte (i.e. 128-bit) block size. The only alternatives would be to
• use Rijndael variants with wider blocks (the derivation of subkeys $K_1$ and $K_2$ will have to change to use other polynomials),
• iterate the final block through something like OFB (output feedback) mode.
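For reference, a minimal sketch with pycryptodome (assumed installed) that just illustrates the point: the CMAC tag length follows the cipher's block size, so even an AES-256 key yields a 16-byte tag.
from Crypto.Cipher import AES
from Crypto.Hash import CMAC

key = bytes(32)                          # dummy 256-bit key, for illustration only
mac = CMAC.new(key, ciphermod=AES)
mac.update(b"message to authenticate")
tag = mac.digest()
print(len(tag))                          # 16 bytes, regardless of the AES key size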
$\endgroup$
1
• 1
$\begingroup$ Just iterating the final block won't make it more secure than an 128-bit MAC, though. An option might be to use some modified construction which uses a wider state, but that won't be CMAC. $\endgroup$ Commented May 22 at 21:47
PerlMonks
Re^4: tr{}{} doesn't wanna work.. what am I doing wrong?
by ultranerds (Pilgrim)
on Feb 24, 2012 at 14:48 UTC
in reply to Re^3: tr{}{} doesn't wanna work.. what am I doing wrong?
in thread tr{}{} doesn't wanna work.. what am I doing wrong?
Thanks, that did the trick :) I needed to convert $filename:
utf8::decode($filename);
Working like a charm now - thanks! :)
path: root/cpukit/libmisc/capture/capture.c
blob: 1a5ef9b497f3c1351b55361247c06c0b334e19be (plain) (blame)
/*
------------------------------------------------------------------------
$Id$
------------------------------------------------------------------------
Copyright Objective Design Systems Pty Ltd, 2002
All rights reserved Objective Design Systems Pty Ltd, 2002
Chris Johns ([email protected])
COPYRIGHT (c) 1989-1998.
On-Line Applications Research Corporation (OAR).
The license and distribution terms for this file may be
found in the file LICENSE in this distribution.
This software is provided ``as is'' and with NO WARRANTY.
------------------------------------------------------------------------
RTEMS Performance Monitoring and Measurement Framework.
This is the Capture Engine component.
*/
#include <stdlib.h>
#include <string.h>
#include "capture.h"
#include <rtems/score/states.inl>
#include <rtems/score/wkspace.h>
#include <rtems/score/wkspace.inl>
/*
* These events are always recorded and are not part of the
* watch filters.
*/
#define RTEMS_CAPTURE_RECORD_EVENTS (RTEMS_CAPTURE_CREATED_BY_EVENT | \
RTEMS_CAPTURE_CREATED_EVENT | \
RTEMS_CAPTURE_STARTED_BY_EVENT | \
RTEMS_CAPTURE_STARTED_EVENT | \
RTEMS_CAPTURE_RESTARTED_BY_EVENT | \
RTEMS_CAPTURE_RESTARTED_EVENT | \
RTEMS_CAPTURE_DELETED_BY_EVENT | \
RTEMS_CAPTURE_DELETED_EVENT | \
RTEMS_CAPTURE_BEGIN_EVENT | \
RTEMS_CAPTURE_EXITTED_EVENT)
/*
* Global capture flags.
*/
#define RTEMS_CAPTURE_ON (1 << 0)
#define RTEMS_CAPTURE_NO_MEMORY (1 << 1)
#define RTEMS_CAPTURE_OVERFLOW (1 << 2)
#define RTEMS_CAPTURE_TRIGGERED (1 << 3)
#define RTEMS_CAPTURE_READER_ACTIVE (1 << 4)
#define RTEMS_CAPTURE_READER_WAITING (1 << 5)
#define RTEMS_CAPTURE_GLOBAL_WATCH (1 << 6)
/*
* RTEMS Capture Data.
*/
static rtems_capture_record_t* capture_records;
static rtems_unsigned32 capture_size;
static rtems_unsigned32 capture_count;
static rtems_capture_record_t* capture_in;
static rtems_unsigned32 capture_out;
static rtems_unsigned32 capture_flags;
static rtems_capture_task_t* capture_tasks;
static rtems_capture_control_t* capture_controls;
static int capture_extension_index;
static rtems_id capture_id;
static rtems_capture_timestamp capture_timestamp;
static rtems_task_priority capture_ceiling;
static rtems_task_priority capture_floor;
static rtems_unsigned32 capture_tick_period;
static rtems_id capture_reader;
/*
* RTEMS Event text.
*/
static const char* capture_event_text[] =
{
"CREATED_BY",
"CREATED",
"STARTED_BY",
"STARTED",
"RESTARTED_BY",
"RESTARTED",
"DELETED_BY",
"DELETED",
"BEGIN",
"EXITTED",
"SWITCHED_OUT",
"SWITCHED_IN",
"TIMESTAMP"
};
/*
* rtems_capture_get_time
*
* DESCRIPTION:
*
* This function returns the current time. If a handler is provided
* by the user get the time from that.
*/
static inline void rtems_capture_get_time (rtems_unsigned32* ticks,
rtems_unsigned32* tick_offset)
{
if (capture_timestamp)
capture_timestamp (ticks, tick_offset);
else
{
*ticks = _Watchdog_Ticks_since_boot;
*tick_offset = 0;
}
}
/*
* rtems_capture_match_names
*
* DESCRIPTION:
*
* This function compares rtems_names. It protects the
* capture engine from a change to the way names are supported
* in RTEMS.
*
*/
static inline rtems_boolean
rtems_capture_match_names (rtems_name lhs, rtems_name rhs)
{
return lhs == rhs;
}
/*
* rtems_capture_dup_name
*
* DESCRIPTION:
*
* This function duplicates an rtems_name. It protects the
* capture engine from a change to the way names are supported
* in RTEMS.
*
*/
static inline void
rtems_capture_dup_name (rtems_name* dst, rtems_name src)
{
*dst = src;
}
/*
* rtems_capture_name_in_group
*
* DESCRIPTION:
*
* This function sees if a name is in a group of names.
*
*/
static inline rtems_boolean
rtems_capture_name_in_group (rtems_name task, rtems_name* tasks)
{
if (tasks)
{
int i;
for (i = 0; i < RTEMS_CAPTURE_TRIGGER_TASKS; i++)
if (rtems_capture_match_names (task, *tasks++))
return 1;
}
return 0;
}
/*
* rtems_capture_match_name_id
*
* DESCRIPTION:
*
* This function matches a name and/or id.
*/
static inline rtems_boolean
rtems_capture_match_name_id (rtems_name lhs_name,
rtems_id lhs_id,
rtems_name rhs_name,
rtems_id rhs_id)
{
/*
* The left hand side name or id could be 0 which means a wildcard.
*/
if ((lhs_name == 0) && (lhs_id == rhs_id))
return 1;
else if ((lhs_id == 0) || (lhs_id == rhs_id))
{
if (rtems_capture_match_names (lhs_name, rhs_name))
return 1;
}
return 0;
}
/*
* rtems_capture_init_stack_usage
*
* DESCRIPTION:
*
* This function sets up a stack so its usage can be monitored.
*/
static inline void
rtems_capture_init_stack_usage (rtems_capture_task_t* task)
{
if (task->tcb)
{
rtems_unsigned32* s;
rtems_unsigned32 i;
task->stack_size = task->tcb->Start.Initial_stack.size;
task->stack_clean = task->stack_size;
s = task->tcb->Start.Initial_stack.area;
for (i = 0; i < (task->stack_size - 128); i += 4)
*(s++) = 0xdeaddead;
}
}
/*
* rtems_capture_find_control
*
* DESCRIPTION:
*
* This function searches for a trigger given a name.
*
*/
static inline rtems_capture_control_t*
rtems_capture_find_control (rtems_name name, rtems_id id)
{
rtems_capture_control_t* control;
for (control = capture_controls; control != NULL; control = control->next)
if (rtems_capture_match_name_id (name, id, control->name, control->id))
break;
return control;
}
/*
* rtems_capture_create_control
*
* DESCRIPTION:
*
* This function creates a capture control for the capture engine.
*
*/
static inline rtems_capture_control_t*
rtems_capture_create_control (rtems_name name, rtems_id id)
{
rtems_interrupt_level level;
rtems_capture_control_t* control;
rtems_capture_task_t* task;
if ((name == 0) && (id == 0))
return NULL;
control = rtems_capture_find_control (name, id);
if (control == NULL)
{
control = _Workspace_Allocate (sizeof (rtems_capture_control_t));
if (control == NULL)
{
capture_flags |= RTEMS_CAPTURE_NO_MEMORY;
return NULL;
}
control->name = name;
control->id = id;
control->flags = 0;
memset (control->from, 0, sizeof (control->from));
memset (control->from_id, 0, sizeof (control->from_id));
rtems_interrupt_disable (level);
control->next = capture_controls;
capture_controls = control;
/*
* We need to scan the task list and set the control on the
* tasks.
*/
for (task = capture_tasks; task != NULL; task = task->next)
if (rtems_capture_match_name_id (name, id, task->name, task->id))
task->control = control;
rtems_interrupt_enable (level);
}
return control;
}
/*
* rtems_capture_create_capture_task
*
* DESCRIPTION:
*
* This function creates the task control.
*
*/
static inline rtems_capture_task_t*
rtems_capture_create_capture_task (rtems_tcb* new_task)
{
rtems_interrupt_level level;
rtems_capture_task_t* task;
rtems_capture_control_t* control;
task = _Workspace_Allocate (sizeof (rtems_capture_task_t));
if (task == NULL)
{
capture_flags |= RTEMS_CAPTURE_NO_MEMORY;
return NULL;
}
rtems_capture_dup_name (&task->name, ((rtems_name) new_task->Object.name));
task->id = new_task->Object.id;
task->flags = 0;
task->in = 0;
task->out = 0;
task->tcb = new_task;
task->ticks = 0;
task->tick_offset = 0;
task->ticks_in = 0;
task->tick_offset_in = 0;
task->control = 0;
task->last_ticks = 0;
task->last_tick_offset = 0;
task->tcb->extensions[capture_extension_index] = task;
task->start_priority = new_task->Start.initial_priority;
task->stack_size = new_task->Start.Initial_stack.size;
task->stack_clean = task->stack_size;
rtems_interrupt_disable (level);
task->next = capture_tasks;
capture_tasks = task;
rtems_interrupt_enable (level);
/*
* We need to scan the default control list to initialise
* this control.
*/
for (control = capture_controls; control != NULL; control = control->next)
if (rtems_capture_match_name_id (control->name, control->id,
task->name, task->id))
task->control = control;
return task;
}
/*
* rtems_capture_record
*
* DESCRIPTION:
*
* This function records a capture record into the capture buffer.
*
*/
static inline void
rtems_capture_record (rtems_capture_task_t* task,
rtems_unsigned32 events)
{
/*
* Check the watch state if we have a task control, and
* the task's real priority is lower or equal to the ceiling.
*/
if (task)
{
rtems_capture_control_t* control;
control = task->control;
/*
* Capture the record if we have an event that is always
* captured, or the task's real priority is greater than the
* watch ceiling, and the global watch or task watch is enabled.
*/
if ((events & RTEMS_CAPTURE_RECORD_EVENTS) ||
((task->tcb->real_priority >= capture_ceiling) &&
(task->tcb->real_priority <= capture_floor) &&
((capture_flags & RTEMS_CAPTURE_GLOBAL_WATCH) ||
(control && (control->flags & RTEMS_CAPTURE_WATCH)))))
{
rtems_interrupt_level level;
rtems_interrupt_disable (level);
if (capture_count < capture_size)
{
capture_count++;
capture_in->task = task;
capture_in->events = (events |
(task->tcb->real_priority) |
(task->tcb->current_priority << 8));
if ((events & RTEMS_CAPTURE_RECORD_EVENTS) == 0)
task->flags |= RTEMS_CAPTURE_TRACED;
rtems_capture_get_time (&capture_in->ticks, &capture_in->tick_offset);
if (capture_in == &capture_records[capture_size - 1])
capture_in = capture_records;
else
capture_in++;
}
else
capture_flags |= RTEMS_CAPTURE_OVERFLOW;
rtems_interrupt_enable (level);
}
}
}
/*
* rtems_capture_create_task
*
* DESCRIPTION:
*
* This function is called when a task is created.
*
*/
static rtems_boolean
rtems_capture_create_task (rtems_tcb* current_task,
rtems_tcb* new_task)
{
rtems_capture_task_t* ct;
rtems_capture_task_t* nt;
ct = current_task->extensions[capture_extension_index];
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
if (ct == NULL)
ct = rtems_capture_create_capture_task (current_task);
/*
* Create the new task's capture control block.
*/
nt = rtems_capture_create_capture_task (new_task);
/*
* If we are logging then record this fact.
*/
rtems_capture_record (ct, RTEMS_CAPTURE_CREATED_BY_EVENT);
rtems_capture_record (nt, RTEMS_CAPTURE_CREATED_EVENT);
return 1 == 1;
}
/*
* rtems_capture_start_task
*
* DESCRIPTION:
*
* This function is called when a task is started.
*
*/
static rtems_extension
rtems_capture_start_task (rtems_tcb* current_task,
rtems_tcb* started_task)
{
/*
* Get the capture task control block so we can trace this
* event.
*/
rtems_capture_task_t* ct;
rtems_capture_task_t* st;
ct = current_task->extensions[capture_extension_index];
st = started_task->extensions[capture_extension_index];
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
if (ct == NULL)
ct = rtems_capture_create_capture_task (current_task);
if (st == NULL)
st = rtems_capture_create_capture_task (started_task);
rtems_capture_record (ct, RTEMS_CAPTURE_STARTED_BY_EVENT);
rtems_capture_record (st, RTEMS_CAPTURE_STARTED_EVENT);
rtems_capture_init_stack_usage (st);
}
/*
* rtems_capture_restart_task
*
* DESCRIPTION:
*
* This function is called when a task is restarted.
*
*/
static rtems_extension
rtems_capture_restart_task (rtems_tcb* current_task,
rtems_tcb* restarted_task)
{
/*
* Get the capture task control block so we can trace this
* event.
*/
rtems_capture_task_t* ct;
rtems_capture_task_t* rt;
ct = current_task->extensions[capture_extension_index];
rt = restarted_task->extensions[capture_extension_index];
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
if (ct == NULL)
ct = rtems_capture_create_capture_task (current_task);
if (rt == NULL)
rt = rtems_capture_create_capture_task (restarted_task);
rtems_capture_record (ct, RTEMS_CAPTURE_RESTARTED_BY_EVENT);
rtems_capture_record (rt, RTEMS_CAPTURE_RESTARTED_EVENT);
rtems_capture_task_stack_usage (rt);
rtems_capture_init_stack_usage (rt);
}
/*
* rtems_capture_delete_task
*
* DESCRIPTION:
*
* This function is called when a task is deleted.
*
*/
static rtems_extension
rtems_capture_delete_task (rtems_tcb* current_task,
rtems_tcb* deleted_task)
{
/*
* Get the capture task control block so we can trace this
* event.
*/
rtems_capture_task_t* ct;
rtems_capture_task_t* dt;
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
ct = current_task->extensions[capture_extension_index];
dt = deleted_task->extensions[capture_extension_index];
if (ct == NULL)
ct = rtems_capture_create_capture_task (current_task);
if (dt == NULL)
dt = rtems_capture_create_capture_task (deleted_task);
rtems_capture_record (ct, RTEMS_CAPTURE_DELETED_BY_EVENT);
rtems_capture_record (dt, RTEMS_CAPTURE_DELETED_EVENT);
rtems_capture_task_stack_usage (dt);
/*
* This task's tcb will be invalid.
*/
dt->tcb = 0;
}
/*
* rtems_capture_begin_task
*
* DESCRIPTION:
*
* This function is called when a task is begun.
*
*/
static rtems_extension
rtems_capture_begin_task (rtems_tcb* begin_task)
{
/*
* Get the capture task control block so we can trace this
* event.
*/
rtems_capture_task_t* bt;
bt = begin_task->extensions[capture_extension_index];
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
if (bt == NULL)
bt = rtems_capture_create_capture_task (begin_task);
rtems_capture_record (bt, RTEMS_CAPTURE_BEGIN_EVENT);
}
/*
* rtems_capture_exitted_task
*
* DESCRIPTION:
*
* This function is called when a task has exitted, that is,
* it returned rather than being deleted.
*
*/
static rtems_extension
rtems_capture_exitted_task (rtems_tcb* exitted_task)
{
/*
* Get the capture task control block so we can trace this
* event.
*/
rtems_capture_task_t* et;
et = exitted_task->extensions[capture_extension_index];
/*
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
if (et == NULL)
et = rtems_capture_create_capture_task (exitted_task);
rtems_capture_record (et, RTEMS_CAPTURE_EXITTED_EVENT);
rtems_capture_task_stack_usage (et);
}
/*
* rtems_capture_switch_task
*
* DESCRIPTION:
*
* This function is called when a context is switched.
*
*/
static rtems_extension
rtems_capture_switch_task (rtems_tcb* current_task,
rtems_tcb* heir_task)
{
/*
* Only perform context switch trace processing if tracing is
* enabled.
*/
if (capture_flags & RTEMS_CAPTURE_ON)
{
rtems_unsigned32 ticks;
rtems_unsigned32 tick_offset;
/*
* Get the capture task control block so we can update the
* reference and perform any watch or trigger functions.
* The task pointers may not be known as the task may have
* been created before the capture engine was opened. Add them.
*/
rtems_capture_task_t* ct;
rtems_capture_task_t* ht;
if (_States_Is_transient (current_task->current_state))
{
rtems_id ct_id = current_task->Object.id;
for (ct = capture_tasks; ct; ct = ct->next)
if (ct->id == ct_id)
break;
}
else
{
ct = current_task->extensions[capture_extension_index];
if (ct == NULL)
ct = rtems_capture_create_capture_task (current_task);
}
ht = heir_task->extensions[capture_extension_index];
if (ht == NULL)
ht = rtems_capture_create_capture_task (heir_task);
/*
* Update the execution time. Assume the tick will not overflow
* for now. This may need to change.
*/
rtems_capture_get_time (&ticks, &tick_offset);
/*
* We could end up with null pointers for both the current task
* and the heir task.
*/
if (ht)
{
ht->in++;
ht->ticks_in = ticks;
ht->tick_offset_in = tick_offset;
}
if (ct)
{
ct->out++;
ct->ticks += ticks - ct->ticks_in;
if (capture_timestamp)
{
tick_offset += capture_tick_period - ct->tick_offset_in;
if (tick_offset < capture_tick_period)
ct->tick_offset = tick_offset;
else
{
ct->ticks++;
ct->tick_offset = tick_offset - capture_tick_period;
}
}
else
{
ct->tick_offset += 100;
}
}
/*
* If we have not triggered then see if this is a trigger condition.
*/
if (!(capture_flags & RTEMS_CAPTURE_TRIGGERED))
{
rtems_capture_control_t* cc = NULL;
rtems_capture_control_t* hc = NULL;
if (ct)
{
cc = ct->control;
/*
* Check the current task for a TO_ANY trigger.
*/
if (cc && (cc->flags & RTEMS_CAPTURE_TO_ANY))
{
capture_flags |= RTEMS_CAPTURE_TRIGGERED;
goto triggered;
}
}
if (ht)
{
hc = ht->control;
/*
* Check the next task for a FROM_ANY.
*/
if (hc && (hc->flags & RTEMS_CAPTURE_FROM_ANY))
{
capture_flags |= RTEMS_CAPTURE_TRIGGERED;
goto triggered;
}
}
/*
* Check if the trigger is from the current task
* to the next task.
*/
if (cc && hc && (hc->flags & RTEMS_CAPTURE_FROM_TO))
if (rtems_capture_name_in_group (cc->name, hc->from))
{
capture_flags |= RTEMS_CAPTURE_TRIGGERED;
goto triggered;
}
}
else
{
triggered:
rtems_capture_record (ct, RTEMS_CAPTURE_SWITCHED_OUT_EVENT);
rtems_capture_record (ht, RTEMS_CAPTURE_SWITCHED_IN_EVENT);
}
}
}
/*
* rtems_capture_open
*
* DESCRIPTION:
*
* This function initialises the realtime capture engine allocating the trace
* buffer. It is assumed we have a working heap at this stage of initialisation.
*
*/
rtems_status_code
rtems_capture_open (rtems_unsigned32 size, rtems_capture_timestamp timestamp)
{
rtems_extensions_table capture_extensions;
rtems_name name;
rtems_status_code sc;
/*
* See if the capture engine is already open.
*/
if (capture_records)
return RTEMS_RESOURCE_IN_USE;
capture_records = malloc (size * sizeof (rtems_capture_record_t));
if (capture_records == NULL)
return RTEMS_NO_MEMORY;
capture_size = size;
capture_count = 0;
capture_in = capture_records;
capture_out = 0;
capture_flags = 0;
capture_tasks = NULL;
capture_ceiling = 0;
capture_floor = 255;
/*
* Create the extension table. This is copied so we
* can create it as a local.
*/
capture_extensions.thread_create = rtems_capture_create_task;
capture_extensions.thread_start = rtems_capture_start_task;
capture_extensions.thread_restart = rtems_capture_restart_task;
capture_extensions.thread_delete = rtems_capture_delete_task;
capture_extensions.thread_switch = rtems_capture_switch_task;
capture_extensions.thread_begin = rtems_capture_begin_task;
capture_extensions.thread_exitted = rtems_capture_exitted_task;
capture_extensions.fatal = NULL;
/*
* Get the tick period from the BSP Configuration Table.
*/
capture_tick_period = _Configuration_Table->microseconds_per_tick;
/*
* Register the user extension handlers for the CAPture Engine.
*/
name = rtems_build_name ('C', 'A', 'P', 'E');
sc = rtems_extension_create (name, &capture_extensions, &capture_id);
if (sc != RTEMS_SUCCESSFUL)
{
capture_id = 0;
free (capture_records);
capture_records = NULL;
}
else
{
capture_extension_index = rtems_get_index (capture_id);
}
/*
* Iterate over the list of existing tasks.
*/
return sc;
}
/*
* rtems_capture_close
*
* DESCRIPTION:
*
* This function shuts down the capture engine and releases any claimed
* resources.
*/
rtems_status_code
rtems_capture_close ()
{
rtems_interrupt_level level;
rtems_capture_task_t* task;
rtems_capture_control_t* control;
rtems_capture_record_t* records;
rtems_status_code sc;
rtems_interrupt_disable (level);
if (!capture_records)
{
rtems_interrupt_enable (level);
return RTEMS_SUCCESSFUL;
}
capture_flags &= ~RTEMS_CAPTURE_ON;
records = capture_records;
capture_records = NULL;
rtems_interrupt_enable (level);
/*
* Delete the extension first. This means we are now able to
* release the resources we have without them being used.
*/
sc = rtems_extension_delete (capture_id);
if (sc != RTEMS_SUCCESSFUL)
return sc;
task = capture_tasks;
while (task)
{
rtems_capture_task_t* delete = task;
task = task->next;
_Workspace_Free (delete);
}
capture_tasks = NULL;
control = capture_controls;
while (control)
{
rtems_capture_control_t* delete = control;
control = control->next;
_Workspace_Free (delete);
}
capture_controls = NULL;
if (records)
{
free (records);
records = NULL;
}
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_control
*
* DESCRIPTION:
*
* This function allows control of tracing at a global level.
*/
rtems_status_code
rtems_capture_control (rtems_boolean enable)
{
rtems_interrupt_level level;
rtems_interrupt_disable (level);
if (!capture_records)
{
rtems_interrupt_enable (level);
return RTEMS_UNSATISFIED;
}
if (enable)
capture_flags |= RTEMS_CAPTURE_ON;
else
capture_flags &= ~RTEMS_CAPTURE_ON;
rtems_interrupt_enable (level);
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_flush
*
* DESCRIPTION:
*
* This function flushes the capture buffer. The prime parameter allows the
* capture engine to also be primed again.
*/
rtems_status_code
rtems_capture_flush (rtems_boolean prime)
{
rtems_interrupt_level level;
rtems_capture_task_t* task;
rtems_interrupt_disable (level);
for (task = capture_tasks; task != NULL; task = task->next)
task->flags &= ~RTEMS_CAPTURE_TRACED;
if (prime)
capture_flags &= ~(RTEMS_CAPTURE_TRIGGERED | RTEMS_CAPTURE_OVERFLOW);
else
capture_flags &= ~RTEMS_CAPTURE_OVERFLOW;
capture_in = capture_records;
capture_out = 0;
rtems_interrupt_enable (level);
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_watch_add
*
* DESCRIPTION:
*
* This function defines a watch for a specific task given a name. A watch
* causes it to be traced either in or out of context. The watch can be
* optionally enabled or disabled with the set routine. It is disabled by
* default.
*/
rtems_status_code
rtems_capture_watch_add (rtems_name name, rtems_id id)
{
rtems_capture_control_t* control;
if ((name == 0) && (id == 0))
return RTEMS_UNSATISFIED;
control = rtems_capture_find_control (name, id);
if (control && !id)
return RTEMS_TOO_MANY;
if (!control)
control = rtems_capture_create_control (name, id);
if (!control)
return RTEMS_NO_MEMORY;
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_watch_del
*
* DESCRIPTION:
*
* This function removes a watch for a specific task given a name. The task
* description will still exist if referenced by a trace record in the trace
* buffer or a global watch is defined.
*/
rtems_status_code
rtems_capture_watch_del (rtems_name name, rtems_id id)
{
rtems_interrupt_level level;
rtems_capture_control_t* control;
rtems_capture_control_t** prev_control;
rtems_capture_task_t* task;
rtems_boolean found = 0;
/*
* Should this test be for wildcards ?
*/
for (prev_control = &capture_controls, control = capture_controls;
control != NULL; )
{
if (rtems_capture_match_name_id (name, id, control->name, control->id))
{
rtems_interrupt_disable (level);
for (task = capture_tasks; task != NULL; task = task->next)
if (task->control == control)
task->control = 0;
*prev_control = control->next;
rtems_interrupt_enable (level);
_Workspace_Free (control);
control = *prev_control;
found = 1;
}
else
{
prev_control = &control->next;
control = control->next;
}
}
if (found)
return RTEMS_SUCCESSFUL;
return RTEMS_INVALID_NAME;
}
/*
* rtems_capture_watch_set
*
* DESCRIPTION:
*
* This function allows control of a watch. The watch can be enabled or
* disabled.
*/
rtems_status_code
rtems_capture_watch_ctrl (rtems_name name, rtems_id id, rtems_boolean enable)
{
rtems_interrupt_level level;
rtems_capture_control_t* control;
rtems_boolean found = 0;
/*
* Find the control and then set the watch. It must exist before it can
* be controlled.
*/
for (control = capture_controls; control != NULL; control = control->next)
{
if (rtems_capture_match_name_id (name, id, control->name, control->id))
{
rtems_interrupt_disable (level);
if (enable)
control->flags |= RTEMS_CAPTURE_WATCH;
else
control->flags &= ~RTEMS_CAPTURE_WATCH;
rtems_interrupt_enable (level);
found = 1;
}
}
if (found)
return RTEMS_SUCCESSFUL;
return RTEMS_INVALID_NAME;
}
/*
* rtems_capture_watch_global
*
* DESCRIPTION:
*
* This function allows control of a global watch. The watch can be enabled or
* disabled. A global watch configures all tasks below the ceiling and above
* the floor to be traced.
*/
rtems_status_code
rtems_capture_watch_global (rtems_boolean enable)
{
rtems_interrupt_level level;
rtems_interrupt_disable (level);
/*
* We need to keep specific and global watches separate so
* a global enable/disable does not lose a specific watch.
*/
if (enable)
capture_flags |= RTEMS_CAPTURE_GLOBAL_WATCH;
else
capture_flags &= ~RTEMS_CAPTURE_GLOBAL_WATCH;
rtems_interrupt_enable (level);
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_watch_global_on
*
* DESCRIPTION:
*
* This function returns the global watch state.
*/
rtems_boolean
rtems_capture_watch_global_on ()
{
return capture_flags & RTEMS_CAPTURE_GLOBAL_WATCH ? 1 : 0;
}
/*
* rtems_capture_watch_ceiling
*
* DESCRIPTION:
*
* This function sets a watch ceiling. Tasks at or greater than the
* ceiling priority are not watched. This is a simple way to monitor
* an application and exclude system tasks running at a higher
* priority level.
*/
rtems_status_code
rtems_capture_watch_ceiling (rtems_task_priority ceiling)
{
capture_ceiling = ceiling;
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_watch_get_ceiling
*
* DESCRIPTION:
*
* This function gets the watch ceiling.
*/
rtems_task_priority
rtems_capture_watch_get_ceiling ()
{
return capture_ceiling;
}
/*
* rtems_capture_watch_floor
*
* DESCRIPTION:
*
* This function sets a watch floor. Tasks at or less than the
* floor priority are not watched. This is a simple way to monitor
* an application and exclude system tasks running at a lower
* priority level.
*/
rtems_status_code
rtems_capture_watch_floor (rtems_task_priority floor)
{
capture_floor = floor;
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_watch_get_floor
*
* DESCRIPTION:
*
* This function gets the watch floor.
*/
rtems_task_priority
rtems_capture_watch_get_floor ()
{
return capture_floor;
}
/*
* rtems_capture_set_trigger
*
* DESCRIPTION:
*
* This function sets an edge trigger. Left is the left side of
* the edge and right is right side of the edge. The trigger type
* can be -
*
* FROM_ANY : a switch from any task to the right side of the edge.
* TO_ANY : a switch from the left side of the edge to any task.
* FROM_TO : a switch from the left side of the edge to the right
* side of the edge.
*
* This set trigger routine will create a capture control for the
* target task. The task list is searched and any existing tasks
* are linked to the new control.
*
* We can have a number of tasks that have the same name so we
* search using names. This means a number of tasks can be
* linked to single control.
*/
rtems_status_code
rtems_capture_set_trigger (rtems_name from,
rtems_id from_id,
rtems_name to,
rtems_id to_id,
rtems_capture_trigger_t trigger)
{
rtems_capture_control_t* control;
int i;
/*
* Find the capture control blocks for the from and to
* tasks.
*/
if (trigger == rtems_capture_to_any)
{
control = rtems_capture_create_control (from, from_id);
if (control == NULL)
return RTEMS_NO_MEMORY;
control->flags |= RTEMS_CAPTURE_TO_ANY;
}
if ((trigger == rtems_capture_from_to) ||
(trigger == rtems_capture_from_any))
{
control = rtems_capture_create_control (to, to_id);
if (control == NULL)
return RTEMS_NO_MEMORY;
if (trigger == rtems_capture_from_any)
control->flags |= RTEMS_CAPTURE_FROM_ANY;
else
{
control->flags |= RTEMS_CAPTURE_FROM_TO;
for (i = 0; i < RTEMS_CAPTURE_TRIGGER_TASKS; i++)
{
if (control->from[i] == 0)
{
control->from[i] = from;
control->from_id[i] = from_id;
break;
}
}
}
}
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_read
*
* DESCRIPTION:
*
* This function reads a number of records from the capture buffer.
* The user can optionally block and wait until the buffer has a
* specific number of records available or a specific time has
* elapsed.
*
* The function returns the number of records it has that are
* in a continuous block of memory. If the number of available records
* wrap then only those records are provided. This removes the need for
* the caller to be concerned about buffer wrapping. If the number of
* requested records cannot be met due to the wrapping of the records
* less than the specified number will be returned.
*
* The user must release the records. This is achieved with a call to
* rtems_capture_release. Calling this function again without a release
* will result in at least the same number of records being returned.
*
* The 'threshold' parameter is the number of records that must be
* captured before returning. If a timeout period is specified (non-0)
* any captured records will be returned. These parameters stop
* thrashing occurring for a small number of records, yet allows
* a user-configured latency to be applied for single events.
*
* The 'timeout' parameter is in micro-seconds. A value of 0 will disable
* the timeout.
*
*/
rtems_status_code
rtems_capture_read (rtems_unsigned32 threshold,
rtems_unsigned32 timeout,
rtems_unsigned32* read,
rtems_capture_record_t** recs)
{
rtems_interrupt_level level;
rtems_status_code sc = RTEMS_SUCCESSFUL;
rtems_unsigned32 count;
*read = 0;
*recs = NULL;
rtems_interrupt_disable (level);
/*
* Only one reader is allowed.
*/
if (capture_flags & RTEMS_CAPTURE_READER_ACTIVE)
{
rtems_interrupt_enable (level);
return RTEMS_RESOURCE_IN_USE;
}
capture_flags |= RTEMS_CAPTURE_READER_ACTIVE;
*read = count = capture_count;
rtems_interrupt_enable (level);
*recs = &capture_records[capture_out];
for (;;)
{
/*
* See if the count wraps the end of the record buffer.
*/
if (count && ((capture_out + count) >= capture_size))
*read = capture_size - capture_out;
/*
* Do we have a threshold and the current count has not wrapped
* around the end of the capture record buffer ?
*/
if ((*read == count) && threshold)
{
/*
* Do we have enough records ?
*/
if (*read < threshold)
{
rtems_event_set event_out;
rtems_task_ident (RTEMS_SELF, RTEMS_LOCAL, &capture_reader);
rtems_interrupt_disable (level);
capture_flags |= RTEMS_CAPTURE_READER_WAITING;
rtems_interrupt_enable (level);
sc = rtems_event_receive (RTEMS_EVENT_0,
RTEMS_WAIT | RTEMS_EVENT_ANY,
TOD_MICROSECONDS_TO_TICKS (timeout),
&event_out);
/*
* Let the user handle all other sorts of errors. This may
* not be the best solution, but oh well, it will do for
* now.
*/
if ((sc != RTEMS_SUCCESSFUL) && (sc != RTEMS_TIMEOUT))
break;
rtems_interrupt_disable (level);
*read = count = capture_count;
rtems_interrupt_enable (level);
continue;
}
}
/*
* Always out if we reach here. To loop use continue.
*/
break;
}
rtems_interrupt_disable (level);
capture_flags &= ~RTEMS_CAPTURE_READER_ACTIVE;
rtems_interrupt_enable (level);
return sc;
}
/*
* rtems_capture_release
*
* DESCRIPTION:
*
* This function releases the requested number of record slots back
* to the capture engine. The count must match the number read.
*/
rtems_status_code
rtems_capture_release (rtems_unsigned32 count)
{
rtems_interrupt_level level;
rtems_interrupt_disable (level);
if (count > capture_count)
count = capture_count;
capture_count -= count;
capture_out = (capture_out + count) % capture_size;
rtems_interrupt_enable (level);
return RTEMS_SUCCESSFUL;
}
/*
* rtems_capture_tick_time
*
* DESCRIPTION:
*
* This function returns the tick period in nano-seconds.
*/
rtems_unsigned32
rtems_capture_tick_time ()
{
return capture_tick_period;
}
/*
* rtems_capture_event_text
*
* DESCRIPTION:
*
* This function returns a string for an event based on the bit in the
* event. The functions takes the bit offset as a number not the bit
* set in a bit map.
*/
const char*
rtems_capture_event_text (int event)
{
if ((event < RTEMS_CAPTURE_EVENT_START) || (event > RTEMS_CAPTURE_EVENT_END))
return "invalid event id";
return capture_event_text[event - RTEMS_CAPTURE_EVENT_START];
}
/*
* rtems_capture_get_task_list
*
* DESCRIPTION:
*
* This function returns the head of the list of tasks that the
* capture engine has detected.
*/
rtems_capture_task_t*
rtems_capture_get_task_list ()
{
return capture_tasks;
}
/*
* rtems_capture_task_stack_usage
*
* DESCRIPTION:
*
* This function updates the stack usage. The task control block
* is updated.
*/
rtems_unsigned32
rtems_capture_task_stack_usage (rtems_capture_task_t* task)
{
if (task->tcb)
{
rtems_unsigned32* st;
rtems_unsigned32* s;
/*
* @todo: Assumes all stacks move the same way.
*/
st = task->tcb->Start.Initial_stack.area + task->stack_size;
s = task->tcb->Start.Initial_stack.area;
while (s < st)
{
if (*s != 0xdeaddead)
break;
s++;
}
task->stack_clean =
s - (rtems_unsigned32*) task->tcb->Start.Initial_stack.area;
}
return task->stack_clean;
}
/*
* rtems_capture_get_control_list
*
* DESCRIPTION:
*
* This function returns the head of the list of control in the
* capture engine.
*/
rtems_capture_control_t*
rtems_capture_get_control_list ()
{
return capture_controls;
}
I'm trying to unlock the 4th core of an AMD Athlon II x3 450 CPU on an Asus motherboard with an American Megatrends BIOS. However, after I enable Unleashed! mode in the BIOS and restart the computer, the monitor doesn't turn on; it stays in power saving mode.
After restarting the computer again, I get the BIOS error: Activating Unleashed mode failed. Press F1 to load setup, F2 to load default settings. I tried turning on Unleashed mode and leaving everything else on auto, and also manually setting the cores — same result. I also get the same outcome when activating the ASUS core unlocker. The thing that got my attention is that the 'Core value', I think it's called something like that, is set to -2% for all cores, and I'm not sure what it means or whether it's related.
So, did I just have bad luck and get a CPU with a malfunctioning 4th core, or am I doing something wrong?
EDIT: info you asked for: Motherboard Asus M5A78L-M LX. Couldn't get that from CPU-Z or Everest; it took me a couple of restarts to finally write down the model. It's not a brand-name computer, so it was manually assembled. It's ~1.5 yrs old. Don't get me wrong - it works just fine with 3 cores, and it's meant to work with 3 cores, but I hoped I could unlock the 4th one too.
More information is needed: What is the model number of the motherboard? If you don't have that, what's the model of computer? Are these brand new, or are they "new to you"? – Taegost Jul 19 '13 at 20:40
Added information you asked for in question. – Luke Jul 20 '13 at 8:32
Importing data
WebCmsModule provides infrastructure for easy importing of data using YAML files. The data import services are also the ones used for the default data of WebCmsModule. See the default data section for the YAML version of the default types and assets.
WebCmsDataImportService
The WebCmsDataImportService allows importing data from Map structures. The Map based data could for example be sourced from YAML, JSON or XML files.
When converting to the domain model, the WebCmsDataImportService will dispatch to WebCmsDataImporter beans for handling data items. The WebCmsDataConversionService is used to convert from raw values (for example String) to the actual data type (for example WebCmsPage).
Example importing from YAML file data
Yaml yaml = new Yaml();
Map<String, Object> data = (Map<String, Object>) yaml.load( inputStream );
dataImportService.importData( data );
Using an installer
The AbstractWebCmsDataInstaller is a base class for an installer that imports one or more YAML files using the WebCmsDataImportService. The files specified will be imported in registration order.
Example YAML data installer
@Installer(description = "Install some test data", phase = InstallerPhase.AfterContextBootstrap, version = 1)
@RequiredArgsConstructor
public class TestDataInstaller extends AbstractWebCmsDataInstaller
{
@Override
protected void registerResources( List<String> locations ) {
locations.add( "classpath:installers/test-data/my-types.yml" );
locations.add( "classpath:installers/test-data/my-assets.yml" );
}
}
(MANA) - Decentraland
(MANA) – Decentraland
What is Decentraland?
Decentraland is a virtual-reality platform hosted on the Ethereum blockchain, and MANA is the currency used for in-world purchases of virtual LAND parcels, as well as other goods and services that will be published there. Once purchased, landowners can choose what content to publish on their land, content can range from static 3D scenes to interactive content such as games. Imagine for a moment walking through a virtual world, let’s say that you’re looking for a streaming service to watch, and there is a lane of Kiosks in the world where you can interact with virtual representatives for an app service like Netflix or Hulu. They could inform you on services, prices, options, what content they have, in an interactive manner that hasn’t been seen. Shop for games in a virtual world where you could even try them out by walking into a “game portal” and entering a type of “trial world”, there are endless possibilities.
The global virtual reality market while relatively small at the moment, is projected to rapidly grow into a $40 billion market by 2020. Currently, users spend more time than ever in virtual worlds, but this takes place in 2D interfaces such as the web, and mobile phones, in the coming years this will be primarily accomplished by means of virtual reality and augmented reality as the technology continues to rapidly advance.
How does Decentraland work?
-Consensus Layer
Land is stored within the consensus layer, where the land content is accessed through a hash of the file's content in the blockchain. This content is then downloaded from BitTorrent or IPFS, and the file contains a full description of the content on that land such as objects, textures, sounds, and the various other elements needed to render the scene. Furthermore, the file contains a URL to a server that will establish P2P connections allowing users to explore the content simultaneously.
Additionally, land ownership is maintained via Ethereum smart contracts, with the digital assets referred to as LAND. Each piece of LAND has unique corresponding coordinates (x,y), an owner, and a reference file leading to the content description that stores the data the LAND owner wants to display. Native Decentraland clients will connect to the Ethereum Network, allowing LAND updates to be fetched and displayed in accordance with the smart contract specifications.
-Content Distribution Layer
For each LAND parcel that needs to be rendered in-world, a corresponding file with the description of the content is retrieved from the smart contract. Currently, BitTorrent and Kademlia DHT networks store the magnet links for each parcel, but IPFS also provides a potential alternative as the technology advances. Because Decentraland is hosted in this manner, it removes the need for a centralized server infrastructure. Additionally, this system structure provides strong censorship resistance, meaning no central authority can change the rules or prevent users from participating in the system. However, the bandwidth necessary to host the files and serve the content on-demand comes with significant costs. Currently, users are seeding the content for free, but in the future this cost can potentially be covered by protocols such as
Filecoin. Until then, micropayments can be used as a substitute to pay for quality of service.
– Real-Time Layer
Landowners will ideally host servers to allow for P2P connections to be established and connect over a network. Because there will be no centralized server infrastructure, P2P connections will be required for users to connect in real-time. Additionally, this provides landowners with the incentive to host these servers otherwise users will not be able to see each other in the owners LAND parcel. Furthermore, this connectivity provides users with a social experience allowing them to have avatars in-world, display the positioning of avatars, voice chat, messaging, and other interactions with the virtual environment.
What are MANA and LAND tokens used for?
The next stage of Decentraland's roadmap is the Iron Age. At that time, Decentraland will fully introduce two digital assets into their ecosystem: LAND, the non-fungible parcels into which the virtual world is divided, and MANA, an ERC-20 token that is burned to claim LAND parcels, as well as to make in-world purchases of goods and services hosted within Decentraland. The main utility of LAND is its ability to host content such as applications, games, services, and other content. Developers will need to purchase LAND in order to build on top of it, thus allowing them to reach their target audience through an interactive virtual platform.
However, all LAND can be purchased at the same flat rate (1000 MANA = 1 LAND). LAND parcels are nevertheless distinguishable from each other; because of this, LAND parcels may trade at different prices on a secondary market due to differences in location or adjacency to more or less popular hubs in-world. For example, you would most likely want to build your content in a busy area with other services present and lots of end-users, as opposed to building your content in an empty zone or an area lacking traffic and competition. As for MANA tokens, their value is intrinsically tied to the purchasing of LAND, goods, and services in Decentraland.
What use-cases are present for Decentraland?
• Applications – The scripting language for Decentraland will allow the development of applications, games, gambling, and other dynamic 3D scenes. Their scripting language will be designed with the ability to handle various programming tasks including object creation, loading textures, handling physics, encoding user interactions, sounds, payments, and many other applications will be possible as well.
• Content Curation – Users in Decentraland will gather around virtual neighborhoods of shared interest. Being located in high-traffic hubs will help drive users to the content of landowners.
• Advertising – Industry brands and market leaders will have the option to advertise using virtual billboards near, or in, high-traffic land parcels to promote their own products, services, and events. In this way, some neighborhoods may even choose to become virtual version of Times Square in New York City.
• Digital Collectibles – Decentraland expects that users will publish, distribute, and collect rare digital assets issued on the blockchain by creators. This will open up the opportunity for digital assets to be traded within the virtual world through the scripting system. For example, Etheremon has partnered with Decentraland for their digital collectibles.
• Other use cases – Because there are no technical specifications to what can be built in Decentraland there’s a broad channel open for more use cases to emerge in time such as professional development, education, therapy, 3D design, Virtual Tourism, and many more.
What challenges are present for Decentraland?
Traditionally, decentralized content distribution has two inherent issues. The first involves download speed, due to slow file retrieval in slower legacy systems and specifically in graphical applications such as Decentraland. The second issue is availability, meaning that content must be sufficiently distributed across the network without loss of data. However, IPFS and the Filecoin protocol will be tackling these issues and Decentraland plans to implement them once they become ready. Secondly, scripting APIs for Decentraland will need to be secure enough for clients to hold private keys and authorize micropayments frequently. Lastly, content curation could pose a problem in regard to issues created by filtering content intended for mature audiences, which is difficult to solve in a decentralized network. However, they plan to take a reputation-based approach to this, in which users can select one or more providers from a whitelist/blacklist that tracks the type of content being served on the LAND parcels.
Demo video example:
Sources/Links
Website: Click to see website
Discord: Click Here
Twitter: Click Here
Reddit: Click Here
Facebook: Click Here
Thank you for reading Cyber Lion Weekly!
* Support Us *
BTC: 1JZdqpJPMG6TBhfGYjgy59sVRbzgHLC9Ju
ETH: 0x4E932705F5A824f7e071FBE6FE2347fed4A3688a
LTC: LLQSLu2E3vit8Ssi8wfTyZdbLVj3EXkehw
PayPal: [email protected]
(MANA) - Decentraland - August 2nd 2018
© Copyright 2018 - Cyber Lion Weekly
CBSE Class 10-Mathematics: Chapter – 9 Some Applications of Trigonometry Part 15 (For CBSE, ICSE, IAS, NET, NRA 2023)
3 Mark Questions
Question 1:
A 1.5 m tall boy is standing at some distance from a 30 m tall building. The angle of elevation from his eyes to the top of the building increases from 30° to 60° as he walks towards the building. Find the distance he walked towards the building.
Tall Boy is Standing at Some Distance
Answer:
Let A be the top of the building and let the horizontal line through the boy's eyes meet the building at C, so that AC = 30 − 1.5 = 28.5 m. Let D be the boy's first position and P his position after walking towards the building, so that ∠ADC = 30° and ∠APC = 60°.
In right triangle ACD, tan 30° = AC/CD, so CD = 28.5√3 m.
In right triangle ACP, tan 60° = AC/CP, so CP = 28.5/√3 = 9.5√3 m.
Hence, the distance he walked towards the building is DP = CD − CP = 28.5√3 − 9.5√3 = 19√3 m.
Question 2:
From a point on the ground, the angles of elevation of the bottom and the top of a transmission tower fixed at the top of a 20 m high building are 45° and 60° respectively. Find the height of the tower.
Find the Height of the Tower
Answer:
Let the height of the tower be h m. Let P be the point on the ground, B the foot of the building, C the top of the building (so CB = 20 m) and A the top of the tower (so AB = 20 + h m). Then, in right triangle CBP,
tan 45° = CB/BP, so BP = CB = 20 m … (i)
In right triangle ABP, tan 60° = AB/BP, so BP = (20 + h)/√3.
Putting this value in eq. (i), we get (20 + h)/√3 = 20, so 20 + h = 20√3 and h = 20(√3 − 1) ≈ 14.64. The height of the tower is 20(√3 − 1) m, approximately 14.64 m.
Example #1
public void BagSerializeText()
{
PropertyBag bag = this.CreateBag();
XmlSerializer serializer = new XmlSerializer(typeof(PropertyBag));
string xml = null;
Assert.DoesNotThrow(() => xml = serializer.ToXml(bag));
Assert.True(!string.IsNullOrEmpty(xml));
PropertyBag bag2 = null;
Assert.DoesNotThrow(() => bag2 = (PropertyBag) serializer.FromXml(xml));
Assert.NotNull(bag2);
Assert.True(Compare(bag, bag2));
}
Example #2
public void SetSerializeText()
{
PropertySet set = this.CreateSet();
XmlSerializer serializer = new XmlSerializer(typeof(PropertySet));
string xml = null;
Assert.DoesNotThrow(() => xml = serializer.ToXml(set));
Assert.True(!string.IsNullOrEmpty(xml));
PropertySet set2 = null;
Assert.DoesNotThrow(() => set2 = (PropertySet) serializer.FromXml(xml));
Assert.NotNull(set2);
Assert.True(Compare(set, set2));
}
This post shows you how to use the Vortice low-level library to build, from scratch and starting with a console project, the simplest possible DirectX application that uses Direct2D1. It is an introductory post; I hope it gives you an idea of how the Vortice low-level library can call into DirectX, and how the initialization logic of DirectX — the D2D part in particular — works.
Before talking about Vortice, we have to talk about the SharpDx library. As everyone knows, SharpDx is no longer maintained. That said, SharpDx being unmaintained has very little impact on our development work; unless you need features added in the last few years, using the unmaintained SharpDx is not much of a problem. Vortice exists as a replacement for SharpDx and is a project that continues development on top of SharpDx. With the Vortice low-level library, C# code can talk to DirectX fairly conveniently.
By design, both Vortice and SharpDx are low-level wrappers over DirectX. A low-level wrapper means you have to understand a lot of details while developing, but it also makes low-level optimization possible.
Vortice is far from the only library that can replace SharpDx; for details, see SharpDx 的代替项目.
Before reading this post, I expect you to already know a fair amount of related material, such as the concept of Win32, what DirectX is, the basics of the .NET framework and basic C# syntax. Although this is an introductory post, it will not cover overly basic knowledge.
To start drawing with D2D, you need a "canvas" object to carry the drawn content; in D2D that is an object of type ID2D1RenderTarget.
To actually see what is drawn on screen, it is best to have a window to present the drawn content. Off-screen rendering would also work, but then you would run into the pitfalls that come with off-screen rendering plus the pitfalls of encoding the result to an image just to look at it, which is not friendly for an introductory post. This post creates the window step by step through Win32, trying to show you as many details as possible.
The steps used in this post are:
• Create a Win32 window
• Create the D3D11 device and the swap chain, and hook D3D up to the window
• Use DXGI together with D3D11 to create a D2D ID2D1RenderTarget and change the color it draws
Creating the project
This post uses Visual Studio 2022 as the IDE; in principle, anyone still using a version older than Visual Studio 2022 should upgrade their IDE as well.
If you use an IDE other than Visual Studio, I suggest treating this post as reading material only and not trying its code.
Create a new dotnet 6 console project.
Starting from this console project, we will build up the D2D application.
Only part of the code is shown in this post; if it does not build, grab the complete project from the end of the post. All of the source code for this post can be found at the end.
Installing the libraries
Following the usual dotnet convention, before using a library we install it via NuGet.
This post needs the following NuGet packages installed:
• Vortice.Direct2D1
• Vortice.Direct3D11
• Vortice.DirectX
• Vortice.D3DCompiler
• Vortice.Win32
• Microsoft.Windows.CsWin32
The newly created project uses the SDK-style csproj project file format by default. Double-click the project to edit the csproj file and add the following code to quickly install the NuGet packages
<ItemGroup>
<PackageReference Include="Vortice.Direct2D1" Version="2.1.32" />
<PackageReference Include="Vortice.Direct3D11" Version="2.1.32" />
<PackageReference Include="Vortice.DirectX" Version="2.1.32" />
<PackageReference Include="Vortice.D3DCompiler" Version="2.1.32" />
<PackageReference Include="Vortice.Win32" Version="1.6.2" />
<PackageReference Include="Microsoft.Windows.CsWin32" PrivateAssets="all" Version="0.2.63-beta" />
</ItemGroup>
The csproj project file looks like this after editing
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Vortice.Direct2D1" Version="2.1.32" />
<PackageReference Include="Vortice.Direct3D11" Version="2.1.32" />
<PackageReference Include="Vortice.DirectX" Version="2.1.32" />
<PackageReference Include="Vortice.D3DCompiler" Version="2.1.32" />
<PackageReference Include="Vortice.Win32" Version="1.6.2" />
<PackageReference Include="Microsoft.Windows.CsWin32" PrivateAssets="all" Version="0.2.63-beta" />
</ItemGroup>
</Project>
Adding the namespaces
To make the code easier to read statically, I deliberately use the following alias imports, so that while reading you can tell which namespace each type comes from
using D3D = Vortice.Direct3D;
using D3D11 = Vortice.Direct3D11;
using DXGI = Vortice.DXGI;
using D2D = Vortice.Direct2D1;
The remaining using directives are as follows
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Runtime.Versioning;
using Windows.Win32.Foundation;
using Windows.Win32.UI.WindowsAndMessaging;
using static Windows.Win32.PInvoke;
using static Windows.Win32.UI.WindowsAndMessaging.PEEK_MESSAGE_REMOVE_TYPE;
using static Windows.Win32.UI.WindowsAndMessaging.WNDCLASS_STYLES;
using static Windows.Win32.UI.WindowsAndMessaging.WINDOW_STYLE;
using static Windows.Win32.UI.WindowsAndMessaging.WINDOW_EX_STYLE;
using static Windows.Win32.UI.WindowsAndMessaging.SYSTEM_METRICS_INDEX;
using static Windows.Win32.UI.WindowsAndMessaging.SHOW_WINDOW_CMD;
using Vortice.DCommon;
using Vortice.Mathematics;
using AlphaMode = Vortice.DXGI.AlphaMode;
Because the project in this post sets the ImplicitUsings property in the csproj file, which brings in the default references to namespaces such as System, there is no need to write usings for the default namespaces again.
Adding the Win32 definitions
To create the Win32 window and to initialize and create the DX objects, a few Win32 functions are needed, and before a Win32 function can be used it has to be declared. This post uses the Microsoft.Windows.CsWin32 package to help write those Win32 declarations.
With the Microsoft.Windows.CsWin32 package installed, create a NativeMethods.txt file and put one function or Win32 type name per line in it; the declarations are then created automatically via source generators. For details see dotnet 使用 CsWin32 库简化 Win32 函数调用逻辑.
Create a new NativeMethods.txt file and write the Win32 functions we need into it, with the following content
GetModuleHandle
PeekMessage
TranslateMessage
DispatchMessage
GetMessage
RegisterClassExW
DefWindowProc
LoadCursor
PostQuitMessage
CreateWindowExW
DestroyWindow
ShowWindow
GetSystemMetrics
AdjustWindowRectEx
GetClientRect
GetWindowRect
IDC_ARROW
WM_KEYDOWN
WM_KEYUP
WM_SYSKEYDOWN
WM_SYSKEYUP
WM_DESTROY
WM_QUIT
WM_PAINT
WM_CLOSE
WM_ACTIVATEAPP
VIRTUAL_KEY
完成了初始化准备之后,接下来就可以开始编写代码
创建窗口
本文使用控制台项目,在创建 Win32 窗口,需要使用到很多 Win32 窗口创建的细节代码,但本文更侧重如何使用 DX 的知识,因此关于 Win32 创建窗口的逻辑,大部分都会略过
在开始创建 Win32 窗口之前,先准备一些参数
设置窗口的尺寸
SizeI clientSize = new SizeI(1000, 1000);
再给窗口定义一个标题
// 窗口标题
var title = "Demo";
var windowClassName = "lindexi doubi";
配置窗口的 Win32 样式,这个样式的内容没啥固定的,可以根据自己的需求来,也可以乱来,不离谱就好
// 窗口样式,窗口样式含义请执行参阅官方文档,样式只要不离谱,自己随便写,影响不大
WINDOW_STYLE style = WS_CAPTION |
WS_SYSMENU |
WS_MINIMIZEBOX |
WS_CLIPSIBLINGS |
WS_BORDER |
WS_DLGFRAME |
WS_THICKFRAME |
WS_GROUP |
WS_TABSTOP |
WS_SIZEBOX;
根据上面设置的窗口尺寸,尝试根据样式算出实际可用的尺寸
var rect = new RECT
{
right = clientSize.Width,
bottom = clientSize.Height
};
// Adjust according to window styles
AdjustWindowRectEx(&rect, style, false, WS_EX_APPWINDOW);
Next, work out the coordinates and size at which to show the window. This is not core logic and you could just make up a coordinate; this post tries to show the window in the middle of the screen
int x = 0;
int y = 0;
int windowWidth = rect.right - rect.left;
int windowHeight = rect.bottom - rect.top;
// Whatever — let's just put it in the middle of the screen. Multiple monitors? Ignored
int screenWidth = GetSystemMetrics(SM_CXSCREEN);
int screenHeight = GetSystemMetrics(SM_CYSCREEN);
x = (screenWidth - windowWidth) / 2;
y = (screenHeight - windowHeight) / 2;
Preparation done, start creating the window
var hInstance = GetModuleHandle((string)null);
fixed (char* lpszClassName = windowClassName)
{
PCWSTR szCursorName = new((char*)IDC_ARROW);
var wndClassEx = new WNDCLASSEXW
{
cbSize = (uint)Unsafe.SizeOf<WNDCLASSEXW>(),
style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC,
// Core logic: hook up the window procedure that drives the message loop
lpfnWndProc = new WNDPROC(WndProc),
hInstance = (HINSTANCE)hInstance.DangerousGetHandle(),
hCursor = LoadCursor((HINSTANCE)IntPtr.Zero, szCursorName),
hbrBackground = (Windows.Win32.Graphics.Gdi.HBRUSH)IntPtr.Zero,
hIcon = (HICON)IntPtr.Zero,
lpszClassName = lpszClassName
};
ushort atom = RegisterClassEx(wndClassEx);
if (atom == 0)
{
throw new InvalidOperationException(
$"Failed to register window class. Error: {Marshal.GetLastWin32Error()}"
);
}
}
// Create the window
var hWnd = CreateWindowEx
(
WS_EX_APPWINDOW,
windowClassName,
title,
style,
x,
y,
windowWidth,
windowHeight,
hWndParent: default,
hMenu: default,
hInstance: default,
lpParam: null
);
The hWnd obtained here is what DX will be attached to next.
Hopefully everyone knows what hWnd means.
Now that the window is created, let's show it
// Creation done, so show it
ShowWindow(hWnd, SW_NORMAL);
Get the actual window size; it will be used later when deciding how to create the swap chain. What is a swap chain? Go look it up yourself
RECT windowRect;
GetClientRect(hWnd, &windowRect);
clientSize = new SizeI(windowRect.right - windowRect.left, windowRect.bottom - windowRect.top);
The code above completes the creation of the Win32 window
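The WndProc referenced by lpfnWndProc above is not shown in this excerpt; the full version is in the sample project linked at the end of the post. As a rough sketch only — the one assumption being that we just want the process to be able to quit when the window is closed — a minimal window procedure could look like this:
private static LRESULT WndProc(HWND hWnd, uint msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
        case WM_CLOSE:
            // Destroying the window will raise WM_DESTROY below.
            DestroyWindow(hWnd);
            return new LRESULT(0);
        case WM_DESTROY:
            // Ask the message loop to stop so the process can exit.
            PostQuitMessage(0);
            return new LRESULT(0);
    }

    // Everything else goes to the default window procedure.
    return DefWindowProc(hWnd, msg, wParam, lParam);
}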
Getting the display adapter interface
This step is optional. Enumerating the abstract display adapter interfaces provided by DX lets you pick one for creating the D3D device later. It is shown here to demonstrate how to obtain the abstract display adapter interface; a D3D device can also be created without specifying one.
The display adapter interface IDXGIAdapter is an abstraction over hardware or software; it can be a graphics card or a software renderer. In most cases the abstract display adapter interface obtained here corresponds to a concrete graphics card, but that does not mean it is necessarily a real graphics card.
The official documentation has a diagram of the object relationships for one computer with two graphics cards.
First try to enumerate graphics cards with the EnumAdapterByGpuPreference method provided by IDXGIFactory6. This method sorts the results by the given parameter, which is especially handy during development for grabbing the first usable graphics card.
To use EnumAdapterByGpuPreference you need an IDXGIFactory6 object, and an IDXGIFactory6 object can be obtained indirectly from an IDXGIFactory2 object created through the factory.
Functionality provided by IDXGIFactory2 will also be used shortly
// Start the factory logic for creating D3D
var dxgiFactory2 = DXGI.DXGI.CreateDXGIFactory1<DXGI.IDXGIFactory2>();
To make the adapter-fetching code easier to read, it is placed in a GetHardwareAdapter method
private static IEnumerable<DXGI.IDXGIAdapter1> GetHardwareAdapter(DXGI.IDXGIFactory2 factory)
{
}
First try to get an IDXGIFactory6 object from the IDXGIFactory2 object.
In the design of DX, interfaces are iterated version by version; to keep compatibility, new interfaces are added rather than changing existing interface definitions. So the object you get may, depending on the DX version on the machine, support up to the IDXGIFactory6 version. The usual approach is to call a QueryInterface* method, for example QueryInterfaceOrNull, to try to get the newer interface object. The wrapped QueryInterfaceOrNull returns null when it is not supported, so you can tell from the return value whether it is supported
DXGI.IDXGIFactory6? factory6 = factory.QueryInterfaceOrNull<DXGI.IDXGIFactory6>();
if (factory6 != null)
{
// The DX on this system supports the IDXGIFactory6 type
}
else
{
// Not supported? Fine — fall back to the old way of getting the display adapter interface
}
The EnumAdapterByGpuPreference method added by IDXGIFactory6 takes a parameter and returns display adapter interfaces in the order determined by it.
Passing the high-performance option starts the enumeration in what DX considers high-performance order
// First tell the system we want the high-performance graphics cards
for (int adapterIndex = 0;
factory6.EnumAdapterByGpuPreference(adapterIndex, DXGI.GpuPreference.HighPerformance,
out DXGI.IDXGIAdapter1? adapter).Success;
adapterIndex++)
{
if (adapter == null)
{
continue;
}
}
Then throw away the ones that use software rendering. Discarding software renderers here is only to demonstrate how to tell whether a display adapter interface you obtained uses software rendering
// First tell the system we want the high-performance graphics cards
for (int adapterIndex = 0;
factory6.EnumAdapterByGpuPreference(adapterIndex, DXGI.GpuPreference.HighPerformance,
out DXGI.IDXGIAdapter1? adapter).Success;
adapterIndex++)
{
if (adapter == null)
{
continue;
}
DXGI.AdapterDescription1 desc = adapter.Description1;
if ((desc.Flags & DXGI.AdapterFlags.Software) != DXGI.AdapterFlags.None)
{
// Don't select the Basic Render Driver adapter.
adapter.Dispose();
continue;
}
}
Here you can print the description of the display adapter interface you obtained and compare the output on different machines
Console.WriteLine($"Enumerated graphics adapter {adapter.Description1.Description}");
The complete enumeration code is as follows
DXGI.IDXGIFactory6? factory6 = factory.QueryInterfaceOrNull<DXGI.IDXGIFactory6>();
if (factory6 != null)
{
// First tell the system we want the high-performance graphics cards
for (int adapterIndex = 0;
factory6.EnumAdapterByGpuPreference(adapterIndex, DXGI.GpuPreference.HighPerformance,
out DXGI.IDXGIAdapter1? adapter).Success;
adapterIndex++)
{
if (adapter == null)
{
continue;
}
DXGI.AdapterDescription1 desc = adapter.Description1;
if ((desc.Flags & DXGI.AdapterFlags.Software) != DXGI.AdapterFlags.None)
{
// Don't select the Basic Render Driver adapter.
adapter.Dispose();
continue;
}
//factory6.Dispose();
Console.WriteLine($"Enumerated graphics adapter {adapter.Description1.Description}");
yield return adapter;
}
factory6.Dispose();
}
If none can be obtained that way, enumerate with the old method
// If that enumeration finds nothing, whatever the system returns will do
for (int adapterIndex = 0;
factory.EnumAdapters1(adapterIndex, out DXGI.IDXGIAdapter1? adapter).Success;
adapterIndex++)
{
DXGI.AdapterDescription1 desc = adapter.Description1;
if ((desc.Flags & DXGI.AdapterFlags.Software) != DXGI.AdapterFlags.None)
{
// Don't select the Basic Render Driver adapter.
adapter.Dispose();
continue;
}
Console.WriteLine($"Enumerated graphics adapter {adapter.Description1.Description}");
yield return adapter;
}
To make debugging easier, ToList is added here so that all of the code actually runs
var hardwareAdapter = GetHardwareAdapter(dxgiFactory2)
// The ToList here is only to list every IDXGIAdapter1; in real code you would mostly just take the first one
.ToList().FirstOrDefault();
if (hardwareAdapter == null)
{
throw new InvalidOperationException("Cannot detect D3D11 adapter");
}
The code above obtains the display adapter interface used for the initialization that follows.
Initializing the D3D swap chain
Before starting, define the feature level code following the approach in C# 从零开始写 SharpDx 应用 聊聊功能等级
// Feature levels
// [C# 从零开始写 SharpDx 应用 聊聊功能等级](https://blog.lindexi.com/post/C-%E4%BB%8E%E9%9B%B6%E5%BC%80%E5%A7%8B%E5%86%99-SharpDx-%E5%BA%94%E7%94%A8-%E8%81%8A%E8%81%8A%E5%8A%9F%E8%83%BD%E7%AD%89%E7%BA%A7.html)
D3D.FeatureLevel[] featureLevels = new[]
{
D3D.FeatureLevel.Level_11_1,
D3D.FeatureLevel.Level_11_0,
D3D.FeatureLevel.Level_10_1,
D3D.FeatureLevel.Level_10_0,
D3D.FeatureLevel.Level_9_3,
D3D.FeatureLevel.Level_9_2,
D3D.FeatureLevel.Level_9_1,
};
Create the device using the display adapter interface obtained above
DXGI.IDXGIAdapter1 adapter = hardwareAdapter;
D3D11.DeviceCreationFlags creationFlags = D3D11.DeviceCreationFlags.BgraSupport;
var result = D3D11.D3D11.D3D11CreateDevice
(
adapter,
D3D.DriverType.Unknown,
creationFlags,
featureLevels,
out D3D11.ID3D11Device d3D11Device, out D3D.FeatureLevel featureLevel,
out D3D11.ID3D11DeviceContext d3D11DeviceContext
);
The device might not be creatable with this display adapter interface; check the return value to see whether it succeeded. If creation failed, do not specify concrete parameters and create it the WARP way instead
if (result.Failure)
{
// If it failed, do not specify a graphics card and go the WARP route
// http://go.microsoft.com/fwlink/?LinkId=286690
result = D3D11.D3D11.D3D11CreateDevice(
IntPtr.Zero,
D3D.DriverType.Warp,
creationFlags,
featureLevels,
out d3D11Device, out featureLevel, out d3D11DeviceContext);
// If this fails too, we cannot continue
result.CheckError();
}
The CheckError method in the code above throws an exception on failure.
On success you get objects of type ID3D11Device and ID3D11DeviceContext, plus the actual feature level. The ID3D11Device here is the D3D device; it is what the swap chain binds to, so drawing can go into the swap chain's buffers and be flipped onto the screen by the swap chain. The ID3D11DeviceContext holds the D3D device's environment and configuration and can be used to set render state and so on.
Since the ID3D11Device1 interface is what we want to use later, obtain it from d3D11Device as usual
// In most cases the ID3D11Device1 and ID3D11DeviceContext1 types are the ones actually used
// Convert from ID3D11Device to ID3D11Device1
var d3D11Device1 = d3D11Device.QueryInterface<D3D11.ID3D11Device1>();
In principle, any Windows system that can run dotnet 6 today supports ID3D11Device1.
Likewise, get the ID3D11DeviceContext1 interface
var d3D11DeviceContext1 = d3D11DeviceContext.QueryInterface<D3D11.ID3D11DeviceContext1>();
With the two newer interfaces obtained, the reference counts of d3D11Device and d3D11DeviceContext can be decremented. Calling Dispose does not release the D3D resources just created; it only decrements the reference count
d3D11Device.Dispose();
d3D11DeviceContext.Dispose();
With the device created, the next step is to create the swap chain and associate it with the window. Creating a swap chain takes a lot of parameters; in the design of DX they are packed into a SwapChainDescription type, and just like the DX interfaces there are several versions of SwapChainDescription.
The code that builds the SwapChainDescription1 parameters looks like this
// Color format; if you plan to interop with WPF later, this format is recommended
DXGI.Format colorFormat = DXGI.Format.B8G8R8A8_UNorm;
// Number of buffers, including the front buffer. Most applications need at least two buffers — anyone who has played games knows this
const int FrameCount = 2;
DXGI.SwapChainDescription1 swapChainDescription = new()
{
Width = clientSize.Width,
Height = clientSize.Height,
Format = colorFormat,
BufferCount = FrameCount,
BufferUsage = DXGI.Usage.RenderTargetOutput,
SampleDescription = DXGI.SampleDescription.Default,
Scaling = DXGI.Scaling.Stretch,
SwapEffect = DXGI.SwapEffect.FlipDiscard,
AlphaMode = AlphaMode.Ignore
};
The various parameters above can be combined to achieve many different behaviours, but one trap in DX is that the parameters are not orthogonal: get some of them wrong and creation will fail later on.
Next, set whether to enter full-screen mode. For many games and applications nowadays, maximizing the window gives a full-screen experience, so here full screen is not entered
// Whether to go full screen
DXGI.SwapChainFullscreenDescription fullscreenDescription = new DXGI.SwapChainFullscreenDescription
{
Windowed = true
};
With the parameters set, the swap chain can be created. It can be created from an HWnd window handle, via CreateSwapChainForCoreWindow to hook up with UWP, or via DirectComposition's CreateSwapChainForComposition. This post uses CreateSwapChainForHwnd to associate it with the window
DXGI.IDXGISwapChain1 swapChain =
dxgiFactory2.CreateSwapChainForHwnd(d3D11Device1, hWnd, swapChainDescription, fullscreenDescription);
Optionally, disable entering full screen via alt+enter
// Do not enter full screen when alt+enter is pressed
dxgiFactory2.MakeWindowAssociation(hWnd, DXGI.WindowAssociationFlags.IgnoreAltEnter);
This completes the creation of the all-important swap chain. Once it is done, D3D content can be drawn to the window. The next step is to add D2D drawing on top.
Creating the D2D render target
As shown in the figure below, D3D hosts the D2D content. With the D3D initialization finished, DXGI can now be used to create the D2D ID2D1RenderTarget canvas.
In the architecture shown above, before D2D can be used we have to decide where D2D will draw to. The output of D2D drawing can be an IDXGISurface object: the CreateDxgiSurfaceRenderTarget method creates an ID2D1RenderTarget on an IDXGISurface so the drawing has somewhere to go. The IDXGISurface can be obtained from an ID3D11Texture2D, which in turn can be obtained from the swap chain through its GetBuffer method.
This article follows those steps to create the ID2D1RenderTarget canvas. There are other approaches as well; see the conversion framework in the official documentation for details.
As usual, using D2D starts with creating a factory
// Interfacing with D2D requires creating a factory
D2D.ID2D1Factory1 d2DFactory = D2D.D2D1.D2D1CreateFactory<D2D.ID2D1Factory1>();
First get an ID3D11Texture2D from the swap chain: IDXGISwapChain1.GetBuffer returns one of the swap chain's back buffers
D3D11.ID3D11Texture2D backBufferTexture = swapChain.GetBuffer<D3D11.ID3D11Texture2D>(0);
Then use QueryInterface to convert the ID3D11Texture2D into an IDXGISurface object
DXGI.IDXGISurface dxgiSurface = backBufferTexture.QueryInterface<DXGI.IDXGISurface>();
With the IDXGISurface obtained, the D2D factory can create the ID2D1RenderTarget canvas
var renderTargetProperties = new D2D.RenderTargetProperties(PixelFormat.Premultiplied);
D2D.ID2D1RenderTarget d2D1RenderTarget =
d2DFactory.CreateDxgiSurfaceRenderTarget(dxgiSurface, renderTargetProperties);
The ID2D1RenderTarget obtained here is the canvas that makes 2D drawing convenient.
Changing the color
The simplest way to draw is to change the color with the Clear method. This article only covers initialization; it does not go into how to draw content with D2D.
Before calling Clear, call BeginDraw to tell DX that drawing is starting; when drawing is finished, call EndDraw to tell DX that drawing is complete. Keep in mind that when the various drawing methods are called on ID2D1RenderTarget, rendering is not finished when the method call returns: these methods only record drawing commands, they do not render immediately
var renderTarget = d2D1RenderTarget;
// Start drawing
renderTarget.BeginDraw();
// Create an arbitrary color
var color = new Color4((byte)Random.Shared.Next(255), (byte)Random.Shared.Next(255),
(byte)Random.Shared.Next(255));
renderTarget.Clear(color);
renderTarget.EndDraw();
The code above clears with a random color; when Clear is called, the entire ID2D1RenderTarget is cleared with the given color, which is how the color gets changed.
After that, call the swap chain's Present and flush
swapChain.Present(1, DXGI.PresentFlags.None);
// Wait for the flush
d3D11DeviceContext1.Flush();
Calling the swap chain's Present function displays the contents of the render buffer on screen. swapChain.Present(1, PresentFlags.None); waits for vertical sync and returns once the refresh has completed; the first parameter is the sync interval and the second is the presentation flags.
Try running the code: a window is created and filled with an odd-looking color.
That is the entry-level way to create a window from a console app from scratch with Vortice and draw on it with D2D.
Once the initialization logic is in place, D2D can be used to draw complex interfaces. On ID2D1RenderTarget it is easy to call the various drawing methods, such as drawing rectangles and circles. For details see C# 从零开始写 SharpDx 应用 绘制基础图形.
Some of the code is not shown in this article; the project used here can be obtained as described below. If you follow along and your version does not run, it is recommended to run the article's project and compare the code.
Code
The code for this article is hosted on github and gitee; you are welcome to visit.
You can get the source code as follows: first create an empty folder, then use the command line cd command to enter it, and run the following commands to fetch the code.
git init
git remote add origin https://gitee.com/lindexi/lindexi_gd.git
git pull origin 162977106065bd3cf7bfbed0a87828c992b8df3d
The above uses the gitee remote; if gitee is not accessible, switch to the github remote by running the following commands.
git remote remove origin
git remote add origin https://github.com/lindexi/lindexi_gd.git
git pull origin 162977106065bd3cf7bfbed0a87828c992b8df3d
After fetching the code, go into the WakolerwhaKanicabirem folder.
More posts
For the rendering part and SharpDx usage, including entry-level tutorials, see:
For the rendering part of the WPF framework, see: WPF 底层渲染_lindexi_gd的博客-CSDN博客
For more of my posts, see the blog navigation page.
This article is updated frequently; please read the original at https://blog.lindexi.com/post/DirectX-%E4%BD%BF%E7%94%A8-Vortice-%E4%BB%8E%E9%9B%B6%E5%BC%80%E5%A7%8B%E6%8E%A7%E5%88%B6%E5%8F%B0%E5%88%9B%E5%BB%BA-Direct2D1-%E7%AA%97%E5%8F%A3%E4%BF%AE%E6%94%B9%E9%A2%9C%E8%89%B2.html to avoid being misled by stale or incorrect information, and for a better reading experience.
If you want to keep up with my latest posts, subscribe via RSS; RSS Stalker is recommended, or bookmark my blog navigation page.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You are welcome to repost, use and republish it, but you must keep the attribution to 林德熙 (including the link: https://blog.lindexi.com ), it must not be used for commercial purposes, and works based on this article must be published under the same license. If you have any questions, please contact me.
Microsoft Most Valuable Professional (MVP)
No profit, no courses for sale, just a pure technical blog
|
__label__pos
| 0.570474 |
m3core/src/text/Text8Short.m3
Copyright 1996-2000, Critical Mass, Inc. All rights reserved.
See file COPYRIGHT-CMASS for details.
UNSAFE MODULE Text8Short;
IMPORT TextClass;
REVEAL
T = Public BRANDED "Text8Short.T" OBJECT OVERRIDES
get_info := GetInfo;
get_char := GetChar;
get_chars := GetChars;
END;
PROCEDURE New (READONLY a: ARRAY OF CHAR): T =
VAR t := NEW (T);
BEGIN
t.len := NUMBER (a);
IF (t.len > 0) THEN SUBARRAY (t.contents, 0, t.len) := a; END;
t.contents[t.len] := '\000';
RETURN t;
END New;
PROCEDURE GetInfo (t: T; VAR info: TextClass.Info) =
BEGIN
info.start := ADR (t.contents[0]);
info.length := t.len;
info.wide := FALSE;
END GetInfo;
PROCEDURE GetChar (t: T; i: CARDINAL): CHAR =
BEGIN
IF i >= t.len THEN (* force a subscript fault *) i := LAST (INTEGER); END;
RETURN t.contents[i];
END GetChar;
PROCEDURE GetChars (t: T; VAR a: ARRAY OF CHAR; start: CARDINAL) =
VAR n := MIN (NUMBER (a), t.len - start);
BEGIN
IF (n > 0) THEN
SUBARRAY (a, 0, n) := SUBARRAY (t.contents, start, n);
END;
END GetChars;
BEGIN
END Text8Short.
|
__label__pos
| 0.776415 |
Magnifying glass icon with the text Search underneath. Heart icon with the text Likes, bell with the text Notifications.
While at GOV.UK I got the opportunity to watch a renowned Accessibility Specialist test with Dragon.
Dragon is voice control software that allows a user to interact with a computer using just their voice. Not everyone uses a mouse or even a keyboard to interact with websites. Some people use voice control.
A key takeaway was to always add text to icons and iconography in UI designs; this benefits voice control users. When there is no text, a user can easily misread or misunderstand what an icon means and struggle to select it.
Take the design below: a user might say "select love" or "select heart" and they would receive no feedback. Nothing would be selected.
A menu bar design with 5 icons: a house, magnifying glass, heart, bell and person. The house is selected and the colour scheme is purple.
However, with the updated design below, the user can easily see that the heart icon is called "Likes". Should they then say "select likes", they can navigate through the application without having to guess their way through.
A menu bar design with 5 icons: a house, magnifying glass, heart, bell and person. The house is selected and the colour scheme is purple. However in this design there is text under each icon so the house icon is home, the magnifying glass icon is search, the heart is likes, the bell is notifications and the person is profile.
It might seem like a small change, but it has a tangible impact on the experience for some users.
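As a rough sketch (the original post does not name a framework or show any code; the component and class names below are hypothetical), pairing each icon with a visible text label, and keeping the icon itself out of the accessibility tree, gives voice control software an obvious name to match against:

import { Component } from '@angular/core';

// Hypothetical nav item: the visible "Likes" label is what a Dragon user can say
// ("click Likes"), so the control's accessible name should match it.
@Component({
  selector: 'app-likes-nav-item',
  template: `
    <button type="button" class="nav-item">
      <svg class="icon" aria-hidden="true" focusable="false">
        <!-- heart icon path omitted -->
      </svg>
      <span class="label">Likes</span>
    </button>
  `,
})
export class LikesNavItemComponent {}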
|
__label__pos
| 0.722221 |
Angular 10 Firebase CRUD Realtime DB | AngularFireDatabase
In this tutorial, I will show you how to build an Angular 10 CRUD App with the Firebase Realtime Database, using the AngularFireDatabase service.
Related Posts:
Angular 10 Firestore CRUD with AngularFireStore
Angular 10 Firebase Storage: File Upload/Display/Delete example
Angular 10 CRUD Application example with Web API
Angular 10 Form Validation example (Reactive Forms)
Newer versions:
Angular 11 Firebase CRUD example
Angular 12 Firebase CRUD example
Angular 13 Firebase CRUD example
Angular 14 Firebase CRUD example
Angular 15 Firebase CRUD example
Angular 16 Firebase CRUD example
Angular 17 Firebase CRUD example
Angular 10 Firebase CRUD Overview
We’re gonna build an Angular 10 Firebase App using @angular/fire library in which:
• Each Tutorial has a key, title, description, and published status.
• We can create, retrieve, update, delete Tutorials.
Here are the screenshots:
– Create a new Tutorial:
angular-10-firebase-crud-realtime-database-create
Firebase Realtime Database right after the Operation:
angular-10-firebase-crud-realtime-database-create-db
– Retrieve all Tutorials with details when clicking on a Tutorial:
angular-10-firebase-crud-realtime-database-retrieve-all
– Change status to Published/Pending using Publish/UnPublish button:
angular-10-firebase-crud-realtime-database-update-status
– Update the Tutorial details with Update button:
angular-10-firebase-crud-realtime-database-update
If you want to implement Reactive Form Validation, please visit:
Angular 10 Form Validation example (Reactive Forms)
– Delete the Tutorial using Delete button:
angular-10-firebase-crud-realtime-database-delete
– Delete all Tutorials with Remove All button:
angular-10-firebase-crud-realtime-database-delete-all
AngularFireDatabase service
@angular/fire provides AngularFireDatabase service that allows us to work with the Realtime Database. It’s an efficient, low-latency solution for apps that require synced states across clients in realtime.
import { AngularFireDatabase} from '@angular/fire/database';
export class TutorialService {
constructor(private db: AngularFireDatabase) { }
}
AngularFireDatabase for Object
The AngularFireObject is a service for manipulating and streaming object data which is created via AngularFireDatabase service.
– Create an object binding/ Retrieve:
tutorial: AngularFireObject<any>;
// db: AngularFireDatabase
this.tutorial = db.object('tutorial');
// or
Observable<any> tutorial = db.object('tutorial').valueChanges();
– Create/Update an object:
const tutRef = db.object('tutorial');
// set() for destructive updates
tutRef.set({ title: 'zkoder Tutorial'});
– Update an object:
const tutRef= db.object('tutorial');
tutRef.update({ url: 'bezkoder.com/zkoder-tutorial' });
– Delete an object:
const tutRef = db.object('tutorial');
tutRef.remove();
AngularFireDatabase for List
Through the AngularFireDatabase service, we can create AngularFireList service that helps to synchronize data as lists.
– Create a list binding/ Retrieve:
+ Get an Observable of data as a synchronized array of JSON objects without snapshot metadata.
tutorials: Observable<any[]>;
// db: AngularFireDatabase
this.tutorials = db.list('tutorials').valueChanges();
+ Get an Observable of data as a synchronized array of AngularFireAction<DatabaseSnapshot>[] with metadata (the underyling DatabaseReference and snapshot key):
tutorials: Observable<any[]>;
this.tutorials = db.list('tutorials').snapshotChanges();
– Create a List and push a new object:
const tutorialsRef = db.list('tutorials');
tutorialsRef.push({ title: 'zkoder Tutorial', url: 'bezkoder.com/zkoder-tutorial' });
– Update a List:
+ destructive update using set(): delete everything currently in place, then save the new value
const tutorialsRef = db.list('tutorials');
tutorialsRef.set('key', { title: 'zkoder Tut#1', url: 'bezkoder.com/zkoder-tut-1' });
+ non-destructive update using update(): only updates the specified values
const tutorialsRef = db.list('tutorials');
tutorialsRef.update('key', { title: 'zkoder new Tut#1' });
– Delete an object in List:
const tutorialsRef = db.list('tutorials');
tutorialsRef.remove('key');
– Delete entire List:
const tutorialsRef = db.list('tutorials');
tutorialsRef.remove();
Technology
• Angular 10
• firebase 7
• @angular/fire 6
• rxjs 6
Setup the Firebase Project
Go to Firebase Console, login with your Google Account, then click on Add Project.
You will see the window like this:
angular-10-firebase-crud-realtime-database-create-project
Enter Project name, set Project Id and click on Continue.
Turn off Enable Google Analytics for this project, then click Create Project.
Now, browser turns into following view:
angular-10-firebase-crud-realtime-database-web-app
If you don’t see it, just choose Project Overview.
Click on Web App, you will see:
angular-10-firebase-crud-realtime-database-register-app
Set the nickname and choose Register App for next step.
angular-10-firebase-crud-realtime-database-add-firebase-sdk
Save the information for later usage.
Choose Database in the left (list of Firebase features) -> Realtime Database -> Create Database.
angular-10-firebase-crud-realtime-database-create-database
In this tutorial, we don’t implement Authentication, so let’s choose test mode:
angular-10-firebase-crud-realtime-database-config-rules
Or if you come from another situation, just open Tab Rules, then change .read and .write values to true.
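For reference, the test-mode / open rules in the Rules tab typically look like the snippet below (open read/write rules are for development only, not for production):

{
  "rules": {
    ".read": true,
    ".write": true
  }
}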
Setup Angular 10 Project
Let’s open cmd and use Angular CLI to create a new Angular Project as following command:
ng new Angular10Crud
? Would you like to add Angular routing? Yes
? Which stylesheet format would you like to use? CSS
We also need to generate some Components and Services:
ng g s services/tutorial
ng g c components/add-tutorial
ng g c components/tutorial-details
ng g c components/tutorials-list
Now you can see that our project directory structure looks like this.
Project Structure
angular-10-firebase-crud-realtime-database-project-structure
Let me explain it briefly.
environment.ts configures information to connect with Firebase Project.
models/tutorial.ts defines data model class.
services/tutorial.service.ts exports TutorialService that uses @angular/fire‘s AngularFireDatabase to interact with Firebase Database.
– There are 3 components that uses TutorialService:
• add-tutorial for creating new item
• tutorials-list contains list of items, parent of tutorial-details
• tutorial-details shows item details
app-routing.module.ts defines routes for each component.
app.component contains router view and navigation bar.
app.module.ts declares Angular components and imports necessary environment & modules.
Integrate Firebase into Angular 10 App
First run the command: npm install firebase @angular/fire.
Open src/environments/environment.ts, add Firebase configuration that we have saved when Popup window was shown:
export const environment = {
production: false,
firebase: {
apiKey: 'xxx',
authDomain: 'angular-10-crud-db.firebaseapp.com',
databaseURL: 'https://angular-10-crud-db.firebaseio.com',
projectId: 'angular-10-crud-db',
storageBucket: 'angular-10-crud-db.appspot.com',
messagingSenderId: 'xxx',
appId: '1:xxx:web:xxx'
}
};
Open app.module.ts, import AngularFireModule, AngularFireDatabaseModule and environment:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { AngularFireModule } from '@angular/fire';
import { AngularFireDatabaseModule } from '@angular/fire/database';
import { environment } from '../environments/environment';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { AddTutorialComponent } from './components/add-tutorial/add-tutorial.component';
import { TutorialDetailsComponent } from './components/tutorial-details/tutorial-details.component';
import { TutorialsListComponent } from './components/tutorials-list/tutorials-list.component';
@NgModule({
declarations: [
AppComponent,
AddTutorialComponent,
TutorialDetailsComponent,
TutorialsListComponent
],
imports: [
BrowserModule,
AppRoutingModule,
FormsModule,
AngularFireModule.initializeApp(environment.firebase),
AngularFireDatabaseModule,
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
Define Model Class
Let’s create Tutorial class with 4 fields: key, title, description, published.
models/tutorial.ts
export default class Tutorial {
key: string;
title: string;
description: string;
published = false;
}
Create Data Service
This service will use AngularFireDatabase and AngularFireList to interact with Firebase Realtime Database. It contains necessary functions for CRUD operations.
services/tutorial.service.ts
import { Injectable } from '@angular/core';
import { AngularFireDatabase, AngularFireList } from '@angular/fire/database';
import Tutorial from '../models/tutorial';
@Injectable({
providedIn: 'root'
})
export class TutorialService {
private dbPath = '/tutorials';
tutorialsRef: AngularFireList<Tutorial> = null;
constructor(private db: AngularFireDatabase) {
this.tutorialsRef = db.list(this.dbPath);
}
getAll(): AngularFireList<Tutorial> {
return this.tutorialsRef;
}
create(tutorial: Tutorial): any {
return this.tutorialsRef.push(tutorial);
}
update(key: string, value: any): Promise<void> {
return this.tutorialsRef.update(key, value);
}
delete(key: string): Promise<void> {
return this.tutorialsRef.remove(key);
}
deleteAll(): Promise<void> {
return this.tutorialsRef.remove();
}
}
Component for creating Object
This component has a Form to submit a new Tutorial with 2 fields: title & description. It calls the TutorialService.create() method.
components/add-tutorial/add-tutorial.component.ts
import { Component, OnInit } from '@angular/core';
import { TutorialService } from 'src/app/services/tutorial.service';
import Tutorial from 'src/app/models/tutorial';
@Component({
selector: 'app-add-tutorial',
templateUrl: './add-tutorial.component.html',
styleUrls: ['./add-tutorial.component.css']
})
export class AddTutorialComponent implements OnInit {
tutorial: Tutorial = new Tutorial();
submitted = false;
constructor(private tutorialService: TutorialService) { }
ngOnInit(): void {
}
saveTutorial(): void {
this.tutorialService.create(this.tutorial).then(() => {
console.log('Created new item successfully!');
this.submitted = true;
});
}
newTutorial(): void {
this.submitted = false;
this.tutorial = new Tutorial();
}
}
components/add-tutorial/add-tutorial.component.html
<div class="submit-form">
<div *ngIf="!submitted">
<div class="form-group">
<label for="title">Title</label>
<input
type="text"
class="form-control"
id="title"
required
[(ngModel)]="tutorial.title"
name="title"
/>
</div>
<div class="form-group">
<label for="description">Description</label>
<input
class="form-control"
id="description"
required
[(ngModel)]="tutorial.description"
name="description"
/>
</div>
<button (click)="saveTutorial()" class="btn btn-success">Submit</button>
</div>
<div *ngIf="submitted">
<h4>You submitted successfully!</h4>
<button class="btn btn-success" (click)="newTutorial()">Add</button>
</div>
</div>
Component for List of Objects
This component calls TutorialService methods:
• getAll()
• deleteAll()
components/tutorials-list/tutorials-list.component.ts
import { Component, OnInit } from '@angular/core';
import { TutorialService } from 'src/app/services/tutorial.service';
import { map } from 'rxjs/operators';
@Component({
selector: 'app-tutorials-list',
templateUrl: './tutorials-list.component.html',
styleUrls: ['./tutorials-list.component.css']
})
export class TutorialsListComponent implements OnInit {
tutorials: any;
currentTutorial = null;
currentIndex = -1;
title = '';
constructor(private tutorialService: TutorialService) { }
ngOnInit(): void {
this.retrieveTutorials();
}
refreshList(): void {
this.currentTutorial = null;
this.currentIndex = -1;
this.retrieveTutorials();
}
retrieveTutorials(): void {
this.tutorialService.getAll().snapshotChanges().pipe(
map(changes =>
changes.map(c =>
({ key: c.payload.key, ...c.payload.val() })
)
)
).subscribe(data => {
this.tutorials = data;
});
}
setActiveTutorial(tutorial, index): void {
this.currentTutorial = tutorial;
this.currentIndex = index;
}
removeAllTutorials(): void {
this.tutorialService.deleteAll()
.then(() => this.refreshList())
.catch(err => console.log(err));
}
}
In the code above, to get the key of each item, we use snapshotChanges() with the RxJS map() operator. This key is unique and important for the update operation.
We also call refreshList() every time a delete operation is done.
components/tutorials-list/tutorials-list.component.html
<div class="list row">
<div class="col-md-6">
<h4>Tutorials List</h4>
<ul class="list-group">
<li
class="list-group-item"
*ngFor="let tutorial of tutorials; let i = index"
[class.active]="i == currentIndex"
(click)="setActiveTutorial(tutorial, i)"
>
{{ tutorial.title }}
</li>
</ul>
<button class="m-3 btn btn-sm btn-danger" (click)="removeAllTutorials()">
Remove All
</button>
</div>
<div class="col-md-6">
<div *ngIf="currentTutorial">
<app-tutorial-details
(refreshList)="refreshList()"
[tutorial]="currentTutorial"
></app-tutorial-details>
</div>
<div *ngIf="!currentTutorial">
<br />
<p>Please click on a Tutorial...</p>
</div>
</div>
</div>
You can see that when we click on any item, the setActiveTutorial() function is invoked to change the current active Tutorial, whose data is passed to the tutorial-details component.
Component for Object details
This component is the child of tutorials-list. It binds the tutorial data and emits the refreshList event to the parent.
For updating and deleting the Tutorial, we're gonna use two TutorialService methods:
• update()
• delete()
components/tutorial-details/tutorial-details.component.ts
import { Component, OnInit, Input, OnChanges, Output, EventEmitter } from '@angular/core';
import { TutorialService } from 'src/app/services/tutorial.service';
import Tutorial from 'src/app/models/tutorial';
@Component({
selector: 'app-tutorial-details',
templateUrl: './tutorial-details.component.html',
styleUrls: ['./tutorial-details.component.css']
})
export class TutorialDetailsComponent implements OnInit, OnChanges {
@Input() tutorial: Tutorial;
@Output() refreshList: EventEmitter<any> = new EventEmitter();
currentTutorial: Tutorial = null;
message = '';
constructor(private tutorialService: TutorialService) { }
ngOnInit(): void {
this.message = '';
}
ngOnChanges(): void {
this.message = '';
this.currentTutorial = { ...this.tutorial };
}
updatePublished(status): void {
this.tutorialService.update(this.currentTutorial.key, { published: status })
.then(() => {
this.currentTutorial.published = status;
this.message = 'The status was updated successfully!';
})
.catch(err => console.log(err));
}
updateTutorial(): void {
const data = {
title: this.currentTutorial.title,
description: this.currentTutorial.description
};
this.tutorialService.update(this.currentTutorial.key, data)
.then(() => this.message = 'The tutorial was updated successfully!')
.catch(err => console.log(err));
}
deleteTutorial(): void {
this.tutorialService.delete(this.currentTutorial.key)
.then(() => {
this.refreshList.emit();
this.message = 'The tutorial was deleted successfully!';
})
.catch(err => console.log(err));
}
}
components/tutorial-details/tutorial-details.component.html
<div *ngIf="currentTutorial" class="edit-form">
<h4>Tutorial</h4>
<form>
<div class="form-group">
<label for="title">Title</label>
<input
type="text"
class="form-control"
id="title"
[(ngModel)]="currentTutorial.title"
name="title"
/>
</div>
<div class="form-group">
<label for="description">Description</label>
<input
type="text"
class="form-control"
id="description"
[(ngModel)]="currentTutorial.description"
name="description"
/>
</div>
<div class="form-group">
<label><strong>Status:</strong></label>
{{ currentTutorial.published ? "Published" : "Pending" }}
</div>
</form>
<button
class="badge badge-primary mr-2"
*ngIf="currentTutorial.published"
(click)="updatePublished(false)"
>
UnPublish
</button>
<button
*ngIf="!currentTutorial.published"
class="badge badge-primary mr-2"
(click)="updatePublished(true)"
>
Publish
</button>
<button class="badge badge-danger mr-2" (click)="deleteTutorial()">
Delete
</button>
<button type="submit" class="badge badge-success" (click)="updateTutorial()">
Update
</button>
<p>{{ message }}</p>
</div>
<div *ngIf="!currentTutorial">
<br />
<p>Cannot access this Tutorial...</p>
</div>
Define Routes for App Routing Module
There are 2 main routes:
• /add for add-tutorial component
• /tutorials for tutorials-list component
app-routing.module.ts
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { TutorialsListComponent } from './components/tutorials-list/tutorials-list.component';
import { AddTutorialComponent } from './components/add-tutorial/add-tutorial.component';
const routes: Routes = [
{ path: '', redirectTo: 'tutorials', pathMatch: 'full' },
{ path: 'tutorials', component: TutorialsListComponent },
{ path: 'add', component: AddTutorialComponent }
];
@NgModule({
imports: [RouterModule.forRoot(routes)],
exports: [RouterModule]
})
export class AppRoutingModule { }
Add Navbar and Router View to Angular Firebase App
Let’s open src/app.component.html, this App component is the root container for our application, it will contain a nav element.
<div>
<nav class="navbar navbar-expand navbar-dark bg-dark">
<a href="#" class="navbar-brand">bezKoder</a>
<div class="navbar-nav mr-auto">
<li class="nav-item">
<a routerLink="tutorials" class="nav-link">Tutorials</a>
</li>
<li class="nav-item">
<a routerLink="add" class="nav-link">Add</a>
</li>
</div>
</nav>
<div class="container mt-3">
<h2>{{ title }}</h2>
<router-outlet></router-outlet>
</div>
</div>
Run & Check
You can run this App with command: ng serve.
Open browser with url: http://localhost:4200/ and check the result.
Conclusion
Today we’ve built an Angular 10 Firebase CRUD Application successfully working with Realtime Database using AngularFireDatabase from @angular/fire library. Now we can display, modify, delete object and list at ease.
If you want to use Firestore instead, please visit:
Angular 10 Firestore CRUD with AngularFireStore
Or File upload:
Firebase Storage + Angular 10: File Upload/Display/Delete example
You can also find how to create Angular HTTP Client for working with Restful API in:
Angular 10 CRUD Application example with Web API
Or implement Reactive Form Validation:
Angular 10 Form Validation example (Reactive Forms)
Happy learning, see you again!
Further Reading
Other versions:
Angular 8 Firebase CRUD example
Angular 11 Firebase CRUD example
Angular 12 Firebase CRUD example
Angular 13 Firebase CRUD example
Angular 14 Firebase CRUD example
Angular 15 Firebase CRUD example
Fullstack CRUD Application:
Source Code
You can find the complete source code for this tutorial on Github.
5 thoughts to “Angular 10 Firebase CRUD Realtime DB | AngularFireDatabase”
1. This is mostly great. Everything works except for the update function. it is looking for a ‘key’ in the document but it never gets populated. I am trying to figure out how to populate the ‘key’ field with the auto-generated document ID but so far no luck. Do you have any idea how to do this?
2. How do I create the model folder on cli? I followed the instructions but the models folder was not in the file structure.
Comments are closed to reduce spam. If you have any question, please send me an email.
|
__label__pos
| 0.931482 |
18 Apr 2022
Variance in Sports Betting: An Essential Concept
What is variance in sports betting? Understanding the concept of variance is essential if you want to approach sports betting properly: it helps you accept a lost bet, accept a bad run, and also protect yourself from tilt.
Definition Variance in Sports Betting
Variance is a statistical concept that measures how far results are dispersed, on average, from their expectation. In other words, assuming that you are betting value bets, the variance in sports betting represents the average gap between your expected winnings and your actual winnings.
In sports betting, when you bet at your average odds, you expect to win or at least to break even:
If you bet at average odds of 2, your expectation for not making a loss is a 50% strike rate (assuming flat stakes).
However, throughout your career you will sometimes hit x% in one month, y% in another, z% the month after, and so on. The curve of your bankroll will therefore oscillate.
These oscillations represent the deviation from your expectation, and they are nothing other than the variance. In short, something other than a straight-line progression, at least in the short term.
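In standard notation (the article does not spell this out), the profit X of a single flat-stake bet of 1 unit at decimal odds o with true win probability p has:

\mathbb{E}[X] = p(o - 1) - (1 - p) = po - 1
\operatorname{Var}(X) = \mathbb{E}\big[(X - \mathbb{E}[X])^2\big] = o^2\,p(1 - p), \qquad \sigma = \sqrt{\operatorname{Var}(X)}

For o = 2 and p = 0.5 this gives an expectation of 0 and a standard deviation of 1 unit per bet, which is exactly the swing the article describes.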
The Notion of Probability
Now let’s go back in time and recall your mathematics classes on probability. If you toss a coin, theoretically you have a 50% chance of getting heads or tails. Now, in a series of 10 tosses, you will most likely get x number of tails and y number of heads. However, the more you repeat the operation over and over again, the more you will tend to reach 50%.
We will use this image to understand the notion of both volume and smoothing of variance in sports betting.
Smoothing out the Variance in Sports Betting
Sports Contingency and Imponderables
None of us can claim never to have experienced a long losing streak, or never to have lost or won a bet in the final minute on a stroke of luck or misfortune,
through an unexpected turn of play such as a red card, an unforeseen injury, and so on.
These form a whole list of imponderables, i.e. unpredictable events that can turn a match upside down, and they sit at the very source of sport. Remember that sports betting is first and foremost subject to the vagaries of sport; keeping this in mind will help with the rest.
What Does Smoothing the Variance Mean?
Smoothing the variance means minimizing its effects over the long term: obtaining, in the long run, a more or less straight-line progression. This progression can be positive or negative, but it leaves no room for chance and no room for imponderables.
That's why we talk about a minimum sample of 500 predictions to know whether or not you are a profitable bettor: the larger the sample of your predictions, the more the luck factor disappears. This is called smoothing the variance.
Do Volume to Reduce the Impact of Variance on Your Results
The more volume you do, i.e. the larger the sample size of your sports bets, the closer your real winnings will be to your winnings expectation.
This is not illogical and has been theorized in mathematics by the law of large numbers.
This concept explains that when the same event based on chance is repeated a large number of times, the average of the results obtained tends to be closer to the expectation of the event.
This is exactly the image of the coin toss I showed you in the introduction. Your expectation is 50% and the real result will tend towards 50/50 with the greatest number of throws possible. Except that if you toss the coin only 10 times you may only get the tails side twice. By throwing it 500 times the gap will close.
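As a rough sketch of this effect (not from the article; the odds, hit rate and stake below are illustrative assumptions), a small simulation shows how the average profit per bet converges toward its expectation as the number of bets grows:

// Illustrative simulation: flat stakes of 1 unit at decimal odds of 2.0,
// with a true win probability of 50%, so the expected profit per bet is 0.
function averageProfitPerBet(numBets: number, winProb = 0.5, odds = 2.0, stake = 1): number {
  let profit = 0;
  for (let i = 0; i < numBets; i++) {
    // A win pays stake * (odds - 1); a loss costs the stake.
    profit += Math.random() < winProb ? stake * (odds - 1) : -stake;
  }
  return profit / numBets;
}

// Small samples swing a lot; large samples hug the expectation.
for (const n of [10, 100, 1000, 100000]) {
  console.log(`${n} bets -> average profit per bet: ${averageProfitPerBet(n).toFixed(3)}`);
}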
This implies that the smaller the sample on which you judge your results, the more you fall under the "law of small numbers", where high variance dominates the outcome.
|
__label__pos
| 0.634105 |
/*****************************************************************************
  Copyright (c) 2014, Intel Corp.
  All rights reserved.

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions are met:

    * Redistributions of source code must retain the above copyright notice,
      this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of Intel Corporation nor the names of its contributors
      may be used to endorse or promote products derived from this software
      without specific prior written permission.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
  LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
  CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
  SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
  INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
  CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
  POSSIBILITY OF SUCH DAMAGE.
*****************************************************************************
* Contents: Native high-level C interface to LAPACK function dopgtr
* Author: Intel Corporation
*****************************************************************************/

#include "lapacke_utils.h"

lapack_int LAPACKE_dopgtr( int matrix_layout, char uplo, lapack_int n,
                           const double* ap, const double* tau, double* q,
                           lapack_int ldq )
{
    lapack_int info = 0;
    double* work = NULL;
    if( matrix_layout != LAPACK_COL_MAJOR && matrix_layout != LAPACK_ROW_MAJOR ) {
        LAPACKE_xerbla( "LAPACKE_dopgtr", -1 );
        return -1;
    }
#ifndef LAPACK_DISABLE_NAN_CHECK
    if( LAPACKE_get_nancheck() ) {
        /* Optionally check input matrices for NaNs */
        if( LAPACKE_dsp_nancheck( n, ap ) ) {
            return -4;
        }
        if( LAPACKE_d_nancheck( n-1, tau, 1 ) ) {
            return -5;
        }
    }
#endif
    /* Allocate memory for working array(s) */
    work = (double*)LAPACKE_malloc( sizeof(double) * MAX(1,n-1) );
    if( work == NULL ) {
        info = LAPACK_WORK_MEMORY_ERROR;
        goto exit_level_0;
    }
    /* Call middle-level interface */
    info = LAPACKE_dopgtr_work( matrix_layout, uplo, n, ap, tau, q, ldq, work );
    /* Release memory and exit */
    LAPACKE_free( work );
exit_level_0:
    if( info == LAPACK_WORK_MEMORY_ERROR ) {
        LAPACKE_xerbla( "LAPACKE_dopgtr", info );
    }
    return info;
}
|
__label__pos
| 0.995598 |
Collinear vectors and equations of lines
Analytic geometry, invented by Descartes, assigns coordinates to every point and every vector of the plane. Proving the properties of a figure then comes down to carrying out computations, which is more direct and often more convincing.
In this way one can show that lines are parallel, that points are collinear, that four points define a parallelogram, restate Thales' theorem in a different form, and so on.
Every line can be associated with an equation, that is, a relation satisfied by the coordinates of each of its points. We know that an affine function is represented by a line; conversely, not every line of the plane represents an affine function, since lines parallel to the y-axis do not. The general form of an equation of a line is therefore not y = mx + p, but ax + by + c = 0 (with (a ; b) ≠ (0 ; 0)).
The coordinate system needed for the computations is not always given; in that case one must first choose an origin and two nonzero, non-collinear vectors so that every point and every vector of the plane has a unique pair of coordinates.
1. When are two vectors said to be collinear?
Definition: a vector \vec{v} is collinear with a nonzero vector \vec{u} when there exists k ∈ ℝ such that \vec{v} = k\vec{u}.
Remark: when neither vector is the zero vector, the two vectors have the same direction.
Relation on the coordinates: \vec{v}(a' ; b') = k\vec{u}(a ; b) is equivalent to a' = ka and b' = kb, hence a'b = kab and ab' = kab.
That is, a'b = ab', in other words ab' − a'b = 0.
In short, two vectors are collinear if and only if their coordinates are proportional.
Collinear points: three points A, B, C are collinear if and only if there is an equality of the form \overrightarrow{AC} = k\overrightarrow{AB} (k ∈ ℝ).
Parallel lines: the lines (AB) and (CD) are parallel if and only if there is a relation of the form \overrightarrow{CD} = k\overrightarrow{AB}.
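A quick worked check of the proportionality test (an added illustration, not part of the original lesson):

\vec{u}(3 ; 2),\ \vec{v}(6 ; 4):\quad ab' - a'b = 3 \times 4 - 6 \times 2 = 0,\ \text{so } \vec{v} = 2\vec{u}\ \text{(collinear)};
\vec{u}(3 ; 2),\ \vec{w}(5 ; 4):\quad ab' - a'b = 3 \times 4 - 5 \times 2 = 2 \neq 0,\ \text{so not collinear.}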
2. Which vector equalities express Thales' theorem in a triangle?
Statement: a line parallel to one side of a triangle cuts the other two sides, or their extensions, in segments of proportional lengths.
If (IJ) // (BC) and \overrightarrow{AI} = k\overrightarrow{AB}, then \overrightarrow{AJ} = k\overrightarrow{AC}.
Converse: if \overrightarrow{AI} = k\overrightarrow{AB} and \overrightarrow{AJ} = k\overrightarrow{AC}, then (IJ) // (BC).
3. How do you determine an equation of a line from a point and a direction vector?
Method: if (D) is a line passing through the point A with direction vector \vec{u}, then for every point M of the line (D) the vectors \overrightarrow{AM} and \vec{u} are collinear.
This yields a Cartesian equation of the form ax + by + c = 0 (with (a ; b) ≠ (0 ; 0)).
Remarks: if b ≠ 0, that is, when the line is not parallel to the y-axis, the equation can be written in reduced form: y = mx + p.
A line has infinitely many Cartesian equations: it suffices to multiply both sides of the equality by a nonzero real number. The reduced equation, however, is unique.
Example: let (D) be the line passing through the point A(1 ; 2) with direction vector \vec{u}(3 ; 4).
If M(x ; y) ∈ (D), then \overrightarrow{AM}(x − 1 ; y − 2) and \vec{u}(3 ; 4) are collinear.
So 4(x − 1) − 3(y − 2) = 0, that is 4x − 4 − 3y + 6 = 0, i.e. 4x − 3y + 2 = 0.
4. How do you determine a direction vector from an equation of a line?
Case of a Cartesian equation ax + by + c = 0 (with (a ; b) ≠ (0 ; 0)):
The system {ax + by + c = 0 ; ax + by = 0} has no solution when c ≠ 0 and infinitely many solutions when c = 0. The lines with equations ax + by + c = 0 (D) and ax + by = 0 (D′) are therefore either parallel and distinct or parallel and coincident.
The points O(0 ; 0) and A(−b ; a) belong to the line (D′), so the vector \overrightarrow{OA}(−b ; a) is a direction vector of (D′) and of (D).
Case of a reduced equation y = mx + p:
The system {y = mx + p ; y = mx} has no solution when p ≠ 0 and infinitely many solutions when p = 0.
The lines with equations y = mx + p (d) and y = mx (d′) are therefore either parallel and distinct or parallel and coincident.
The points O(0 ; 0) and B(1 ; m) belong to the line (d′), so the vector \overrightarrow{OB}(1 ; m) is a direction vector of (d′) and of (d).
Parallel lines: two lines are parallel if and only if a direction vector of one is collinear with a direction vector of the other.
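For instance (an added illustration), reading off direction vectors from two Cartesian equations:

d_1:\ 2x - 3y + 1 = 0\ \Rightarrow\ \vec{u}(3 ; 2), \qquad d_2:\ 4x - 6y + 5 = 0\ \Rightarrow\ \vec{v}(6 ; 4) = 2\vec{u},
\text{so } d_1 \parallel d_2\ \text{(and distinct, since the two equations are not proportional).}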
5. How do you decompose a vector in terms of two nonzero, non-collinear vectors?
Basis of two vectors: in the plane, a point together with two nonzero, non-collinear vectors forms a Cartesian coordinate system. The two vectors on their own then form a basis.
Property: every vector of the plane can be written in a unique way in terms of the two vectors of a basis.
If (\vec{u} ; \vec{v}) is a basis of the plane, then every vector \vec{w} can be written in a unique way in the form:
\vec{w} = a\vec{u} + b\vec{v}, with a ∈ ℝ and b ∈ ℝ.
(a ; b) is the pair of coordinates of the vector \vec{w}.
Example: let ABCD be a parallelogram and E the reflection of D across C.
By Chasles' relation:
\overrightarrow{AE} = \overrightarrow{AD} + \overrightarrow{DE},
\overrightarrow{AE} = \overrightarrow{AD} + 2\overrightarrow{DC},
\overrightarrow{AE} = \overrightarrow{AD} + 2\overrightarrow{AB},
\overrightarrow{AE} = 2\overrightarrow{AB} + 1\overrightarrow{AD}.
The vector \overrightarrow{AE} has coordinates (2 ; 1) in the basis (\overrightarrow{AB} ; \overrightarrow{AD});
the point E has coordinates (2 ; 1) in the coordinate system (A ; \overrightarrow{AB} ; \overrightarrow{AD}).
Key points
Let \vec{u}(a ; b) and \vec{v}(a' ; b') be two vectors, with (a ; b) ≠ (0 ; 0).
\vec{v} is collinear with \vec{u} if and only if \vec{v} = k\vec{u} (k ∈ ℝ), that is, if and only if ab' − a'b = 0.
Every line of the plane has an equation of the form ax + by + c = 0 (with (a ; b) ≠ (0 ; 0)).
If b ≠ 0, the line is not vertical and its equation can be written y = mx + p.
A direction vector is of the form \vec{u}(−b ; a), or \vec{v}(1 ; m) for the reduced form.
If \vec{u} and \vec{v} are two nonzero, non-collinear vectors of the plane, every vector \vec{w} of the plane decomposes in a unique way as \vec{w} = a\vec{u} + b\vec{v} (a ∈ ℝ and b ∈ ℝ).
(a ; b) is the pair of coordinates of \vec{w}.
|
__label__pos
| 0.986754 |
Chapter 5. Descriptive Statistics
One kind of statistics that, consciously or not, is often used in many fields is descriptive statistics. This section works through several example cases of using descriptive statistics.
What Descriptive Statistics Means
Descriptive statistics is the branch of statistics that studies how research data are collected, organized and presented. The activities in this category include collecting the data, grouping the data, computing statistical values and functions, and finally producing charts and figures.
Figure 5.1. The Descriptive Statistics menu
The various kinds of descriptive statistics in SPSS 16 can be found under the menu Analyze -> Descriptive Statistics. Selecting this menu brings up the submenus shown in Figure 5.1 above, which include frequency analysis, descriptives, data exploration (Explore) and crosstabs.
The Frequencies Procedure
The main use of the FREQUENCIES procedure is to check the data entry: has the data been input correctly? Frequency statistics give a general summary of the data, such as how many respondents are male, how many are female, and so on. The FREQUENCIES procedure also provides descriptive information about the demographic characteristics of the sample, for example what percentage of respondents agree with a given action, what percentage reject it, and so on. The following example shows how to run a descriptive analysis with SPSS 16. In general, running a statistical procedure follows the steps below.
Figure 5.2. The analysis procedure in SPSS
Example Case
The table below shows the data to be analyzed with descriptive statistics. The fields to be analyzed are umur (age), pendidikan (education), jenis kelamin (gender) and keterangan (response).
Figure 5.3. The data to be processed
To produce descriptive statistics from the table above, follow these steps in SPSS: 1. Click the Analyze menu, choose Descriptive Statistics and then Frequencies. The following dialog appears.
Figure 5.4. The Frequencies dialog
yaitu jika data diurutkan dan dibagi dua sama besar. Gambar 5. . Mode menunjukkan nilai yang paling sering muncul dalam suatu range statistik. Sum. Masukkan variabel Umur. menunjukkan total data. Dari gambar di atas bisa dilihat bahwa terjadi pembagian kelompok Statistik.2. Berilah tanda chek point untuk memunculkan item-item analisa yang diinginkan. mode. 3. median. Frequencies Statistics 4. Central tendency Pengukuran tendensi pusat yang meliputi mean. dan Excel ke dalam kotak Variables untuk dianalisa. Pembagian kelompok tersebut adalah: a. Median menunjukkan titik tengah data.5. dan sum. Pilih tombol Statistics untuk mengatur item-item yang akan ditampilkan dalam output seperti berikut. Word. • • • • 72 Mean menunjukkan rata-rata dari masing-masing variabel semua responden.
= kurtosis statistic ± 1. variance. Standard error of mean. Dispersion Pengukuran dispersi yang meliputi standard deviation. Minimum menunjukkan nilai terendah dari suatu deretan data.96 * (standard error of skewness). • • Ukuran skewness adalah nilai skewness dibagi dengan standard error skewness. = skewness statistic ± 1.96 * (standard error of kurtosis). Jika rasio skewness berada di antara nilai -2. dan 95% C. range. dan standard error of the mean. maximum. • Bahwa 95% confidence interval (C. Standard deviasi menunjukkan despersi rata-rata dari sampel.) di sekitar nilai skewness and 95% confidence interval yang lain di sekitar nilai kurtosis. c. diukur sebagai standard deviasi dibagi dengan akar dari jumlah data valid (n).I. minimum.00. Maximum menunjukkan nilai tertinggi dari suatu deretan data. Bagian ini digunakan untuk melakukan pengecekan apakah distribusi data adalah distribusi normal.b. 73 .I. Nilai kurtosis adalah nilai kurtosis dibagi dengan standard error-nya. maka distribusi data adalah normal sehingga data di atas masih berdistribusi normal. The 95% confidence intervals atau tingkat kepercayaan didefinisikan sebagai berikut: • • 95% C. I.00 sampai dengan 2. Distribution Pengukuran distribusi yang meliputi skewness and kurtosis.
= skewness statistic ± 1. = kurtosis statistic ± 1. maka 95% confidence interval ditemukan sebagai 95% C.705 .761) = -1.6.Sebagai contoh. = -.759 = .193 Representasi grafik dari 95% confidence interval dari nilai kurtosis ini dapat ditunjukkan seperti gambar di bawah ini.96 * . 95% Confidence Interval for the Kurtosis Value 74 .339 ± 1.1.388. jika skewness statistik adalah -. I.7. Gambar 5. 95% Confidence Interval for the Skewness Value Untuk kurtosisnya dapat dihitung sebagai berikut: 95% C.761) to (-.96 * (standard error of skewness). Gambar 5.422 Representasi grafik dari 95% confidence interval dari nilai skewness ini dapat ditunjukkan seperti gambar berikut.761 = (-.339 .96 * .783 to 2.488 = (.448) = -0.705 ± 1.339 ± 0.488) to (.339 dan standard error skewness adalah .100 to 0.0. I.96 * (standard error of kurtosis) = .388 = -.705 + 1.339 + 0.705 ± 1.
6.Range dari 95% confidence interval adalah dari -0. Setelah dipilih point-point statistik yang diinginkan dan sesuai dengan kebutuhan.783 (through zero) sampai 2. Gambar 5. d. Pilih tombol Charts untuk memilih model grafik yang ingin ditampilkan dalam output. Rata-rata umur 75% di bawah 23 tahun.8. berikan tanda check point untuk item-item analisa yang akan ditampilkan pada output window. Sebagai contoh. Dialog Charts 75 . 5. Rata-rata umur 50% di bawah 20 tahun. Oleh karena 95% confidence interval memiliki nilai nol di dalamnya. maka dapat dikatakan bahwa “the distribution has no kurtosis”. Ini dapat diartikan bahwa nilai korelasi memenuhi syarat untuk sebuah normally distributed atau distribusi normal. Dari opsi-opsi statistik yang telah dibahas di atas.193. data yang terkelompok sebagai berikut. klik tombol Continue. Percentile values Percentile values akan menampilkan data-data secara berkelompok menjadi sebuah prosentase. • • • Rata-rata umur 25% di bawah 19 tahun.
Grafik baris mempunyai beberapa kelebihan. Klik OK dari kotak dialog Frequencies. seperti: • Grafik Batang Grafik batang menunjukkan variasi nilai dari suatu data yang ditampilkan dalam bentuk batang atau kotak. Membaca Output Setelah dilakukan pemilihan option-option yang diinginkan dan sesuai kebutuhan. 76 . klik Continue jika ingin dilanjutkan ke langkah berikutnya.Adapun bentuk-bentuk dari sebuah grafik sebenarnya dikelompokkan menjadi beberapa jenis. selanjutnya tekan tombol OK pada kotak dialog Frequencies untuk melanjutkan perintah. • Grafik Pie Seperti namanya. 7. Tiap-tiap potong dari kue tersebut menunjukkan nilai prosentase dari data. Kelemahannya adalah jika terlalu banyak garis akan terkesan rumit dan tampilan yang terkesan sangat sederhana. Selanjutnya setelah mengatur semua pilihan. grafik model pie merupakan bagan yang berbentuk lingkaran yang menyerupai sebuah kue (pie). Penekanan tombol OK akan memunculkan output lengkap seperti gambar di bawah ini. 8. • Grafik Garis Grafik garis akan menunjukkan variasi nilai suatu data dengan tampilan yang berupa garis. seperti dapat menunjukkan hubungan antarnilai dengan baik dan mudah dimengerti. Grafik model ini paling cocok jika digunakan untuk memvisualisasikan suatu perbandingan serta dapat menunjukkan nilai dengan tepat.
yaitu 20 buah data.9. Median menunjukkan titik tengah data. Standard deviasi menunjukkan dispersi rata-rata dari sampel. yaitu jika data diurutkan dan dibagi dua sama besar.Gambar 5. • • • • • N menunjukkan jumlah data yang diproses. Output Lengkap Output Tabel Statistik Tabel statistik ditunjukkan seperti Gambar 5.10 di bawah. terlihat beberapa hal hasil pengolahan yang dapat dijelaskan sebagai berikut. 77 . Mode menunjukkan nilai yang paling sering muncul dalam suatu range statistik. Mean menunjukkan rata-rata dari masing-masing variabel semua responden.
11. • Frequency. demikian seterusnya.• • Minimum menunjukkan data terkecil dari sekelompok variabel. Pada output tersebut dapat dijelaskan beberapa hal sebagai berikut.10. Tabel ini menunjukkan frekuensi kemunculan data seperti Gambar 5. menunjukkan jumlah responden yang memiliki umur tertentu. demikian seterusnya. Gambar 5. responden dengan umur 18 tahun ada 1 orang. Maximum menunjukkan nilai data yang terbesar. Seperti responden dengan umur 16 tahun ada 1 orang. 78 . Tabel Statistik Output Tabel Frekuensi Output berikutnya dari hasil pengolahan data di atas yang masih tampil pada lembar analisa ini adalah tabel Frekuensi.
dan 4 responden yang lulusan SMA. Gambar 5.• Percent. Frekuensi Pendidikan Dari tabel frekuensi Pendidikan pada Gambar 5.12 bisa dilihat bahwa terdapat 2 reponden yang lulusan SD. Tabel Frequency Umur Gambar 5.12. 79 . serta 1 responden lulusan Perguruan Tinggi. 1 reponden yang lulusan SMP.11. menunjukkan prosentase dari jumlah data yang memiliki tinggi tertentu.
Gambar 5.14.Gambar 5. Output Grafik Output terakhir yang ada dalam lembar data output adalah tampilan grafik bar chart. dan 3 responden juga yang menyatakan tidak setuju. Frekuensi Keterangan Sedangkan dari tabel keterangan bisa dilihat bahwa terdapat 3 responden mengatakan setuju. Laporan berbentuk grafik ini akan cukup penting karena mempermudah pemakai untuk memahami secara cepat isi dari sebuah laporan yang disajikan. 3 responden mengatakan raguragu. 80 .13. Frekuensi Jenis Kelamin Dari table frekuensi Jenis kelamin bisa dilihat bahwa ada 3 responden yang jenis kelamin laki-laki dan 5 responden yang memiliki jenis kelamin perempuan.
maksimum). dan sebagainya. Seperti berapa jumlah responden laki-laki. sebenarnya hampir sama dengan statistik frekuensi. skewness) dan mean. berapa jumlah responden perempuan. sum. tetapi prosedur analisis deskritif memiliki keunggulan. minimum. Analisis ini juga memiliki kegunaan pokok untuk melakukan pengecekan terhadap input data. distribusi (kurtosis. Output Grafik Prosedur Descriptives Statistik dengan analisis deskriptif. mengingat bahwa analisis ini akan menghasilkan resume data secara umum. 81 . analisis ini juga memiliki kegunaan untuk menyediakan informasi deskripsi data dan demografi sampel yang diambil. Sebagian besar analisis statistik memang dikalkulasi menggunakan prosedur frekuensi.Gambar 5. Disamping itu. yaitu menghasilkan analisa dispersi (standard deviasi.15. yaitu lebih efisien dalam beberapa hal karena tidak melakukan sorting atau pengurutan data nilai ke tabel frekuensi. dan lain sebagainya.
yaitu Analyze -> Descriptive Statistics -> Descriptives. 2. Deskriptif Dialog 82 . lakukan langkah sebagai berikut.17. Data yang Akan Diolah Untuk menjalankan prosedur deskriptive. 1. Dengan pemilihan menu Descriptives tersebut.16. Untuk menjalankan prosedur deskriptif ini. Gambar 5.Contoh Kasus Gambar 5. Gambar 5. kita dapat menggunakan menu pada SPSS. akan muncul tampilan kotak dialog sebagai berikut.16 menunjukkan tabel sebagai contoh kasus yang akan dibahas pada analisa menggunakan prosedur descriptives.
Penekanan tombol tersebut akan memunculkan tampilan seperti di bawah ini. pendidikan. Memindahkan Variabel 5. Dengan tampilan seperti di atas. 4. Gambar 5. 6.18. Tombol Options digunakan untuk menampilkan daftar opsiopsi statistik yang akan ditampilkan pada lembar output sesuai dengan kebutuhan analisis. Terlihat bahwa meskipun file ini sebenarnya memiliki 5 variabel. Masukkan variabel yang akan dianalisa dari kolom kiri ke kolom Variabel yang ada di sebelah kanan. dan juga keterangan. tetapi yang terlihat hanya variabel yang bertipe numerik saja. Tekan tombol tersebut untuk mengatur opsi statistik yang diinginkan. berarti bahwa ada empat data yang akan dianalisa. jeniskelamin. Penekanan tombol tersebut akan menampilkan jendela seperti di bawah ini. Perhatikan tampilannya seperti pada gambar di bawah ini. Selanjutnya klik tombol Option untuk mengatur opsi-opsi analisis dekripsi. 7.3. 83 . Hal ini berbeda dengan ketika menggunakan analisis frekuensi yang memunculkan semua variabel dan semua tipe data. yaitu umur.
Minimum. untuk menentukan kriteria dalam melakukan pengurutan data.19.Gambar 5. isinya hampir sama dengan statistik frekuensi. Kurtosis dan Skewness. Tentukan jenis opsi yang diinginkan dan berikan tanda chek point untuk opsi yang dipilih. pilih opsi-opsi sebagai berikut: • • • • • Mean. Namun. Pilih Order berdasarkan Variable List. Perhatikan bahwa pada kotak Options Dialog. Maximum. Sebagai latihan. menunjukkan nilai terendah dari suatu deretan data. • 84 . Kotak Dialog Options 8. Standard Deviasi. menunjukkan rata-rata dari masing-masing variabel semua responden. menunjukkan dispersi rata-rata dari sampel. memang lebih simpel dan hanya memuat beberapa item statistik saja. 9. menunjukkan nilai tertinggi dari suatu deretan data. yang digunakan untuk melakukan pengecekan apakah distribusi data yang diolah masuk dalam kategori distribusi normal.
Seperti contoh data umur responden yang tertinggi adalah 45 tahun dan data umur terendah adalah 22 tahun. Gambar 5. Tekan tombol OK pada kotak dialog analisis deskriptif sehingga akan muncul window output seperti pada gambar di bawah ini.20 di atas menunjukkan nilai maksimum dan minimum data.20. rata-rata nilai. Output Maximum dan Minimum Gambar 5.21. Gambar 5. atau alphabetic. Hasil Analisis Setelah ditentukan variable yang dipilih. Output Mean dan STD 85 .• Kotak Display Order menunjukkan kriteria pengurutan data. langkah selanjutnya adalah menjalankan prosedur. Pengurutan bisa dilakukan berdasarkan variabel.
23. 86 .75 yang berarti bahwa sebagian besar responden menyatakan setuju dengan kenaikan SPP.21 menunjukkan nilai rata-rata dan nilai standard deviasi. Diikuti dengan standard error dari nilai kurtosis tersebut.22. Diikuti dengan standard error dari nilai skewness tersebut.23 memperlihatkan nilai kurtosis yang menunjukkan distribusi data normal. Output Kurtosis Gambar 5. Gambar 5. Gambar 5.22 menunjukkan nilai skewness yang menunjukkan distribusi data normal.Gambar 5. Rata-rata keterangan adalah 1. Output Skewness Gambar 5.
Pada hasil eksplorasi data ini. jarak antara data atau karakteristik yang lain. perbedaan karakteristik antara subpopulasi. identifikasi data. yaitu Tabel 2. yaitu group dalam suatu kasus. tampilan data mungkin menunjukkan bahwa data yang akan dianalisis memiliki nilai yang tidak biasa. buatlah eksplorasi data-nya. Prosedur eksplorasi data memungkinkan untuk mengetahui tampilan data. deskripsi data.Prosedur Explore Analisis eksplorasi data merupakan teknik analisa yang sekaligus dapat membantu memberi arahan bagi peneliti untuk memilih teknik statistik yang akan diimplementasikan pada data yang akan dikehendaki. Gambar 5. Contoh Kasus Pada tabel yang telah dibuat pada latihan sebelumnya. Tabel yang Akan Dianalisa dengan Explore 87 .1. nilai yang ekstrim. Selanjutnya dengan melakukan eksplorasi data mungkin mengindikasikan kebutuhan transformasi data (akan dibahas lebih rinci pada bab selanjutnya) jika teknik yang akan digunakan mensyaratkan berdistribusi normal atau mungkin membutuhkan suatu analisis statistik non-parametrik. pengujian asumsi.24. Misalkan dalam suatu data pengamatan ada suatu data yang nilainya jauh dari jangkauan nilai-nilai yang ada.
Gambar 5. Pada bagian tombol analisis terdapat tiga pilihan. Options. 2. 6. Variabel yang terdapat pada Faktor List digunakan untuk mem-break data dalam suatu group menunjukkan kategori. Pada eksplorasi data dapat digunakan variabel kuantitatif (pengukuran interval atau rasio). dan pilih variabel yang dikehendaki. 88 . nilai ini dapat berupa numerik atau string pendek. Plots. digunakan untuk mengelola missing case atau data yang tidak tercatat. yaitu: • • • Statistics. Klik anak panah kanan pada kolom yang sesuai untuk variabel tersebut. misalkan kolom Dependent List. untuk melakukan perhitungan statistik-statistik dasar. Pilih menu Analyze pada menubar. kemudian klik Descriptive Statistics. Pilih bagian Explore sehingga akan keluar tampilan seperti gambar di bawah ini. Label Case digunakan untuk memberi label pada tampilan luar Boxplot yang dapat berupa string atau numerik.25. 5. 4.1. Dialog Explore 3. tombol untuk membuat visualisasi grafik dari analisis.
Jika selesai diatur. Gambar 5. Gambar 5. Tekan Continue dan selanjutnya pilih tombol Plots. akan muncul tampilan berikut. klik continue dan tekan tombol OK. percentile.27. dan tampilan grafik steam and leaf plots power estimation.26. 89 . tabel m-estimator.7. Jika dipilih tombol statistik. Akan muncul dialog sebagai berikut. Pembahasan Output Beberapa hasil analisa yang dapat dilihat dari tabel output antara lain adalah tabel descriptives. Outputoutput tersebut akan dibahas sebagai berikut. outlier. Dialog Plot 9. Dialog Explore Statistik 8.
Descriptives Table
This output gives the measures of central tendency of the data, represented by the mean, the 5% trimmed mean and the median, together with measures of dispersion such as the standard deviation, standard error, variance, range, minimum, maximum, and interquartile range.
Figure 5.28. Descriptives Output
The trimmed mean itself is computed by first sorting the data in ascending order and then taking 5% of the number of observations (in absolute terms). That many values are then removed from the bottom of the ordered data and the same number from the top, and the mean of the remaining data is computed. In Descriptives a confidence interval for the mean can also be specified; the default is 95%, but the value can be changed to whatever the analyst wishes.
M-Estimators Table
This output belongs to robust statistics, applied to the mean and the median to estimate the central location of the data.
Figure 5.29. M-Estimator Output
The estimates obtained include Huber's M-estimator, Hampel's M-estimator, Andrews' wave estimator, and Tukey's estimator.
Percentiles Table
This output displays percentile values, as shown in the figure below.
Figure 5.30. Percentiles
Outliers Table
This output displays the largest and the smallest data values together with their labels. With a display like this, the researcher gets a good picture of the range of the data.
Figure 5.31. Outliers
Stem-and-Leaf Plots
Stem-and-leaf plots provide a graphical visualization of the data and are an alternative visual check when there is more than one dependent variable.
Figure 5.32. Stem and Leaf Plots
Factor Levels Together produces a separate display for each dependent variable, while Dependents Together produces a separate display for each group defined by a factor variable.
Crosstabs Analysis
Crosstabs analysis falls into the category of descriptive statistics: it displays a cross tabulation (contingency table) showing the joint distribution of, and a test of the relationship between, two or more variables. Many categories of statistics are available within the CROSSTABS procedure. Several of the CROSSTABS statistics are suited to nominal-scale measures, but some also suit interval scales. In order to use the CROSSTABS results we have to recognize what kind of data suits each statistic, and we must also recognize the level of measurement of the scale being studied.
Several CROSSTABS statistics are used for nominal-scale data, among them:
• Pearson Chi-Square
• Likelihood Ratio
• Phi, Cramer's V
• Contingency Coefficient
• Lambda
• Goodman & Kruskal Tau
• Uncertainty Coefficient
• Kappa
Several others are suited to ordinal scales, such as:
• Mantel-Haenszel
• Gamma, Tau-c and Tau-b
• Somers' D
• Spearman Correlation
Meanwhile, a few others are suited to interval-level measures, namely:
• Pearson's R
• Eta
Example Case
Below are data from a survey of 15 people covering gender, number of study hours per day, and average exam grade. Using these data, carry out a statistical test to determine whether or not there is a relationship between gender and average grade, or between study hours and average grade.
Figure 5.33. Example Data
To produce descriptive statistics with Crosstabs, carry out the following steps in SPSS:
1. Run the Crosstabs analysis by choosing the Analyze menu, then Descriptive Statistics, then clicking Crosstabs.
2. If we want to look at the relationship between gender and average grade, select the variables as follows.
Figure 5.34. Crosstabs Dialog
3. Click the Statistics button and note the available choices, shown below.
Figure 5.35. Crosstabs Statistics
4. Tick only the Chi-Square check box. The other kinds of analysis will be covered in the later statistical-analysis sections of this book.
5. Press Continue and then the OK button.
Discussion of the Output
Several pieces of output appear, including the following.
Case Processing Summary Table
This output summarizes the data being analyzed: how many cases are present and how many are missing, both as percentages and as counts.
Figure 5.36. Case Processing Summary
Crosstabulation Table
This table describes the counts and their relationships.
Figure 5.37. Case Processing Summary
From the table above it can be seen that:
• 2 males obtained an average grade of 5.
• 1 male obtained an average grade of 6.
• 1 male obtained an average grade of 7.
• 3 females obtained an average grade of 5.
• 3 females obtained an average grade of 6.
• and so on.
Chi-Square Test Table
The chi-square test looks in more detail at whether or not there is a relationship between the gender variable and the test grade. Consider the figure below.
Figure 5.38. Chi-Square Test Table
To decide whether or not there is a relationship between gender and test grade, note the decision rule used in statistics:
• H0: There is no relationship between the rows and the columns.
• H1: There is a relationship between the rows and the columns.
If the probability is greater than 0.05, H0 is accepted; conversely, if the probability is less than 0.05, H0 is rejected. The probability value in this table can be read from the Asymp. Sig. (2-sided) column. Because the Asymp. Sig. is 0.891, which is greater than 0.05, H0 is accepted. If H0 is accepted, it means there is no relationship between the row and column variables — or, more plainly, "there is no relationship between gender and average grade".
Finding the Relationship between Study Hours and Average Grade
In the same way, that is, using cross tabulation, the relationship between study hours and average grade can be examined. Carry out the same steps as above, and the following analysis results appear.
Case Processing Summary Table
How many cases are present and how many are missing, both as percentages and as counts, can be seen below.
Figure 5.39. Case Processing Summary
Crosstabulation Table
The table describing the counts and their relationships can be seen below.
Figure 5.40. Case Processing Summary
From the table above the analysis results are as follows:
• 2 people who study 1 hour a day obtained an average grade of 5.
• 1 person who studies 1 hour a day obtained an average grade of 6.
• 3 people who study 2 hours a day obtained an average grade of 5.
• 2 people who study 4 hours a day obtained an average grade of 7.
• And so on.
Chi-Square Test Table
The chi-square test looks in more detail at whether or not there is a relationship between average study hours and the test grade. To decide whether such a relationship exists, the probability can be read directly. From the table it can be seen that the Asymp. Sig. is 0.086, which is greater than 0.05, so H0 is accepted. If H0 is accepted, it means there is no relationship between the row and column variables. Even so, we can at least see that the probability value is close to 0.05, so it could be said that there is almost a relationship between average study hours and the test grade.
Figure 5.41. Chi-Square Test Table
Sometimes a study simply does not produce results that match our wishes or the existing theory — the outcome could be any of a thousand and one possibilities. Still, that is research. Inaccurate research results can be caused by many things, such as a poor sampling method, a lack of research data, and so on, which is why the results of a study are sometimes regarded as confusing, not in line with expectations, or not in line with existing theory.
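As a quick cross-check outside SPSS, the same accept/reject decision can be reproduced with a short script. The sketch below uses Python with the scipy library; the full contingency table is not reproduced in the text above, so the counts here are partly hypothetical and purely illustrative.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical gender x average-grade counts (rows: male, female;
# columns: grade 5, 6, 7). Not the full survey table from the text.
table = np.array([[2, 1, 1],
                  [3, 3, 5]])

chi2, p, dof, expected = chi2_contingency(table)
print("chi-square = %.3f, p-value = %.3f, dof = %d" % (chi2, p, dof))

# Same decision rule as in the text: accept H0 when p > 0.05.
if p > 0.05:
    print("H0 accepted: no relationship between the row and column variables")
else:
    print("H0 rejected: there is a relationship")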
How to use tf.while_loop() in TensorFlow
This is a generic question. I found that in TensorFlow, after we build the graph and feed data into it, the output of the graph is a tensor. But in many cases we need to perform some computation based on this output (which is a tensor), and that is not allowed in TensorFlow.
For example, I am trying to implement an RNN that loops over time steps based on a property of the data. That is, I need to use a tensor to judge whether I should stop (I am not using dynamic_rnn since in my design the RNN is highly customized). It seems to me that tf.while_loop(cond, body, ...) could be a candidate for my implementation. But the official tutorial is too simple. I don't know how to add more functionality to the 'body'. Can someone give me some more complex examples?
Also, in such a case, where future computation is based on the tensor output (for example, stopping the RNN based on a criterion computed from the output) — which is a very common situation — is there a more elegant or better way than a dynamic graph?
What is stopping you from adding more functionality to the body? You can build any complex computational graph you like inside the body and take whatever inputs you want from the enclosing graph. Also, outside the loop, you can do whatever you want with the outputs it returns. As you can see, TensorFlow's control-flow primitives were built with a lot of generality in mind. Below is another 'simple' example, in case it helps.
import tensorflow as tf
import numpy as np

def body(x):
    a = tf.random_uniform(shape=[2, 2], dtype=tf.int32, maxval=100)
    b = tf.constant(np.array([[1, 2], [3, 4]]), dtype=tf.int32)
    c = a + b
    return tf.nn.relu(x + c)

def condition(x):
    return tf.reduce_sum(x) < 100

x = tf.Variable(tf.constant(0, shape=[2, 2]))

with tf.Session():
    tf.global_variables_initializer().run()
    result = tf.while_loop(condition, body, [x])
    print(result.eval())
Signals
Discussion in 'C Programming' started by CJ, Oct 29, 2007.
1. CJ
CJ Guest
Hi Friends -
Is there any way to tell if a C library function installs its own signal
handler?
~CJ
CJ, Oct 29, 2007
#1
2. In article <>,
CJ <> wrote:
>Is there any way to tell if a C library function installs its own signal
>handler?
C89 section 4.7.1:
The implementation shall behave as if no library function calls
the signal function.
Thus, if a C library function does install its own signal
handler, then it would have to do so in a manner that is
transparent to any of the C standard routines that could possibly
detect that signal handler. Therefore, if there is a way of
detecting such a thing, it would have to lie outside of the C
standard (e.g., an OS routine.)
--
"History is a pile of debris" -- Laurie Anderson
Walter Roberson, Oct 29, 2007
#2
Hyperthreading
Discussion in 'Die-hard Overclocking & Case Modifications' started by nateluthje, Feb 26, 2018.
1. nateluthje
nateluthje Master Guru
Hi all,
I was wondering is it better to have hyperthreading off in the bios and be able to run higher clock speeds, or to have it enabled and run lower clock speeds.
With HT "ON" I can only reach 4.6ghz, with it "off" I can reach 4.8ghz. This affects benchmark scores at least - well, not really Timespy, but there is a noticeable drop in the other 3dmark tests.
Most games are not that CPU intensive - take (for example) "the division" or "ghost recon wildlands" they barely reach on average 50-60% cpu (I said as an average), I know that this can vary.
What are your opinions?
Cheers.
2. Turanis
Turanis Ancient Guru
Only YOU can tell us how it is. :D
Usually all users leave HT on. But you can test it for yourself and post results here. :)
(Test it from games bench or run MSI AB bench tool)
Dragondale13 likes this.
3. DAW40
DAW40 Master Guru
No no no — if you disable HT, then apps that use it (like Photoshop, Premiere, DAW apps, etc.) lose out: it gives about a 20 to 25 percent boost in speed. Turning it off is a cardinal sin. thx
4. RealNC
RealNC Ancient Guru
Nah. Leave it on, unless you come across a game that is really starving for single-core performance. CS:GO is an example, where insanely high frame rates can be beneficial (400FPS+).
5. WareTernal
WareTernal Master Guru
Turanis is right. It depends on what games you play and how you use your PC.
If you are mostly gaming, then I'd go for the higher clock speed.
IMO, 3dmark doesn't reflect reality very well when i comes to HT. I have a [email protected] and [email protected]. The 2600 always scores way higher in 3dmark, but in every game I've tested the 2500k always score slightly higher.
Few games use HT for anything useful(or course depending on your hardware and settings that may change).
If you're streaming games, you might get a smoother experience with it on, but even that is a maybe.
Some game may show a lower peak FPS, but higher min FPS with HT on, and sometimes that is more important than a few more peak FPS.
I'd say if you are doing things where you are waiting on your PC for 30+ minutes(like rendering a large video), you might want to consider leaving it on.
For games, I think it's not a huge difference either way, and it all comes back to what Turanis said.
Dragondale13 likes this.
6. DAW40
DAW40 Master Guru
Yes, wareternal is correct. You don't need HT for games. You need HT for hard core apps that take full advantage of it. But honestly it's almost like having extra cores, except the boost in speed is like 20 to 25 percent.
7. -Tj-
-Tj- Ancient Guru
I never turned it off since I got this cpu back in June 2013, imo its pointless for that 200MHz increase.
Angantyr and DAW40 like this.
8. DAW40
DAW40 Master Guru
Ill take 12 cores, well 12 sorta virtual cores over a 1000 more Mhz let alone 200Mhz. sheeeez... sighs
9. RealNC
RealNC Ancient Guru
Well, let's not get too much into hyperbole here. 1GHz higher clock but with 6 threads will result in a much higher performance in the majority of games compared to 1GHz lower clock but with 12 threads.
Organik likes this.
10. DAW40
DAW40 Master Guru
You are correct, Sir. Games don't use HT, so if it's a hardcore gamer you want to turn off HT. However, if you're a casual gamer and do a lot of desktop work with apps like CAD, video editing, 3D modeling, PS and DAW, then you need to turn on HT. thx
11. insp1re2600
insp1re2600 Maha Guru
yes they can...
When a registered user gets enough suggested edits rejected, they're blocked from suggesting edits for a number of days.
I've always assumed it was a user-level block, but what about anonymous suggested edits? Can anonymous users suggest edits with impunity, or is there a lower-level block (like, say, IP-blocking) in place, if only for anonymous users?
I can't say for sure, but it's likely. Their IP addresses are saved at any rate, as evidenced by their consistent Gravatars. – lunboks Jan 6 '12 at 23:25
@lunboks Oh... this puts down my theory... – Gabe Jan 6 '12 at 23:33
2 Answers
Yes there is an IP level block in place as well. The logic is exactly the same as it is for registered users, except that the key is the IP Address.
I would presume that the block is an IP block, because of the fact that when they block users from asking on meta, it is also an IP block and I'm sure that they've taken the anonymous suggested edit users into consideration.
I looked high and low for more details on how the edit-blocking system works and found nothing, so some clarification from someone who knows for sure would be nice.
Not So Flat After All
Time Limit: 1000MS    Memory Limit: 65536K
Total Submissions: 747    Accepted: 283
Description
Any positive integer v can be written as p1^a1 * p2^a2 * ... * pn^an where pi is a prime number and ai ≥ 0. For example: 24 = 2^3 * 3^1.
Pick any two prime numbers p1 and p2 where p1 ≠ p2. Imagine a two-dimensional plane where the powers of p1 are plotted on the x-axis and the powers of p2 on the y-axis. Now any number that can be written as p1^a1 * p2^a2 can be plotted on this plane at location (x, y) = (a1, a2). The figure on the right shows a few examples where p1 = 3 and p2 = 2.
This idea can be extended for any N-Dimensional space where each of the N axes is assigned a unique prime number. Each N-Dimensional space has a unique set of primes.
We call such a set the Space Identification Set, or S for short. |S| (the ordinal of S) is N.
Any number that can be expressed as a product of the primes pi ∈ S, each raised to a power ai ≥ 0, can be plotted in this |S|-Dimensional space. The figure at the bottom illustrates this idea for N = 3 and S = {2, 3, 7}. Needless to say, any number that can be plotted in space A can also be plotted in space B as long as SA ⊆ SB.
We define the distance between any two points in a given N-Dimensional space to be the sum of units traveled to get from one point to the other while following the grid lines (i.e. movement is always parallel to one of the axes.) For example, in the figure below, the distance between 168 and 882 is 4.
Given two positive integers, write a program that determines the minimum ordinal of a space where both numbers can be plotted in. The program also determines the distance between these two integers in that space.
Input
Your program will be tested on one or more test cases. Each test case is specified on a line with two positive integers (0 < A,B < 1, 000, 000) where A * B > 1.
The last line is made of two zeros.
Output
For each test case, print the following line:
k. X:D
Where k is the test case number (starting at one,) X is the minimum ordinal needed in a space that both A and B can be plotted in. D is the distance between these two points.
Sample Input
168 882
770 792
0 0
Sample Output
1. 3:4
2. 5:6
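The problem page does not include a reference solution; the following is an unofficial Python sketch of the intended computation — factor both numbers, count the distinct primes that appear in either (that is the minimum ordinal X), and sum the absolute differences of the exponents (that is the distance D). It reproduces the sample output above.

import sys
from collections import Counter

def factorize(n):
    # Trial division is plenty for n < 1,000,000.
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

case = 1
for line in sys.stdin:
    if not line.split():
        continue
    a, b = map(int, line.split())
    if a == 0 and b == 0:
        break
    fa, fb = factorize(a), factorize(b)
    primes = set(fa) | set(fb)
    x = len(primes)                                  # minimum ordinal
    d = sum(abs(fa[p] - fb[p]) for p in primes)      # grid distance
    print("%d. %d:%d" % (case, x, d))
    case += 1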
Source
The strange story of “Extended Random”
Yesterday, David Benjamin posted a pretty esoteric note on the IETF's TLS mailing list. At a superficial level, the post describes some seizure-inducingly boring flaws in older Canon printers. To most people that was a complete snooze. To me and some of my colleagues, however, it was like that scene in X-Files where Mulder and Scully finally learn that aliens are real.
Those fossilized printers confirmed a theory we’d developed in 2014, but had been unable to prove: namely, the existence of a specific feature in RSA’s BSAFE TLS library called “Extended Random” — one that we believe to be evidence of a concerted effort by the NSA to backdoor U.S. cryptographic technology.
Before I get to the details, I want to caveat this post in two different ways. First, I’ve written about the topic of cryptographic backdoors way too much. In 2013, the Snowden revelations revealed the existence of a campaign to sabotage U.S. encryption systems. Since that time, cryptographers have spent thousands of hours identifying, documenting, and trying to convince people to care about these backdoors. We’re tired and we want to do more useful things.
The second caveat covers a problem with any discussion of cryptographic backdoors. Specifically, you never really get absolute proof. There’s always some innocent or coincidental explanation that could sort of fit the evidence — maybe it was all a stupid mistake. So you look for patterns of unlikely coincidences, and use Occam’s razor a lot. You don’t get a Snowden every day.
With all that said, let’s talk about Extended Random, and what this tells us about the NSA. First some background.
Dual_EC_DRBG and RSA BSAFE
To understand the context of this discovery, you need to know about a standard called Dual EC DRBG. This was a proposed random number generator that the NSA developed in the early 2000s. It was standardized by NIST in 2007, and later deployed in some important cryptographic products — though we didn’t know it at the time.
Dual EC has a major problem, which is that it likely contains a backdoor. This was pointed out in 2007 by Shumow and Ferguson, and effectively confirmed by the Snowden leaks in 2013. Drama ensued. NIST responded by pulling the standard. (For an explainer on the Dual EC backdoor, see here.)
Somewhere around this time the world learned that RSA Security had made Dual EC the default random number generator in their popular cryptographic library, which was called BSAFE. RSA hadn’t exactly kept this a secret, but it was such a bonkers thing to do that nobody (in the cryptographic community) had known. So for years RSA shipped their library with this crazy algorithm, which made its way into all sorts of commercial devices.
The RSA drama didn’t quite end there, however. In late 2013, Reuters reported that RSA had taken $10 million to backdoor their software. RSA sort of denies this. Or something. It’s not really clear.
Regardless of the intention, it's known that RSA BSAFE did incorporate Dual EC. This could have been an innocent decision, of course, since Dual EC was a NIST standard. To shed some light on that question, in 2014 my colleagues and I decided to reverse-engineer the BSAFE library to see if the alleged backdoor in Dual EC was actually exploitable by an attacker like the NSA. We figured that specific engineering decisions made by the library designers could be informative in tipping the scales one way or the other.
It turns out they were.
Extended Random
In the course of reverse engineering the Java version of BSAFE, we discovered a funny inclusion. Specifically, we found that BSAFE supports a non-standard extension to the TLS protocol called “Extended Random”.
The Extended Random extension is an IETF Draft proposed by an NSA employee named Margaret Salter (at some point the head of NSA’s Information Assurance Directorate, which worked on “defensive” crypto for DoD) along with Eric Rescorla as a contractor. (Eric was very clearly hired to develop a decent proposal that wouldn’t hurt TLS, and would primarily be used on government machines. The NSA did not share their motivations with him.)
It’s important to note that Extended Random by itself does not introduce any cryptographic vulnerabilities. All it does is increase the amount of random data (“nonces”) used in a TLS protocol connection. This shouldn’t hurt TLS at all, and besides it was largely intended for U.S. government machines.
The only thing that’s interesting about Extended Random is what happens when that random data is generated using the Dual EC algorithm. Specifically, this extra data acts as “rocket fuel”, significantly increasing the efficiency of exploiting the Dual EC backdoor to decrypt TLS connections.
In short, if you’re an agency like the NSA that’s trying to use Dual EC as a backdoor to intercept communications, you’re much better off with a system that uses both Dual EC DRBG and Extended Random. Since Extended Random was never standardized by the IETF, it shouldn’t be in any systems. In fact, to the best of our knowledge, BSAFE is the only system in the world that implements it.
In addition to Extended Random, we discovered a variety of features that, combined with the Dual EC backdoor, could make RSA BSAFE fairly easy to exploit. But Extended Random is by far the strangest and hardest to justify.
So where did this standard come from? For those who like technical mysteries, it turns out that Extended Random isn’t the only funny-smelling proposal the NSA made. It’s actually one of four failed IETF proposals made by NSA employees, or contractors who work closely with the NSA, all of which try to boost the amount of randomness in TLS. Thomas Ptacek has a mind-numbingly detailed discussion of these proposals and his view of their motivation in this post.
Oh my god I never thought spies could be so boring. What’s the new development?
Despite the fact that we found Extended Random in RSA BSAFE (a free version we downloaded from the Internet), a fly in the ointment was that it didn’t actually seem to be enabled. That is: the code was there but the switches to enable it were hard-coded to “off”.
This kind of put a wrench in our theory that RSA might have included Extended Random to make BSAFE connections more exploitable by the NSA. There might be some commercial version of BSAFE out there with this code active, but we were never able to find it or prove it existed. And even worse, it might appear only in some special “U.S. government only” version of BSAFE, which would tend to undermine the theory that there was something intentional about including this code — after all, why would the government spy on itself?
Which finally brings us to the news that appeared on the TLS mailing list the other day. It turns out that certain Canon printers are failing to respond properly to connections made using the new version of TLS (which is called 1.3), because they seem to have implemented an unauthorized TLS extension using the same number as an extension that TLS 1.3 needs in order to operate correctly. Here’s the relevant section of David’s post:
The web interface on some Canon printers breaks with 1.3-capable
ClientHello messages. We have purchased one and confirmed this with a
PIXMA MX492. User reports suggest that it also affects PIXMA MG3650
and MX495 models. It potentially affects a wide range of Canon
printers.
These printers use the RSA BSAFE library to implement TLS and this
library implements the extended_random extension and assigns it number
40. This collides with the key_share extension and causes 1.3-capable
handshakes to fail.
So in short, this news appears to demonstrate that commercial (non-free) versions of RSA BSAFE did deploy the Extended Random extension, and made it active within third-party commercial products. Moreover, they deployed it specifically to machines — specifically off-the-shelf commercial printers — that don’t seem to be reserved for any kind of special government use.
(If these turn out to be special Department of Defense printers, I will eat my words.)
Ironically, the printers are now the only thing that still exhibits the features of this (now deprecated) version of BSAFE. This is not because the NSA was targeting printers. Whatever devices they were targeting are probably gone by now. It’s because printer firmware tends to be obsolete and yet highly persistent. It’s like a remote pool buried beneath the arctic circle that preserves software species that would otherwise vanish from the Internet.
Which brings us to the moral of the story: not only are cryptographic backdoors a terrible idea, but they totally screw up the assigned numbering system for future versions of your protocol.
Actually no, that’s a pretty useless moral. Instead, let’s just say that you can deploy a cryptographic backdoor, but it’s awfully hard to control where it will end up.
Attack of the week: DUHK
Before we get started, fair warning: this is going to be a post about a fairly absurd (but non-trivial!) attack on cryptographic systems. But that's ok, because it's based on a fairly absurd vulnerability.
This work comes from Nadia Heninger, Shaanan Cohney and myself, and follows up on some work we’ve been doing to look into the security of pseudorandom number generation in deployed cryptographic devices. We made a “fun” web page about it and came up with a silly logo. But since this affects something like 25,000 deployed Fortinet devices, the whole thing is actually kind of depressing.
The paper is called “Practical state recovery attacks against legacy RNG implementation“, and it attacks an old vulnerability in a pseudorandom number generator called ANSI X9.31, which is used in a lot of government certified products. The TL;DR is that this ANSI generator really sucks, and is easy to misuse. Worse, when it’s misused — as it has been — some very bad things can happen to the cryptography that relies on it.
First, some background.
What is an ANSI, and why should I care?
A pseudorandom number generator (PRG) is a deterministic algorithm designed to “stretch” a short random seed into a large number of apparently random numbers. These algorithms are used ubiquitously in cryptographic software to supply all of the random bits that our protocols demand.
PRGs are so important, in fact, that the U.S. government has gone to some lengths to standardize them. Today there are three generators approved for use in the U.S. (FIPS) Cryptographic Module Validation Program. Up until 2016, there were four. This last one, which is called the ANSI X9.31 generator, is the one we’re going to talk about here.
ANSI X9.31 is a legacy pseudorandom generator based on a block cipher, typically AES. It takes as its initial seed a pair of values (K, V) where K is a key and V is an initial “seed” (or “state”). The generator now produces a long stream of pseudorandom bits by repeatedly applying the block cipher in the crazy arrangement below:
A single round of the ANSI X9.31 generator instantiated using AES. The Ti value is a “timestamp”, usually generated using the system clock. Ri (at right) represents the output of the generator. The state Vi is updated at each round. However, the key K is fixed throughout the whole process, and never updates.
The diagram above illustrates one of the funny properties of the ANSI generator: namely, that while the state value V updates for each iteration of the generator, the key K never changes. It remains fixed throughout the entire process.
And this is a problem. Nearly twenty years ago, Kelsey, Schneier, Wagner and Hall pointed out that this fact makes the ANSI generator terribly insecure in the event that an attacker should ever learn the key K.
Specifically, if an attacker were to obtain K somehow, and then was able to learn only a single 16-byte raw output block (Ri) from a working PRG, she could do the following: (1) guess the timestamp T, (2) work backwards (decrypting using K) in order to recover the corresponding state value V, and now (3) run the generator forwards or backwards (with guesses for T) to obtain every previous and subsequent output of the generator.
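If you want to see this concretely, here is a rough Python sketch of one X9.31 round and of that state-recovery step. It uses the third-party pycryptodome package for AES and glosses over details like the exact timestamp encoding, so treat it as an illustration of why a leaked K is fatal rather than a faithful reimplementation of the standard (or of any particular product).

from Crypto.Cipher import AES   # pycryptodome

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x931_round(k, v, t):
    # One round: returns (output block R, next state V'). k, v, t are 16-byte strings.
    ecb = AES.new(k, AES.MODE_ECB)
    i = ecb.encrypt(t)               # intermediate value derived from the timestamp
    r = ecb.encrypt(xor(i, v))       # the generator's output block
    v_next = ecb.encrypt(xor(i, r))  # the updated state
    return r, v_next

def recover_state(k, r, t_guess):
    # Given the fixed key K, one raw output block R and a guessed
    # timestamp T, invert the round to recover the state V.
    ecb = AES.new(k, AES.MODE_ECB)
    i = ecb.encrypt(t_guess)
    return xor(ecb.decrypt(r), i)    # V = D_K(R) xor E_K(T)

Once V is recovered, the attacker just runs the round function forward (or inverts it backward) with further timestamp guesses to reproduce every other block the generator ever produced.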
Thus, if an application uses the ANSI generator to produce something like a random nonce (something that is typically sent in a protocol in cleartext), and also uses the generator to produce secret keys, this means an attacker could potentially recover those secret keys and completely break the protocol.
Of course, all of this requires that somehow the attacker learns the secret value K. At the time Kelsey et al. published their result, this was viewed as highly unlikely. After all, we’re really good at keeping secrets.
I assume you’re joking?
So far we’ve established that the ANSI generator is only secure if you can forever secure the value K. However, this seems fairly reasonable. Surely implementers won’t go around leaking their critical secrets all over the place. And certainly not in government-validated cryptographic modules. That would be crazy.
Yet crazy things do happen. We figured someone should probably check.
To see how the X9.31 key is managed in real products, our team developed a sophisticated analytic technique called “making a graduate student read every FIPS document on the CMVP website”.
Most of the documents were fairly vague. And yet, a small handful of widely-used cryptographic modules had language that was troubling. Specifically, several vendors include language in their security policy that indicates the ANSI key was either hard-coded, or at least installed in a factory — as opposed to being freshly generated at each device startup.
Of even more concern: at least one of the hard-coded vendors was Fortinet, a very popular and successful maker of VPN devices and firewalls.
To get more specific, it turns out that starting (apparently in 2009, or perhaps earlier), every FortiOS 4.x device has shipped with a hardcoded value for K. This key has been involved in generating virtually every random bit used to establish VPN connections on those appliances, using both the TLS and IPSec protocols. The implication is that anyone with the resources to simply reverse-engineer the FortiOS firmware (between 2009 and today) could theoretically have been able to recover K themselves — and thus passively decrypt any VPN connection.
(Note: Independent of our work, the ANSI generator was replaced with a more secure alternative as of FortiOS 5.x. As a result of our disclosure, it has also been patched in FortiOS 4.3.19. There are still lots of unpatched firewalls out there, however.)
What does the attack look like?
Running an attack against a VPN device requires three ingredients. The first is the key K, which can be recovered from the FortiOS firmware using a bit of elbow grease. Shaanan Cohney (the aforementioned graduate student) was able to pull it out with a bit of effort.
Next, the attacker must have access to some VPN or TLS traffic. It’s important to note that this is not an active attack. All you really need is a network position that’s capable of monitoring full two-sided TLS or IPSec VPN connections.
Specifically, the attacker needs a full AES block (16 bytes) worth of output from the ANSI generator, plus part of a second block to check success against. Fortunately both TLS and IPSec (IKE) include nonces of sufficient length to obtain this output, and both are drawn from the ANSI generator, which lives in the FortiOS kernel. The attacker also needs the Diffie-Hellman ephemeral public keys, which are part of the protocol transcript.
Finally, you need to know the timestamp Ti that was used to operate the generator. In FortiOS, these timestamps have a 1-microsecond resolution, so guessing them is actually a bit of a challenge. Fortunately, TLS and other protocols include the time-in-seconds as one of the outputs of the TLS protocol, so the actual guessing space is typically only about 2^20 at most. Still, this guessing proves to be one of the most costly elements of the attack.
Given all of the ingredients above, the attacker now decrypts the output block taken from the protocol nonce using K, guesses each possible Ti value, and then winds forward or backwards until she finds the random bits that were used to generate that party’s Diffie-Hellman secret key. Fortunately, the key and nonce are generated one after the other, so this is not quite as painful as it sounds. But it is fairly time consuming. Fortunately, computers are fast, so this is not a dealbreaker.
With the secret key in hand, it’s possible to fully decrypt the VPN connection, read all traffic, and modify the data as needed.
Does the attack really work?
Since we’re not the NSA, it’s awfully hard for us to actually apply this attack to real Fortinet VPN connections in the wild. Not to mention that it would be somewhat unethical.
However, there’s nothing really unethical about scanning for FortiOS devices that are online and willing to accept incoming traffic from the Internet. To validate the attack, the team conducted a large-scale scan of the entire IPv4 address space. Each time we found a device that appeared to present as a FortiOS 4.x VPN, we initiated a connection with it and tested to see if we could break our own connection.
It turns out that there are a lot of FortiOS 4.x devices in the wild. Unfortunately, only a small number of them accept normal IPSec connections from strangers. Fortunately, however, a lot of them do accept TLS connections. Both protocol implementations use the same ANSI generator for their random numbers.
This scan allowed us to validate that — as of October 2017 — the vulnerability was present and exploitable on more than 25,000 Fortinet devices across the Internet. And this count is likely conservative, since these were simply the devices that bothered to answer us when we scanned. A more sophisticated adversary like a nation-state would have access to existing VPN connections in flight.
In short, if you’re using a legacy Fortinet VPN you should probably patch.
So what does it all mean?
There are really three lessons to be learned from a bug like this one.
The first is that people make mistakes. We should probably design our crypto and certification processes to anticipate that, and make it much harder for these mistakes to become catastrophic decryption vulnerabilities like the one in FortiOS 4.x. Enough said.
The second is that government crypto certifications are largely worthless. I realize that seems like a big conclusion to draw from a single vulnerability. But this isn’t just a single vendor — it’s potentially several vendors that all fell prey to the same well-known 20-year old vulnerability. When a vulnerability is old enough to vote, your testing labs should be finding it. If they’re not finding things like this, what value are they adding?
Finally, there’s a lesson here about government standards. ANSI X9.31 (and its cousin X9.17) is over twenty years old. It’s (fortunately) been deprecated as of 2016, but a huge number of products still use it. This algorithm should have disappeared ten years earlier — and yet here we are. It’s almost certain that this small Fortinet vulnerability is just the tip of the iceberg. Following on revelations of a possible deliberate backdoor in the Dual EC generator, none of this stuff looks good. It’s time to give serious thought to how we make cryptographic devices resilient — even against the people who are supposed to be helping us secure them.
But that’s a topic for a much longer post.
How do you know if an RNG is working?
No matter how much cryptographers accomplish, we're always building on a questionable foundation. (illustration: Marc S. Rousseau)
Last week, Edward Snowden spoke to a packed crowd at SXSW about the many problems (and limited solutions) facing those of us who want to keep our communications private. Snowden said a number of things — including a shout out to Moxie’s company Whisper Systems, who certainly deserve it. But instead of talking about that, I wanted to focus on (in my opinion) one of Snowden’s most important quotes:
We need all those brilliant Belgian cryptographers to go “alright we know that these encryption algorithms we are using today work, typically it is the random number generators that are attacked as opposed to the encryption algorithms themselves. How can we make them [secure], how can we test them?”
Now it’s possible I’m a little biased, but it seems to me this cuts to the core of our problems with building secure systems in an increasingly hostile world. Namely: most encryption relies on some source of “random” numbers, either to generate keys or (particularly in the case of public key encryption) to provide semantic security for our ciphertexts.
What this means is that an attacker who can predict the output of your RNG — perhaps by taking advantage of a bug, or even compromising it at a design level — can often completely decrypt your communications. The Debian project learned this firsthand, as have many others. This certainly hasn’t escaped NSA’s notice, if the allegations regarding its Dual EC random number generator are true.
All of this bring us back to Snowden’s quote above, and the question he throws open for us. How do you know that an RNG is working? What kind of tests can we run on our code to avoid flaws ranging from the idiotic to the highly malicious? Unfortunately this question does not have an easy answer. In the rest of this post I’m going to try to explain why.
Background: Random and Pseudorandom Number Generation
I’ve written quite a bit about random number generation on this blog, but before we go forward it’s worth summarizing a few basic facts about random number generation.
First off, the ‘random’ numbers we use in most deployed cryptographic systems actually come from two different systems:
1. A ‘true’ random number generator (or entropy generator) that collects entropy from the physical world. This can include entropy collected from low-level physical effects like thermal noise and shot noise, or it can include goofy stuff like mouse movements and hard disk seek times.
2. An algorithmic ‘pseudorandom number generator’ (PRNG) that typically processes the output of (1) to both stretch the output to provide more bits and, in some cases, provide additional security protections in case the output of (1) proves to be biased.
It’s important to note that pseudorandom number generators aren’t “random number generators” at all. These generators typically use cryptographic algorithms (e.g., block ciphers or hash functions) to process a seed value from the RNG into many apparently random looking and unpredictable bytes.
In most cases, it’s quite rare for your application to ever see the raw output of a true random number generator.* Even the low-level entropy collector within Linux’s RNG uses cryptographic constructs like hash functions in order to ‘mix’ the output of various entropy sources. To produce the bits produced in /dev/random or /dev/urandom Linux then seeds a PRNG like Yarrow or Fortuna.**
Another similar pattern occurs inside of the Intel “secure key” random number generator included in Intel Ivy Bridge processors. When you buy one of these processors, you’re getting (free!) a hardware ‘1-shot‘ circuit that collects high-entropy electronic noise, which is then measured and processed into useful RNG output. The design looks like this:
Hardware random number generator used on Intel Ivy Bridge processors. Left: the '1-shot' circuit used to collect physical entropy. Right: the data flow from generator to output, including health checks and PRNG computation. (source)
Once again, with Intel’s design you (i.e., the application developer) don’t get access to this raw randomness. It’s first used to seed a PRNG based on AES (CTR-DRBG from NIST SP800-90A). What you actually get as an application developer is the processed output of that algorithm.
In practice this typical design has some implications. On the positive side, the presence of a PRNG means that the underlying RNG circuit can get pretty borked (e.g., biased) without the results being detectable by your application. On the negative side, the underlying RNG circuit can get pretty borked without the results being detectable in your application.
In other words, with only a few ugly glitches — things that can happen in real life — you can easily get a broken random number generator that nobody notices until it’s way too late. And that’s without deliberate tampering, which makes things way, way worse.
Which brings us back to our fundamental question: how do systems know that their RNG is working. This turns out to be a question without a perfect answer.
Statistical Tests
If you look at the literature on random number generators, you’ll find a lot of references to statistical randomness testing suites like Diehard or NIST’s SP 800-22. The gist of these systems is that they look a the output of an RNG and run tests to determine whether the output is, from a statistical perspective, “good enough” for government work (very literally, in the case of the NIST suite.)
The nature of these tests varies. Some look at simple factors like bias (the number of 1s and 0s) while others look for more sophisticated features such as the distribution of numbers when mapped into 3-D space.
Now I don’t want to knock these tests. They’re a perfectly valid way to detect serious flaws in a (true) RNG — I can attest to this, since I’ve built one that failed the tests miserably — but they probably won’t detect flaws in your system. That’s because like I said above, most deployed systems include a combination of RNG and PRNG, or even RNG plus “conditioning” via cryptographic hash functions or ciphers. The nature of these cryptographic, algorithmic processes is such that virtually every processed output will pass statistical tests with flying colors — even if the PRNG is initialized with ‘garbage’ input.
This means, unfortunately, that it can be very hard to use statistical tests to detect a broken RNG unless you properly test it only at the low level. And even there you won’t rule out intentional backdoors — as I’ll discuss in a moment.
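To give a flavor of what these suites actually compute, here is a toy Python version of the simplest test in SP 800-22, the frequency (monobit) test. A heavily biased raw RNG will fail it immediately — but, as noted above, the conditioned output of even a badly seeded PRNG will sail through without complaint.

import math

def monobit_p_value(bits):
    # bits is a string of '0'/'1' characters; returns the SP 800-22
    # frequency-test p-value (very small values indicate bias).
    n = len(bits)
    s = sum(1 if b == '1' else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Example: 10,000 bits that are 60% ones fail catastrophically.
print(monobit_p_value('1' * 6000 + '0' * 4000))   # effectively 0 -> reject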
Known Answer Tests (KATs)
Assuming that you’ve tested your true RNG properly and it’s passing all tests, it’s still important to test your PRNG. One approach to doing this is to use Known Answer Tests (KATs) that are essentially test vectors. These contain some input seed material as well as a set of output bytes that should be the algorithmic result of running the PRNG on that seed.
Since PRNGs are purely algorithmic, the theory here is that you can test them like algorithms. While this approach is valid, it raises two potential issues (both of which I’ve seen in practice).
First, you can only test your PRNG on so many points. Thus it’s quite possible that your PRNG will succeed on one particular test vector (i.e., it’ll output just so many valid bytes) but go completely off the reservation on some other input. This is unlikely, but not impossible in normal conditions. It’s very possible if someone is trying to build a malicious backdoor into your PRNG implementation.
Second, the process of instrumenting your PRNG implementation for testing can actually introduce vulnerabilities in your deployed system! Think about this for a second. Normal PRNGs take in real random seeds from your RNG. The last thing you'd ever want to do is run your PRNG on some predictable seed — if you did, everyone would be able to predict the PRNG's outputs. Yet adding a test harness to your system means building in logic to re-seed your RNG to something predictable!
This is like adding an ejection seat to your car. Might make you safer — unless it goes off while you’re driving to work.
A quick glance through e.g., the OpenSSL code shows that indeed, exactly this sort of code exists and ships in some versions of the library. Of course, the experienced developers will note that surely such features could be surrounded by pre-processor directives (or the equivalent in your language of choice) ensuring that they’ll never be activated in production code. Sadly, at least in the case of FIPS, this is not possible — for reasons I’ll explain next.
Runtime Health Checks
Another approach to testing RNGs is to test them while the system is running. This isn’t intended to rule out design-level flaws (as the above statistical and KAT tests are) but it is intended to catch situations where the RNG becomes broken during normal operation. This can occur for a variety of reasons, e.g., manufacturing defects, system damage, and even exposure to outside radiation.
Health checks can take different forms. FIPS 140, for example, mandates that all approved RNGs be tested at startup time using KATs. (This is why you can’t make your test harness conditional on compilation flags — it must ship in your production code!) They subsequently mandate a runtime health check that verifies the generator has not become ‘stuck’, i.e., is spitting out the same bytes over and over again.
While I’m sure this last test may have saved someone, somewhere, it seems totally inappropriate and useless when applied to the output of an RNG/PRNG pair, which is how NIST recommends it be used. This is because even the most broken algorithmic PRNGs will almost never spit out duplicate values — even if the underlying RNG fails completely.
The upshot of this decision is that NIST (FIPS) recommend a check that will almost never succeed in catching anything useful from a PRNG, but does introduce a whole bunch of extra logic that can suffer from flaws and/or malicious circumvention. I’m sure the good folks at NIST realize this, but they recommend it anyway — after all, what else are they going to do?
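For concreteness, the 'stuck generator' check amounts to little more than the following sketch (real modules also have to discard the first compared block, handle startup tests, and so on):

class ContinuousRngCheck(object):
    # FIPS-style continuous test: raise if two consecutive raw
    # blocks from the generator are identical.
    def __init__(self):
        self._last = None

    def check(self, block):
        if self._last is not None and block == self._last:
            raise RuntimeError("RNG failure: repeated output block")
        self._last = block
        return block

Applied to raw entropy-source samples this can catch a dead circuit; applied to PRNG output, as discussed above, it will essentially never fire.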
Tampering
Which brings us to the $10 million question. What happens if an attacker is deliberately tampering with our RNG/PRNG in order to make it fail? Note that this is not an academic question. We have excellent reason to believe it’s happened in some real systems.
Just for fun, let’s go back to the Intel Ivy Bridge RNG described above. We’ll take a look specifically at the PRNG portion of the design, which uses the NIST CTR-DRBG random number generator with AES:
Portion of the Intel Ivy Bridge design, with a few annotations added by yours truly. (original source)
The CTR-DRBG design relies on two features. First, an AES key is selected at random along with some input seed. This pair goes into the AES cipher, where it is processed to derive a new key and data. The result should be unpredictable to most attackers.
But if you were able to change the way keys were updated (in the key_in_mux highlighted) so that instead of updating the key and/or using an unpredictable one, it chose a fixed key known to the attacker, you would now have a very powerful backdoor. Specifically, the output would still look statistically perfectly random. But an attacker who knows this key could simply decrypt one block of RNG output to obtain all future and past outputs of the generator until the next time it was reseeded.
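As a toy model of that scenario (much simplified relative to the real CTR-DRBG, which also re-derives the key and counter in an update step), suppose outputs are just AES encryptions of an incrementing counter under a key the attacker knows. The key below is hypothetical, chosen only for illustration, and the code again assumes the pycryptodome package.

from Crypto.Cipher import AES   # pycryptodome

K_BACKDOOR = bytes.fromhex("00112233445566778899aabbccddeeff")  # hypothetical fixed key

def predict_next_outputs(observed_block, how_many):
    # Given one raw 16-byte output block, recover the counter state
    # and reproduce the generator's subsequent outputs.
    ecb = AES.new(K_BACKDOOR, AES.MODE_ECB)
    v = int.from_bytes(ecb.decrypt(observed_block), "big")   # recover the hidden state
    outputs = []
    for i in range(1, how_many + 1):
        block = ((v + i) % (1 << 128)).to_bytes(16, "big")
        outputs.append(ecb.encrypt(block))
    return outputs

The statistical appearance of the output never changes; only someone holding the fixed key gains anything.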
Note that I am not saying the Intel system has a backdoor in it — far from it. I’m only considering how easily it might be made to have one if you were an attacker with control of Intel’s fabrication plants (or their microcode updates). And this is hardly Intel’s fault. It’s just the nature of this particular RNG design. Others could be just as vulnerable.
Actually using this knowledge to attack applications would be more complex, since many system-level RNGs (including the Linux Kernel RNG) combine the output of the RNG with other system entropy (through XOR, unfortunately, not hashing). But Intel has pushed hard to see their RNG output used directly, and there exist plugins for OpenSSL that allow you to use it similarly. If you used such a method, these hypothetical flaws could easily make their way all the way into your cryptography.
Designing against these issues
Unfortunately, so far all I’ve done is call out the challenges with building trustworthy RNGs. And there’s a reason for this: the challenges are easy to identify, while the solutions themselves are hard. And unfortunately at this time, they’re quite manual.
Building secure RNG/PRNGs still requires a combination of design expertise, careful low-level (true) RNG testing — using expert design and statistical tests — and the use of certified algorithms with proper tests. All of the techniques above contribute to building a secure RNG, but none of them are quite sufficient.
Solving this problem, at least in software, so we can ensure that code is correct and does not contain hidden ‘easter eggs’, represents one of the more significant research challenges facing those of us who depend on secure cryptographic primitives. I do hope some enterprising graduate students will give these issues the attention they deserve.
Notes:
* Though there are some exceptions. See, for example, this FIPS certified smart card that included a bad RNG which was used to generate cryptographic secrets. In general FIPS disallows this except for a very small number of approved RNGs. Perhaps this was one.
** The original version of this post claimed that /dev/random seeds /dev/urandom. This is actually a mistake — both /dev/random and /dev/urandom use the same PRNG, but /dev/random simply keeps track of how much ‘entropy’ is in the pool and blocks when you have drawn too many bits. Thanks to Brendan Long and Thomas Ptacek for setting me straight.
A few more notes on NSA random number generators
Last Friday, Joseph Menn from Reuters published an article claiming that RSA, the pioneering security firm and division of EMC, accepted $10 million dollars to include the Dual EC random number generator as the default in their flagship BSAFE library. I've written a bit about Dual EC on this blog, so readers will know that I don't think highly of it. For one thing, it's a rotten choice for a commercial product, due to its lousy software performance. Much worse: it may introduce a practical backdoor into any system that uses it.
Given the numerous problems with Dual EC it’s baffling that RSA would select this particular generator as the default for their software. It’s particularly strange given that BSAFE is designed for use in Java-based and embedded systems where performance truly is at a premium. And none of this can be explained by the needs of a single RSA customer, since those needs could be satisfied merely by making BSAFE an option, rather than the default.
Of course there have been many people who had viewed RSA’s decision as potentially malicious. Unsupported rumors have been floating around since long before Reuters got involved. What’s new this time is that RSA insiders appear to be going on the record.
I suppose time will tell how the industry reacts to this news. In the interim I have a few small facts to add to the discussion, so I thought I’d sketch them out in this post.
#1: Dual_EC_DRBG’s ‘backdoor’ was known as of January 2005
It’s widely believed that the ‘vulnerability’ in Dual_EC was first identified by Microsoft employees Dan Shumow and Niels Ferguson in the summer of 2007. Tanja Lange (and nymble) recently tipped me off to the fact that this isn’t precisely true.
In point of fact, the possibility of a backdoor was known to at least some members of the ANSI X9.82 standardization committee as far back as January 2005. This surprising news comes via a patent application filed by Certicom employees Dan Brown and Scott Vanstone. The application claims a priority date of January 2005. Here's the scary bit:
If P and Q are established in a security domain controlled by an administrator, and the entity who generates Q for the domain does so with knowledge of e (or indirectly via knowledge of d). The administrator will have an escrow key for every ECRNG that follows that standard.
Escrow keys are known to have advantages in some contexts. They can provide a backup functionality. If a cryptographic key is lost, then data encrypted under that key is also lost. However, encryption keys are generally the output of random number generators. Therefore, if the ECRNG is used to generate the encryption key K, then it may be possible that the escrow key e can be used to recover the encryption key K. Escrow keys can provide other functionality, such as for use in a wiretap. In this case, trusted law enforcement agents may need to decrypt encrypted traffic of criminals, and to do this they may want to be able to use an escrow key to recover an encryption key.
For example, in the SSL and TLS protocols, which are used for securing web (HTTP) traffic, a client and server perform a handshake in which their first actions are to exchange random values sent in the clear.
The patent also describes a number of ways to close the backdoor in Dual_EC_DRBG. Indeed, it may be due to Brown and Vanstone that the NIST standard includes an alternative method to close the backdoor (by generating a random Q point).
The existence of this patent does not mean that Brown and Vanstone were responsible for Dual EC. In fact, the generator appears to be an NSA invention, and may date back to the early 2000s. What this patent demonstrates is that some members of the ANSI committee, of which RSA was also a member, had reason to at least suspect that Dual EC could be used to create a wiretapping backdoor. (Update: John Kelsey confirms this.) It would be curious to know how widely this information was shared, and whether anyone on the committee ever inquired as to the provenance of the default parameters.
#2. Dual_EC_DRBG is not really a NIST standard
This is hardly a secret, but it’s something that hasn’t been widely acknowledged in the press. Dual_EC_DRBG is generally viewed as a NIST standard, since it was published in NIST Special Publication 800-90. But that’s not the only place it appears, nor was it developed at NIST.
A complete history of Dual_EC_DRBG would begin with NSA’s drive to include it in the ANSI X9.82 DRBG standard, with a standardization process kicked off in the early 2000s. The draft ANSI standard includes Dual_EC_DRBG with all of the known parameters, along with several additional elliptic curve parameters that were not included in the NIST standards.
Members of the ANSI X9F1 Tool Standards and Guidelines Group which wrote ANSI X9.82.
However you’ll also find Dual_EC_DRBG in the international standard ISO 18031. In other words, Dual EC was widely propagated in numerous standards long before NIST finalized it, and it’s hardly limited to US borders. The ANSI and ISO drafts are in some sense worse, since they don’t include a technique for generating your own Q parameter.
#3. Dual_EC_DRBG is not the only asymmetric random number generator in the ANSI and ISO standards
Cryptographers generally think of Dual EC as the only ‘public key’ random number generator to be widely standardized. We also point to NSA’s generation of the public parameters as evidence that Dual_EC may be compromised.
But in fact, the ANSI X9.82 and ISO standards each include a second generator based on public key cryptographic techniques. And like Dual EC, this one ships with a complete set of default parameters! The additional generator is based on an algorithm due to Micali and Schnorr, and relies for its security on assumptions related to the hardness of factoring large composite numbers. It requires an RSA-type modulus, several of which are conveniently provided in the specification.
Two default MS-DRBG moduli from the ISO 18031 specification.
There’s no reason to suspect that MS-DRBG is used by any real products, let alone that there’s a backdoor in the standard. In fact, there’s a curious fact about it: it’s not obvious from the public literature how one would attack the generator even if one knew the factorization of the n values above — though it seems intuitively likely that an attack does exist. Solving the problem, and finding a practical attack for known factorization, would be a fun project for an enthusiastic mathematician.
Since MS-DRBG comes from the same people who brought you Dual EC, if you are using it you might want to think twice.
RSA warns developers not to use RSA products
In today’s news of the weird, RSA (a division of EMC) has recommended that developers desist from using the (allegedly) ‘backdoored’ Dual_EC_DRBG random number generator — which happens to be the default in RSA’s BSafe cryptographic toolkit. Youch.
In case you're missing the story here, Dual_EC_DRBG (which I wrote about yesterday) is the random number generator voted most likely to be backdoored by the NSA. The story here is that — despite many valid concerns about this generator — RSA went ahead and made it the default generator used for all cryptography in its flagship cryptography library. The implications for RSA and RSA-based products are staggering. In a modestly bad (but by no means worst) case, the NSA may be able to intercept SSL/TLS connections made by products implemented with BSafe.
So why would RSA pick Dual_EC as the default? You got me. Not only is Dual_EC hilariously slow — which has real performance implications — it was shown to be a just plain bad random number generator all the way back in 2006. By 2007, when Shumow and Ferguson raised the possibility of a backdoor in the specification, no sensible cryptographer would go near the thing.
And the killer is that RSA employs a number of highly distinguished cryptographers! It’s unlikely that they’d all miss the news about Dual_EC.
We can only speculate about the past. But here in the present we get to watch RSA’s CTO Sam Curry publicly defend RSA’s choices. I sort of feel bad for the guy. But let’s make fun of it anyway.
I’ll take his statement line by line (Sam is the boldface):
“Plenty of other crypto functions (PBKDF2, bcrypt, scrypt) will iterate a hash 1000 times specifically to make it slower.”
Password hash functions are built deliberately slow to frustrate dictionary attacks. Making a random number generator slow is just dumb.
At the time, elliptic curves were in vogue
Say what?
and hash-based RNG was under scrutiny.
Nonsense. A single obsolete hash based generator (FIPS 186) was under scrutiny — and fixed. The NIST SP800-90 draft in which Dual_EC appeared ALSO provided three perfectly nice non-backdoored generators: two based on hash functions and one based on AES. BSafe even implements some of them. Sam, this statement is just plain misleading.
The hope was that elliptic curve techniques—based as they are on number theory—would not suffer many of the same weaknesses as other techniques (like the FIPS 186 SHA-1 generator) that were seen as negative
Dual-EC suffers exactly the same sort of weaknesses as FIPS 186. Unlike the alternative generators in NIST SP800-90 it has a significant bias and really should not be used in production systems. RSA certainly had access to this information after the analyses were published in 2006.
and Dual_EC_DRBG was an accepted and publicly scrutinized standard.
And every bit of public scrutiny said the same thing: this thing is broken! Grab your children and run away!
SP800-90 (which defines Dual EC DRBG) requires new features like continuous testing of the output, mandatory re-seeding,
The exact same can be said for the hash-based and AES-based alternative generators you DIDN’T choose from SP800-90.
optional prediction resistance and the ability to configure for different strengths.
So did you take advantage of any of these options as part of the BSafe defaults? Why not? How about the very simple mitigations that NIST added to SP800-90A as a means to remove concerns that the generator might have a backdoor? Anyone?
There’s not too much else to say here. I guess the best way to put it is: this is all part of the process. First you find the disease. Then you see if you can cure it.
The Many Flaws of Dual_EC_DRBG
The Dual_EC_DRBG generator from NIST SP800-90A.
Update 9/19: RSA warns developers not to use the default Dual_EC_DRBG generator in BSAFE. Oh lord.
As a technical follow up to my previous post about the NSA’s war on crypto, I wanted to make a few specific points about standards. In particular I wanted to address the allegation that NSA inserted a backdoor into the Dual-EC pseudorandom number generator.
For those not following the story, Dual-EC is a pseudorandom number generator proposed by NIST for international use back in 2006. Just a few months later, Shumow and Ferguson made cryptographic history by pointing out that there might be an NSA backdoor in the algorithm. This possibility — fairly remarkable for an algorithm of this type — looked bad and smelled worse. If true, it spelled almost certain doom for anyone relying on Dual-EC to keep their system safe from spying eyes.
Now I should point out that much of this is ancient history. What is news today is the recent leak of classified documents that points a very emphatic finger towards Dual_EC, or rather, to an unnamed ‘2006 NIST standard’. The evidence that Dual-EC is this standard has now become so hard to ignore that NIST recently took the unprecedented step of warning implementers to avoid it altogether.
Better late than never.
In this post I'm going to try to explain the curious story of Dual-EC. While I'll do my best to keep this discussion at a high and non-mathematical level, be forewarned that I'm probably going to fail at least at a couple of points. If you're not in the mood for all that, here's a short summary:
• In 2005-2006 NIST and NSA released a pseudorandom number generator based on elliptic curve cryptography. They released this standard — with very little explanation — both in the US and abroad.
• This RNG has some serious issues with just being a good RNG. The presence of such obvious bugs was mysterious to cryptographers.
• In 2007 a pair of Microsoft researchers pointed out that these vulnerabilities combined to produce a perfect storm, which — together with some knowledge that only NIST/NSA might have — opened a perfect backdoor into the random number generator itself.
• This backdoor may allow the NSA to break nearly any cryptographic system that uses it.
If you’re still with me, strap in. Here goes the long version.
Dual-EC
For a good summary on the history of Dual-EC-DRBG, see this 2007 post by Bruce Schneier. Here I’ll just give the highlights.
Back in 2004-5, NIST decided to address a longstanding weakness of the FIPS standards, namely, the limited number of approved pseudorandom bit generator algorithms (PRGs, or ‘DRBGs’ in NIST parlance) available to implementers. This was actually a bit of an issue for FIPS developers, since the existing random number generators had some known design weaknesses.*
NIST's answer to this problem was Special Publication 800-90, parts of which were later wrapped up into the international standard ISO 18031. The NIST pub added four new generators to the FIPS canon. None of these algorithms is a true random number generator in the sense that they collect physical entropy. Instead, what they do is process the (short) output of a true random number generator — like the one in Linux — conditioning and stretching this 'seed' into a large number of random-looking bits you can use to get things done.** This is particularly important for FIPS-certified cryptographic modules, since the FIPS 140-2 standards typically require you to use a DRBG as a kind of 'post-processing' — even when you have a decent hardware generator.
The first three SP800-90 proposals used standard symmetric components like hash functions and block ciphers. Dual_EC_DRBG was the odd one out, since it employed mathematics more typically used to construct public-key cryptosystems. This had some immediate consequences for the generator: Dual-EC is slow in a way that its cousins aren't. Up to a thousand times slower.
Now before you panic about this, the inefficiency of Dual_EC is not necessarily one of its flaws! Indeed, the inclusion of an algebraic generator actually makes a certain amount of sense. The academic literature includes a distinguished history of provably secure PRGs based on number-theoretic assumptions, and it certainly didn't hurt to consider one such construction for standardization. Most developers would probably use the faster symmetric alternatives, but perhaps a small number would prefer the added confidence of a provably-secure construction.
Unfortunately, here is where NIST ran into their first problem with Dual_EC.
Flaw #1: Dual-EC has no security proof
Let me spell this out as clearly as I can. In the course of proposing this complex and slow new PRG where the only damn reason you’d ever use the thing is for its security reduction, NIST forgot to provide one. This is like selling someone a Mercedes and forgetting to attach the hood ornament.
I’d like to say this fact alone should have damned Dual_EC, but sadly this is par for the course for NIST — which treats security proofs like those cool Japanese cookies you can never find. In other words, a fancy, exotic luxury. Indeed, NIST has a nasty habit of dumping proof-writing work onto outside academics, often after the standard has been finalized and implemented in products everywhere.
So when NIST put forward its first draft of SP800-90 in 2005, academic cryptographers were left to analyze it from scratch. Which, to their great credit, they were quite successful at.
The first thing reviewers noticed is that Dual-EC follows a known design paradigm — it’s a weird variant of an elliptic curve linear congruential generator. However they also noticed that NIST had made some odd rookie mistakes.
Now here we will have to get slightly wonky — though I will keep mathematics to a minimum. (I promise it will all come together in the end!) Constructions like Dual-EC have basically two stages:
1. A stage that generates a series of pseudorandom elliptic curve points. Just like on the graph at right, an elliptic curve point is a pair (x, y) that satisfies an elliptic curve equation. In general, both x and y are elements of a finite field, which for our purposes means they’re just large integers.***
The main operation of the PRNG is to apply mathematical operations to points on the elliptic curve, in order to generate new points that are pseudorandom — i.e., are indistinguishable from random points in some subgroup.
And the good news is that Dual-EC seems to do this first part beautifully! In fact Brown and Gjøsteen even proved that this part of the generator is sound provided that the Decisional Diffie-Hellman problem is hard in the specific elliptic curve subgroup. This is a well studied hardness assumption so we can probably feel pretty confident in this proof.
2. Extract pseudorandom bits from the generated EC points. While the points generated by Dual-EC may be pseudorandom, that doesn’t mean the specific (x, y) integer pairs are random bitstrings. For one thing, ‘x‘ and ‘y‘ are not really bitstrings at all, they’re integers less than some prime number. Most pairs don’t satisfy the curve equation or are not in the right subgroup. Hence you can’t just output the raw x or y values and expect them to make good pseudorandom bits.
Thus the second phase of the generator is to ‘extract’ some (but not all) of the bits from the EC points. Traditional literature designs do all sorts of things here — including hashing the point or dropping up to half of the bits of the x-coordinate. Dual-EC does something much simpler: it grabs the x coordinate, throws away the most significant 16-18 bits, and outputs the rest.
In 2006, first Gjøsteen and later Schoenmakers and Sidorenko took a close look at Dual-EC and independently came up with a surprising result:
Flaw #2: Dual-EC outputs too many bits
Unlike those previous EC PRGs which output anywhere from 2/3 to half of the bits from the x-coordinate, Dual-EC outputs nearly the entire thing.
This is good for efficiency, but unfortunately it also gives Dual-EC a bias. Due to some quirks in the mathematics of the field operations, an attacker can now predict the next bits of Dual-EC output with a fairly small — but non-trivial — success probability, in the range of 0.1%. While this number may seem small to non-cryptographers, it's basically a hanging offense for a cryptographic random number generator, where the probability of predicting a future bit should be many orders of magnitude lower.
What’s just plain baffling is that this flaw ever saw the light of day. After all, the specification was developed by bright people at NIST — in collaboration with NSA. Either of those groups should easily have discovered a bug like this, especially since this issue had been previously studied. Indeed, within a few months of public release, two separate groups of academic cryptographers found it, and were able to implement an attack using standard PC equipment.
So in summary, the bias is mysterious and it seems to be very much an ‘own-goal’ on the NSA’s part. Why in the world would they release so much information from each EC point? It’s hard to say, but a bit more investigation reveals some interesting consequences:
Flaw #3: You can guess the original EC point from looking at the output bits
By itself this isn’t really a flaw, but will turn out to be interesting in just a minute.
Since Dual-EC outputs so many bits from the x-coordinate of each point — all but the most significant 16 bits — it’s relatively easy to guess the original source point by simply brute-forcing the missing 16 bits and solving the elliptic curve equation for y. (This is all high-school algebra, I swear!)
While this process probably won’t uniquely identify the original (x, y), it’ll give you a modestly sized list of candidates. Moreover with only 16 missing bits the search can be done quickly even on a desktop computer. Had Dual_EC thrown away more bits of the x-coordinate, this search would not have been feasible at all.
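To get a feel for how small a 16-bit search really is, here's a minimal Python sketch that recovers candidate points from a truncated x-coordinate on NIST P-256 (the curve Dual-EC is most commonly used with). The 'output' here is simulated rather than taken from a real Dual-EC run; the point is only that lifting the missing bits is a coffee-break computation (the loop below does 65,536 modular square-root attempts and takes a few seconds in plain CPython).

```python
# Sketch: recover candidate output points from a truncated x-coordinate on
# NIST P-256. The truncated "output" is simulated, not from a real Dual-EC run.

# NIST P-256 parameters (public constants)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = -3 % p
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def lift_y(x):
    """Return y with y^2 = x^3 + ax + b (mod p), or None if x is not on the curve."""
    rhs = (pow(x, 3, p) + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)      # valid square root because p % 4 == 3
    return y if (y * y) % p == rhs else None

# Fake an 'output': pick some x that really lies on the curve...
x_true = next(x for x in range(1, 1000) if lift_y(x) is not None)
# ...and emit what Dual-EC would emit: everything but the top 16 bits.
output = x_true & ((1 << 240) - 1)

# Attacker: brute-force the 16 missing bits and keep the x values on the curve.
candidates = []
for hi in range(1 << 16):
    x_guess = (hi << 240) | output
    if x_guess >= p:
        continue
    y = lift_y(x_guess)
    if y is not None:
        candidates.append((x_guess, y))

print(len(candidates), "candidate points; true point recovered:",
      any(x == x_true for x, _ in candidates))
```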
So what does this mean? In general, recovering the EC point shouldn’t actually be a huge problem. In theory it could lead to a weakness — say predicting future outputs — but in a proper design you would still have to solve a discrete logarithm instance for each and every point in order to predict the next bytes output by the generator.
And here is where things get interesting.
Flaw #4: If you know a certain property about the Dual_EC parameters, and can recover an output point, you can predict all subsequent outputs of the generator
Did I tell you this would get interesting in a minute? I totally did.
The next piece of our puzzle was discovered by Microsoft researchers Dan Shumow and Niels Ferguson, and announced at the CRYPTO 2007 rump session. I think this result can best be described via the totally intuitive diagram below. (Don’t worry, I’ll explain it!)
Annotated diagram from Shumow-Ferguson presentation (CRYPTO 2007).
Colorful elements were added by yours truly. Thick green arrows mean 'this part is easy to reverse'. Thick red arrows should mean the opposite. Unless you're the NSA.
The Dual-EC generator consists of two stages: a portion that generates the output bits (right) and a part that updates the internal state (left).
Starting from the "r_i" value (circled, center) and heading right, the bit generation part first computes the output point using the function "r_i * Q" — where Q is an elliptic curve point defined in the parameters — then truncates 16 bits off of its x-coordinate to get the raw generator output. The "*" operator here describes elliptic point multiplication, which is a complex operation that should be relatively hard to invert.
Note that everything after the point multiplication should be easy to invert and recover from the output, as we discussed in the previous section.
Every time the generator produces one block of output bits, it also updates its internal state. This is designed to prevent attacks where someone compromises the internal values of a working generator, then uses this value to wind the generator backwards and guess past outputs. Starting again from the circled “r_i” value, the generator now heads upwards and computes the point “r_i * P” where P is a different elliptic curve point also described in the parameters. It then does some other stuff.
The theory here is that P and Q should be random points, and thus it should be difficult to find “r_i * P” used for state update even if you know the output point “r_i * Q” — which I stress you do know, because it’s easy to find. Going from one point to the other requires you to know a relationship between P and Q, which you shouldn’t actually know since they’re supposed to be random values. The difficulty of this is indicated by the thick red arrow.
Looks totally kosher to me. (Source: NIST SP800-90A)
There is, however, one tiny little exception to this rule. What if P and Q aren’t entirely random values? What if you chose them yourself specifically so you’d know the mathematical relationship between the two points?
In this case it turns out you can easily compute the next PRG state after recovering a single output point (from 32 bytes of RNG output). This means you can follow the equations through and predict the next output. And the next output after that. And on forever and forever.****
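Here's a self-contained toy model of that attack in Python. To be clear about what is invented: the curve is a tiny one I picked so it runs instantly, only 8 bits are truncated instead of 16-18, and the generator is reduced to the bare skeleton described above (state becomes x(state * P), output is the truncated x(state * Q)). So this illustrates the trapdoor idea, not the real Dual-EC algorithm. The secret scalar d with P = d * Q plays the role of the relationship that only the parameter generator would know.

```python
# Toy model of the Dual-EC trapdoor. Everything here is made up for
# illustration: a small curve, an 8-bit truncation, and the simplified
# skeleton  state <- x(state*P),  output <- trunc(x(state*Q)).

p = 2**31 - 1                      # a prime with p % 4 == 3
A, B = 2, 3                        # toy curve y^2 = x^3 + A*x + B over GF(p)

def inv(v):
    return pow(v, -1, p)           # needs Python 3.8+

def add(P1, P2):
    """Affine point addition; None is the point at infinity."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mult(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def lift_x(x):
    """Return (x, y) on the curve, or None."""
    rhs = (pow(x, 3, p) + A * x + B) % p
    y = pow(rhs, (p + 1) // 4, p)
    return (x, y) if y * y % p == rhs else None

# A base point, found by search rather than hardcoded.
Q = next(pt for pt in (lift_x(x) for x in range(2, 100)) if pt)
d = 123456789                      # the designer's trapdoor
P = mult(d, Q)                     # ...and the published 'random' point P = d*Q

DROP = 8                                   # real Dual-EC drops 16-18 bits
MASK = (1 << (p.bit_length() - DROP)) - 1

def gen_block(state):
    """One toy block: update the state, emit a truncated x-coordinate."""
    state = mult(state, P)[0]
    return state, mult(state, Q)[0] & MASK

# The victim runs the generator.
secret_state, outputs = 987654321, []
for _ in range(4):
    secret_state, block = gen_block(secret_state)
    outputs.append(block)

# The attacker sees only `outputs` and knows d.
recovered = None
for hi in range(1 << DROP):
    cand = lift_x((hi << (p.bit_length() - DROP)) | outputs[0])
    if cand is None:
        continue
    state_guess = mult(d, cand)[0]         # d*(s*Q) = s*P, i.e. the next state
    if mult(state_guess, Q)[0] & MASK == outputs[1]:
        recovered = state_guess
        break

# With the internal state in hand, every later block is predictable.
predicted, state = [], recovered
for _ in range(3):
    predicted.append(mult(state, Q)[0] & MASK)
    state = mult(state, P)[0]

print(predicted == outputs[1:])            # True
```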
This is a huge deal in the case of SSL/TLS, for example. If I use the Dual-EC PRG to generate the “Client Random” nonce transmitted in the beginning of an SSL connection, then the NSA will be able to predict the “Pre-Master” secret that I’m going to generate during the RSA handshake. Given this information the connection is now a cleartext read. This is not good.
So now you should all be asking the most important question of all: how the hell did the NSA generate the specific P and Q values recommended in Appendix A of Dual-EC-DRBG? And do they know the relationship that allows them to run this attack? All of which brings us to:
Flaw #5: Nobody knows where the recommended parameters came from
And if you think that’s problematic, welcome to the club.
But why? And where is Dual-EC used?
The ten million dollar question of Dual-EC is why the NSA would stick such an obviously backdoored algorithm into an important standard. Keep in mind that cryptographers found the major (bias) vulnerabilities almost immediately after Dual-EC shipped. The possibility of a ‘backdoor’ was announced in summer 2007. Who would still use it?
A few people have gone through the list of CMVP-evaluated products and found that the answer is: quite a few people would. Most certify Dual-EC simply because it’s implemented in OpenSSL-FIPS, and they happen to use that library. But at least one provider certifies it exclusively. Yuck.
Hardcoded constants from the OpenSSL-FIPS
implementation of Dual_EC_DRBG. Recognize ’em?
It’s worth keeping in mind that NIST standards carry a lot of weight — even those that might have a backdoor. Folks who aren’t keeping up on the latest crypto results could still innocently use the thing, either by accident or (less innocently) because the government asked them to. Even if they don’t use it, they might include the code in their product — say through the inclusion of OpenSSL-FIPS or MS Crypto API — which means it’s just a function call away from being surreptitiously activated.
Which is why people need to stop including Dual-EC immediately. We have no idea what it’s for, but it needs to go away. Now.
So what about the curves?
The last point I want to make is that the vulnerabilities in Dual-EC have precisely nothing to do with the specifics of the NIST standard elliptic curves themselves. The ‘back door’ in Dual-EC comes exclusively from the relationship between P and Q — the latter of which is published only in the Dual-EC specification. The attack can work even if you don’t use the NIST pseudorandom curves.
However, the revelations about NIST and the NSA certainly make it worth our time to ask whether these curves themselves are somehow weak. The best answer to that question is: we don’t know. Others have observed that NIST’s process for generating the curves leaves a lot to be desired. But including some kind of hypothetical backdoor would be a horrible, horrific idea — one that would almost certainly blow back at us.
You’d think people with common sense would realize this. Unfortunately we can’t count on that anymore.
Thanks to Tanja Lange for her assistance proofing this post. Any errors in the text are entirely mine.
Notes:
* My recollection of this period is hazy, but prior to SP800-90 the two most common FIPS DRBGs in production were (1) the SHA1-based DSA generator of FIPS 186-2 and (2) ANSI X9.31. The DSA generator was a special-purpose generator based on SHA1, and was really designed just for that purpose. ANSI X9.31 used block ciphers, but suffered from some more subtle weaknesses it retained from the earlier X9.17 generator. These were pointed out by Kelsey, Schneier, Wagner and Hall.
** This is actually a requirement of the FIPS 140-2 specification. Since FIPS does not approve any true random number generators, it instead mandates that you run your true RNG output through a DRBG (PRNG) first. The only exception is if your true RNG has been approved ‘for classified use’.
*** Specifically, x and y are integers in the range 0 to p-1 where p is a large prime number. A point is a pair (x, y) such that y^2 = x^3 + ax + b mod p. The values a and b are defined as part of the curve parameters.
**** The process of predicting future outputs involves a few guesses, since you don’t know the exact output point (and had to guess at the missing 16 bits), but you can easily reduce this to a small set of candidates — then it’s just a question of looking at a few more bits of RNG output until you guess the right one.
Surviving a bad RNG
A couple of weeks ago I wrote a long post about random number generation, which I find to be one of the most fascinating subjects in cryptography — mostly because of how terrible things get when people screw it up.
And oh boy, do people screw it up. Back in 2008 it was Debian, with their ‘custom’ OpenSSL implementation that could only produce 32,768 possible TLS keys (do you really need more?) In 2012 it’s 25,000 factorable TLS public keys, all of which appear to have been generated by embedded devices with crappy RNGs.
When this happens, people get nervous. They start to wonder: am I at risk? And if so, what can I do to protect myself?
Answering this question is easy. Answering it in detail is hard. The easy answer is that if you really believe there’s a problem with your RNG, stop reading this blog and go fix it!
The more complicated answer is that many bad things can happen if your RNG breaks down, and some are harder to deal with than others.
In the rest of this post I’m going to talk about this, and give a few potential mitigations. I want to stress that this post is mostly a thought-exercise. Please do not re-engineer OpenSSL around any of the ‘advice’ I give herein (I’m looking at you, Dan Kaminsky), and if you do follow any of my advice, understand the following:
When it all goes terribly wrong, I’ll quietly take down this post and pretend I never wrote it.
In other words, proceed at your own risk. First, some background.
What’s a ‘bad RNG’?
Before we get started, it’s important to understand what it means for an RNG to be broken. In general, failure comes in three or four different flavors, which may or may not share the same root cause:
1. Predictable output. This usually happens when a generator is seeded with insufficient entropy. The result is that the attacker can actually predict, or at least guess, the exact bits that the RNG will output. This has all kinds of implications, none of them good.
2. Resetting output. This can occur when a generator repeatedly outputs the same stream of bits, e.g., every time the system restarts. When an attacker deliberately brings about this condition, we refer to it as a Reset Attack, and it’s a real concern for devices like smartcards.
3. Shared output. Sometimes exactly the same bits will appear on two or more different devices. Often the owners won’t have any idea this is happening until someone else turns up with their public key! This is almost always caused by some hokey entropy source or hardcoded seed value.
These aren’t necessarily distinct problems, and they can easily bleed into one another. For example, a resetting RNG can become a predictable RNG once the adversary observes the first round of outputs. Shared output can become predictable if the attacker gets his hands on another device of the same model. The Debian bug, for example, could be classified into all three categories.
In addition to the problems above, there’s also a fourth (potential) issue:
4. Non-uniform or biased output. It’s at least possible that the output of your generator will exhibit biased outputs, or strings of repeated characters (the kind of thing that tests like DIEHARD look for). In the worst case, it might just start outputting zero bytes.
The good news is that this is relatively unlikely as long as you’re using a standard crypto library. That’s because modern systems usually process their collected entropy through a pseudo-random generator (PRG) built from a hash function or a block cipher.
The blessing of a PRG is that it will usually give you nice, statistically-uniform output even when you feed it highly non-uniform seed entropy. This helps to prevent attacks (like this one) which rely on the presence of obvious biases in your nonces/keys/IVs. While this isn’t a rule, most common RNG failures seem to be related to bad entropy, not to some surprise failure of the PRG.
Unfortunately, the curse is that a good PRG can hide problems. Since most people only see the output of their PRG (rather than the seed material), it’s easy to believe that your RNG is doing a great job, even when it’s actually quite sick.* This has real-world implications: many standards (like FIPS-140) perform continuous tests on the final RNG/PRG output (e.g., for repeated symbols). The presence of a decent PRG renders these checks largely useless, since they’ll really only detect the most catastrophic (algorithmic) failures.
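Here's a small illustration of that curse. The PRG below is just SHA-256 run in counter mode (my own stand-in, not any standard's construction): feed it a great seed or a 16-bit seed and the output passes the same naive statistical checks either way.

```python
# A hash-based PRG produces statistically clean output whether the seed is
# good or terrible, so output-side health tests can't tell the difference.
import hashlib, os
from collections import Counter

def prg(seed, nbytes):
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

good = prg(os.urandom(32), 1 << 16)     # properly seeded
bad  = prg(b"\x00\x07", 1 << 16)        # 16 bits of 'entropy' -- badly broken

for name, stream in (("good seed", good), ("bad seed", bad)):
    counts = Counter(stream)
    print(name, "distinct byte values:", len(counts),
          "most common byte frequency:", counts.most_common(1)[0][1])
# Both streams look equally 'random' to simple checks, but the second one can
# be reproduced by anyone willing to try all 65,536 possible seeds.
```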
Key generation
When it comes to generating keys with a bad (P)RNG, the only winning move is not to play. Algorithm aside, if an attacker can predict your ‘randomness’, they can generate the same key themselves. Game over. Incidentally, this goes for ephemeral keys as well, meaning that protocols like Diffie-Hellman are not secure in the presence of a predictable RNG (on either side).
If you think there’s any chance this will happen to you, then either (a) generate your keys on a reliable device, or (b) get yourself a better RNG. If neither option is available, then for god’s sake, collect some entropy from the user before you generate keys. Ask them to tap a ditty on a button, or (if a keyboard is available), get a strong, unpredictable passphrase and hash it through PBKDF2 to get a string of pseudo-random bits. This might not save you, but it’s probably better than the alternative.
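A minimal sketch of that last-resort suggestion, using Python's built-in PBKDF2. The salt handling, iteration count and output length are illustrative choices on my part, not a recommendation.

```python
# Last-resort seeding sketch: stretch a user-supplied passphrase into seed
# material with PBKDF2. Parameters here are illustrative only.
import hashlib, os

passphrase = input("Type a long, unpredictable passphrase: ").encode()
salt = os.urandom(16)     # the salt need not be secret or unpredictable
seed = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=32)

# 'seed' should be mixed into whatever entropy the platform managed to gather
# (e.g. by hashing the two together); it should never *replace* a working RNG.
```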
What’s fascinating is that some cryptosystems are more vulnerable to bad, or shared randomness than others. The recent batch of factorable RSA keys, for example, appears to be the product of poor entropy on embedded devices. But the keys weren’t broken because someone guessed the entropy that was used. Rather, the mere fact that two different devices share entropy was enough to make both of their keys factorable.
According to Nadia Heninger, this is an artifact of the way that RSA keys are generated. Every RSA public modulus is the product of two primes. Some devices generate one prime, then reseed their RNG (with the time, say) before generating the second. The result is two different moduli, each sharing one prime. Unfortunately, this is the worst thing you can do with an RSA key, since anyone can now compute the GCD and efficiently factor both keys.
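The arithmetic is almost insultingly simple. Here it is in miniature, with toy primes standing in for the 512-bit ones in real keys:

```python
# Two 'devices' that share one prime: both moduli fall to a single gcd.
from math import gcd

p = 101                 # the shared prime (same bad entropy on both devices)
n1 = p * 103            # device 1's public RSA modulus
n2 = p * 107            # device 2's public RSA modulus

shared = gcd(n1, n2)    # = 101, computed from the public keys alone
print(shared, n1 // shared, n2 // shared)   # both keys are now fully factored
```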
Although you’re never going to be safe when two devices share entropy, it’s arguable that you’re better off if they at least generate the same RSA key, rather than two moduli with a single shared prime. One solution is to calculate the second prime as a mathematical function of the first. An even easier fix is just to make sure that you don’t reseed between the two primes.
Of course it’s not really fair to call these ‘solutions’. Either way you’re whistling past the graveyard, but at least this might let you whistle a bit longer.
Digital signatures and MACs
There’s a widely held misconception that digital signatures must be randomized. This isn’t true, but it’s understandable that people might think this, since it’s a common property of the signatures we actually use. Before we talk about this, let me stipulate that what we’re talking about here is the signing operation itself — I’m premising this discussion on the very important assumption that we have properly-generated keys.
MACs. The good news is that virtually every practical MAC in use today is deterministic. While there are probabilistic MACs, they’re rarely used. As long as you’re using a standard primitive like HMAC, that bad RNG shouldn’t affect your ability to authenticate your messages.
Signatures. The situation with signatures is a bit more complicated. I can’t cover all signatures, but let’s at least go over the popular ones. For reasons that have never been adequately explored, these are (in no particular order): ECDSA, DSA, RSA-PKCS#1v1.5 and RSA-PSS. Of these four signatures, three are randomized.
The major exception is RSA-PKCS#1v1.5 signature padding, which has no random fields at all. While this means you can give your RNG a rest, it doesn’t mean that v1.5 padding is good. It’s more accurate to say that the ‘heuristically-secure’ v1.5 padding scheme remains equally bad whether you have a working RNG or not.
If you’re signing with RSA, a much better choice is to use RSA-PSS, since that scheme actually has a reduction to the hardness of the RSA problem. So far so good, but wait a second: doesn’t the P in PSS stand for Probabilistic? And indeed, a close look at the PSS description (below) reveals the presence of random salt in every signature.
The good news is that this salt is only an optimization. It allows the designers to obtain a tighter reduction to the RSA problem, but the security proof holds up even if you repeat the salt, or just hardcode it to zero.
The PSS signing algorithm. MGF is constructed from a hash function.
Having dispensed with RSA, we can get down to the serious offenders: DSA and ECDSA.
The problem in a nutshell is that every (EC)DSA signature includes a random nonce value, which must never be repeated. If you ever forget this warning — i.e., create two signatures (on different messages) using the same nonce — then anyone can recover your secret key. This is both easy to detect, and to compute. You could write a script to troll the Internet for repeated nonces (e.g., in X509 certificates), and then outsource the final calculation to a bright eighth-grader.
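Here's that eighth-grader calculation as a Python sketch, using a toy DSA group (q = 1019, p = 2q + 1 = 2039) so the numbers stay readable. The group is made up for illustration; the algebra is identical for real-sized parameters and for ECDSA.

```python
# Recovering a DSA private key from two signatures that reuse a nonce.
q, p, g = 1019, 2039, 4           # g = 4 generates the order-q subgroup mod p
x = 777                           # the victim's private key
k = 345                           # the nonce -- reused, which is the whole problem
h1, h2 = 100, 200                 # 'hashes' of two different messages

def sign(h):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

(r1, s1), (r2, s2) = sign(h1), sign(h2)
assert r1 == r2                   # same nonce -> same r, visible to everyone

# The 'eighth-grader' part: solve two linear equations in k and x, mod q.
k_rec = (h1 - h2) * pow((s1 - s2) % q, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r1, -1, q) % q
print(k_rec == k, x_rec == x)     # True True
```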
Usually when DSA/ECDSA go wrong, it’s because someone simply forgot to generate a random nonce in the first place. This appears to be what happened with the Playstation 3. Obviously, this is stupid and you shouldn’t do it. But no matter how careful your implementation, you’re always going to be vulnerable if your RNG starts spitting out repeated values. If this happens to you even once, you need to throw away your key and generate a new one.
There are basically two ways to protect yourself:
• Best: don't use (EC)DSA in the first place. It's a stupid algorithm with no reasonable security proof, and as a special bonus it goes completely pear-shaped in the presence of a bad RNG. Unfortunately, it's also a standard, used in TLS and elsewhere, so you're stuck with it.
• Second best: Derive your nonces deterministically from the message and some secret data. If done correctly (big if!), this prevents two messages from being signed with the same nonce. In the extreme case, this approach completely eliminates the need for randomness in (EC)DSA signatures. There are two published proposals that take this approach. The best is Dan Bernstein's (somewhat complex) EdDSA proposal, which looks like a great replacement for ECDSA. Unfortunately it's a replacement, not a patch, since EdDSA uses different elliptic curves and is therefore not cross-compatible with existing ECDSA implementations.
Alternatively, Thomas Pornin has a proposal up that simply modifies (EC)DSA by using HMAC to derive the nonces. The best part about Thomas’s proposal is that it doesn’t break compatibility with existing (EC)DSA implementations. I will caution you, however: while Thomas’s work looks reasonable, his proposal is just a draft (and an expired one to boot). Proceed at your own risk.
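To show the flavor of the idea (and only the flavor: Pornin's draft, later published as RFC 6979, specifies a more careful procedure that avoids bias and handles retries), a deterministic nonce can be as simple as an HMAC of the message under the signing key:

```python
# Deterministic nonce derivation, in the spirit of (but not identical to)
# Pornin's proposal: the nonce depends only on the key and the message, so a
# broken RNG can never cause two different messages to share a nonce.
import hmac, hashlib

def derive_nonce(signing_key: bytes, message: bytes, q: int) -> int:
    digest = hmac.new(signing_key, message, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % (q - 1) + 1   # crude range fix; real
                                                         # schemes avoid this bias

q = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551  # P-256 group order
k1 = derive_nonce(b"\x11" * 32, b"message one", q)
k2 = derive_nonce(b"\x11" * 32, b"message two", q)
print(k1 != k2, derive_nonce(b"\x11" * 32, b"message one", q) == k1)    # True True
```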
Encryption
There are various consequences to using a bad RNG for encryption, most of which depend on the scheme you’re using. Once again we’ll assume that the keys themselves are properly-generated. What’s at stake is the encryption itself.
Symmetric encryption. The good news is that symmetric encryption can be done securely with no randomness at all, provided that you have a strong encryption key and the ability to keep state between messages.
An obvious choice is to use CTR mode encryption. Since CTR mode IVs needn’t be unpredictable, you can set your initial IV to zero, then simply make sure that you always hang onto the last counter value between messages. Provided that you never ever re-use a counter value with a given key (even across system restarts) you’ll be fine.**
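A sketch of that bookkeeping using the pyca/cryptography package (my choice of library; the post doesn't name one). A persisted message number forms the top half of the initial counter block, leaving the bottom half for the per-block counter, so counter blocks can never collide across messages as long as the message number is never reused with this key. How you store that number durably is the hard part and is hand-waved here.

```python
# Deterministic CTR-mode encryption without an RNG.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key, msg_number, plaintext):
    assert len(plaintext) < (1 << 36)      # stay far below 2**64 blocks per message
    nonce = msg_number.to_bytes(8, "big") + bytes(8)   # [message number | block counter]
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

key = bytes(range(32))                     # demo key only
ct1 = encrypt(key, 1, b"first message")
ct2 = encrypt(key, 2, b"second message")   # msg_number MUST be persisted, never reused
# ...and, per the footnote below, authenticate the ciphertext and counter with a MAC.
```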
This doesn’t work with CBC mode, since that actually does require an unpredictable IV at the head of each chain. You can hack around this requirement in various ways, but I’m not going to talk about those here; nothing good will come of it.
Public-key encryption. Unfortunately, public-key encryption is much more difficult to get right without a good RNG.
Here’s the fundamental problem: if an attacker knows the randomness you used to produce a ciphertext, then (in the worst case) she can simply encrypt ‘guess’ messages until she obtains the same ciphertext as you. At that point she knows what you encrypted.***
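Footnote *** below walks through this in words with textbook RSA; here is the same guessing attack as a few lines of Python, with a toy key:

```python
# The guessing attack against deterministic / poorly-randomized public-key
# encryption, shown with unpadded textbook RSA and a toy modulus n = 61*53.
n, e = 3233, 17

ciphertext = pow(42, e, n)          # victim encrypts a low-entropy message

# Attacker knows only (n, e) and the ciphertext, but can enumerate the message space:
recovered = next(m for m in range(256) if pow(m, e, n) == ciphertext)
print(recovered)                    # 42
```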
Obviously this attack only works if the attacker can guess the message you encrypted. Hence it’s possible that high-entropy messages (symmetric keys, for example) will encrypt securely even without good randomness. But there’s no guarantee of this. Elgamal, for example, can fail catastrophically when you encrypt two messages with the same random nonce.****
Although I’m not going to endorse any specific public-key encryption scheme, it seems likely that some schemes will hold up better than others. For example, while predictably-randomized RSA-OAEP and RSA-OAEP+ will both be vulnerable to guessing attacks, there’s some (intuitive) reason to believe that they’ll remain secure for high-entropy messages like keys. I can’t prove this, but it seems like a better bet than using Elgamal (clearly broken) or older padding schemes like RSA-PKCS#1v1.5.
If my intuition isn't satisfying to you, quite a lot of research is still being done in this area. See, for example, recent works on deterministic public-key encryption, or hedged public-key encryption. Note that all of this work is premised on the assumption that you're encrypting high-entropy messages.
Protocols
I can’t conclude this post without at least a token discussion of how a bad RNG can affect cryptographic protocols. The short version is that it depends on the protocol. The shorter version is that it’s almost always bad.
Consider the standard TLS handshake. Both sides use their RNGs to generate nonces. Then the client generates a random ‘Pre-master Secret’ (PMS), encrypts it under the server’s key, and transmits it over the wire. The ‘Master Secret’ (and later, transport key) is derived by hashing together all of the nonces and the PMS.
Since the PMS is the only real ‘secret’ in the protocol (everything else is sent in the clear), predicting it is the same as recovering the transport key. Thus TLS is not safe to use if the client RNG is predictable. What’s interesting is that the protocol is secure (at least, against passive attackers) even if the server’s RNG fails. I can only guess that this was a deliberate choice on the part of TLS’s designers.
SSL handshake (source).
Protocols are already plenty exciting when you have a working RNG. Adding a bad RNG to the mix is like pouring fireworks on a fire. It’s at least possible to build protocols that are resilient to one participant losing their RNG, but it’s very tricky to accomplish — most protocols will fail in unexpected ways.
In Summary
If you take nothing else from this post, I hope it’s this: using a broken RNG is just a bad idea. If you think there’s any chance your generator will stop working, then for god’s sake, fix it! Don’t waste your time doing any of the stuff I mention above.
That said, there legitimately are cases where your RNG can go wrong, or where you don’t have one in the first place. The purpose of this post was to help you understand these scenarios, and the potential consequences for your system. So model it. Think about it. Then spend your time on better things.
Notes:
* The classic example is Debian's 2008 OpenSSL release, which used a 15-bit process ID as the only seed for its PRG. This wasn't obvious during testing, since the 32,768 possible RNG streams all looked pretty random. It was only after the public release that people noticed that many devices were sharing TLS keys.
** If you’re going to do this, you should also be sure to use a MAC on your ciphertext, including the initial counter value for each message.
*** A great example is unpadded, textbook RSA. If m is random, then it’s quite difficult to recover m given m^e mod N. If, however, you have a few good guesses for m and you know the public key (N, e), you can easily try each of your guesses and compare the results.
**** Given two Elgamal ciphertexts on the same key and randomness (g^r, y^r*M1), (g^r, y^r*M2) you can easily compute M1/M2. A similar thing happens with hash Elgamal. This may or may not be useful to you, depending on how much you know about the content of the various messages.
What is Cisco?
in
Tech Talk
What is Cisco?
In 1988, John Morgridge was appointed CEO. The name "Cisco" was derived from the city name San Francisco. Cisco Systems Inc. is the worldwide leader in networking for the Internet. The company was founded in 1984 by two computer scientists from Stanford University seeking an easier way to connect different types of computer systems.
Cisco Systems shipped its first product in 1986 and is now a multi-national corporation, with over 35,000 employees in more than 115 countries. Today, Cisco solutions are the networking foundations for service providers, small to medium business and enterprise customers which includes corporations, government agencies, utilities and educational institutions.
Cisco’s networking solutions connect people, computing devices and computer networks, allowing people to access or transfer information without regard to differences in time, place or type of computer system.
If anyone can lay claim to a ‘heritage’ in an industry as young as global networking, then it is Cisco. Not only does 85 per cent of Internet traffic travel across Cisco’s systems, we also use the Internet to run our own business online, from product orders and inventory management through to staff communications and travel expenses.
It is a leading networking company, best known as a manufacturer and vendor of networking equipment.
The Cisco Networking Academy is an innovative education initiative that aims to create a pool of trained computer networking specialists.
What is Cisco networking?
Cisco Certified Network Professional (CCNP) is an intermediate-level certification in the Cisco certified professional program.
What is the Cisco router?
Cisco IOS (originally Internetwork Operating System) is a family of software used on most Cisco Systems routers and current Cisco network switches.
What does Cisco Systems make?
At its core, Cisco designs, builds, and sells networking technology. This hardware is the backbone that makes the Internet work.
/* * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. * */ #ifndef SHARE_RUNTIME_VMOPERATION_HPP #define SHARE_RUNTIME_VMOPERATION_HPP #include "memory/allocation.hpp" // The following classes are used for operations // initiated by a Java thread but that must // take place in the VMThread. #define VM_OP_ENUM(type) VMOp_##type, // Note: When new VM_XXX comes up, add 'XXX' to the template table. #define VM_OPS_DO(template) \ template(None) \ template(Cleanup) \ template(ThreadDump) \ template(PrintThreads) \ template(FindDeadlocks) \ template(ClearICs) \ template(ForceSafepoint) \ template(ForceAsyncSafepoint) \ template(DeoptimizeFrame) \ template(DeoptimizeAll) \ template(ZombieAll) \ template(Verify) \ template(HeapDumper) \ template(DeoptimizeTheWorld) \ template(CollectForMetadataAllocation) \ template(CollectForCodeCacheAllocation) \ template(GC_HeapInspection) \ template(GenCollectFull) \ template(GenCollectFullConcurrent) \ template(GenCollectForAllocation) \ template(ParallelGCFailedAllocation) \ template(ParallelGCSystemGC) \ template(G1CollectForAllocation) \ template(G1CollectFull) \ template(G1Concurrent) \ template(G1TryInitiateConcMark) \ template(ZMarkStart) \ template(ZMarkEnd) \ template(ZRelocateStart) \ template(ZVerify) \ template(HandshakeOneThread) \ template(HandshakeAllThreads) \ template(HandshakeFallback) \ template(PopulateDumpSharedSpace) \ template(JNIFunctionTableCopier) \ template(RedefineClasses) \ template(GetObjectMonitorUsage) \ template(GetAllStackTraces) \ template(GetThreadListStackTraces) \ template(VThreadGetStackTrace) \ template(VThreadGetFrameCount) \ template(ChangeBreakpoints) \ template(GetOrSetLocal) \ template(VirtualThreadGetOrSetLocal) \ template(VThreadGetCurrentLocation) \ template(ChangeSingleStep) \ template(HeapWalkOperation) \ template(HeapIterateOperation) \ template(ReportJavaOutOfMemory) \ template(JFRCheckpoint) \ template(ShenandoahFullGC) \ template(ShenandoahInitMark) \ template(ShenandoahFinalMarkStartEvac) \ template(ShenandoahInitUpdateRefs) \ template(ShenandoahFinalUpdateRefs) \ template(ShenandoahFinalRoots) \ template(ShenandoahDegeneratedGC) \ template(Exit) \ template(LinuxDllLoad) \ template(RotateGCLog) \ template(WhiteBoxOperation) \ template(JVMCIResizeCounters) \ template(ClassLoaderStatsOperation) \ template(ClassLoaderHierarchyOperation) \ template(DumpHashtable) \ template(DumpTouchedMethods) \ template(CleanClassLoaderDataMetaspaces) \ template(PrintCompileQueue) \ template(PrintClassHierarchy) \ template(ThreadSuspend) \ template(ICBufferFull) \ 
template(ScavengeMonitors) \ template(PrintMetadata) \ template(GTestExecuteAtSafepoint) \ template(JFROldObject) \ template(JvmtiPostObjectFree) class Thread; class outputStream; class VM_Operation : public StackObj { public: enum VMOp_Type { VM_OPS_DO(VM_OP_ENUM) VMOp_Terminating }; private: Thread* _calling_thread; // The VM operation name array static const char* _names[]; public: VM_Operation() : _calling_thread(NULL) {} // VM operation support (used by VM thread) Thread* calling_thread() const { return _calling_thread; } void set_calling_thread(Thread* thread); // Called by VM thread - does in turn invoke doit(). Do not override this void evaluate(); // evaluate() is called by the VMThread and in turn calls doit(). // If the thread invoking VMThread::execute((VM_Operation*) is a JavaThread, // doit_prologue() is called in that thread before transferring control to // the VMThread. // If doit_prologue() returns true the VM operation will proceed, and // doit_epilogue() will be called by the JavaThread once the VM operation // completes. If doit_prologue() returns false the VM operation is cancelled. virtual void doit() = 0; virtual bool doit_prologue() { return true; }; virtual void doit_epilogue() {}; // Configuration. Override these appropriately in subclasses. virtual VMOp_Type type() const = 0; virtual bool allow_nested_vm_operations() const { return false; } // You may override skip_thread_oop_barriers to return true if the operation // does not access thread-private oops (including frames). virtual bool skip_thread_oop_barriers() const { return false; } // An operation can either be done inside a safepoint // or concurrently with Java threads running. virtual bool evaluate_at_safepoint() const { return true; } // Debugging virtual void print_on_error(outputStream* st) const; virtual const char* name() const { return _names[type()]; } static const char* name(int type) { assert(type >= 0 && type < VMOp_Terminating, "invalid VM operation type"); return _names[type]; } #ifndef PRODUCT void print_on(outputStream* st) const { print_on_error(st); } #endif }; #endif // SHARE_RUNTIME_VMOPERATION_HPP
Solve Simple Equations (Mixed Decimals)
In this worksheet, students solve simple one-stage equations with decimals.
Key stage: KS 2
Curriculum topic: Maths and Numerical Reasoning
Curriculum subtopic: Equations and Algebra
Difficulty level:
Worksheet Overview
When we solve algebraic equations, our aim is to end up with one letter on one side of the equals sign and one number on the other. This is the solution.
We do this by using inverse operations to undo things that get in the way, but remember that we must do the same thing to both sides.
Example
Solve for a
5.2 - a = 10
Answer
Add a to both sides.
5.2 - a + a = 10 + a
Simplify
5.2 = 10 + a
Subtract 10 from both sides.
5.2 -10 = 10 - 10 + a
Simplify
-4.8 = a
a = -4.8
Example
5b = 24
Answer
Divide both sides by 5
5b ÷ 5 = 24 ÷ 5
Simplify
b = 4.8
Example
b ÷ 5 = 20.2
Answer
Multiply both sides by 5
b ÷ 5 x 5 = 20.2 x 5
Simplify
b = 101
Programming help
Discussion in 'Off-Topic' started by Dondrei, Nov 14, 2006.
1. Dondrei
Dondrei IncGamers Member
Okay, I'm stumped. I'm probably doing something really stupid again, but here goes:
These are the important parts of my code. This is inside a main loop, so the idea is to switch between states in every frame (well, I also built in a delay to do it every five frames, but you get the picture). For some reason the sprite suddenly stayed in state 0.
I found a number of other ways to do this, but I'd still like to know what was wrong with what I had.
2. AJK
AJK IncGamers Member
Try
Code:
spritestate = 0;
if (spritestate == 0) {spritestate = 1;} else {spritestate = 0;}
3. Dondrei
Dondrei IncGamers Member
Oh, for Christ's sake...
4. DPH
DPH IncGamers Member
That brought a smile to my face. Thanks
5. myleftfoot
myleftfoot IncGamers Member
You've got a long way to go Don. Best of luck.
6. SaroDarksbane
SaroDarksbane IncGamers Site Pal
Haha, nice one Donny.
If you only have 2 states, you might also try a Boolean.
Code:
bool spritestate = 0;
spritestate = !spritestate;
So it'll set the state to the opposite of whatever it was before, and you don't have to use a clunky If/Else structure.
7. DurfBarian
DurfBarian IncGamers Member
10 ?"DONDREI IS TEH DUM"
20 GOTO 10
I thought you already had a thread for your failures. ;-)
8. bladesyz
bladesyz IncGamers Member
lol, I can't believe I didn't pick that up at first. It's a classic!
9. SaroDarksbane
SaroDarksbane IncGamers Site Pal
This one is even better, and it's fun as a prank too!
Can you spot the error?
Code:
for(int x = 0; x < y; x++); {
// Do a bunch of stuff here
}
They'll spend hours trying to figure out why their code isn't working.
10. myleftfoot
myleftfoot IncGamers Member
These things can be shockingly time wasting :smiley:
11. bladesyz
bladesyz IncGamers Member
Semi-colon.
That didn't take me long, since I get those so often, I instinctively check for those almost right away.
12. ModeratelyConfused
ModeratelyConfused Banned
It's been a while since I've done any programming, but that's what stuck out the most.
13. Dondrei
Dondrei IncGamers Member
Nah, I thought of that but I'm planning to use the same variable for a number of different states.
Alright, even I got that one.
Shut up. Anyway, it's funny you should mention that because I'm still used to QBasic which doesn't have that silly "==" convention.
Get Imaginary Part
Description
This node displays the imaginary part of a complex number.
Input
The node has the following input:
• Complex Number: An input from any node that outputs a complex number.
Content
The node displays the imaginary part of the complex number provided in the input.
Output
• Scalar: This node outputs the imaginary part of a complex number
Example
In the example below, the overlap between a Gaussian function and a superposition state of a harmonic oscillator is calculated. The Overlap node displays a complex number whose imaginary part is displayed in the Get Imaginary Part node.
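For reference, the same operation outside the node editor looks like this in plain Python (my example, not part of the node's interface):

```python
# Extracting the imaginary part of a complex number in plain Python.
z = complex(3.0, 4.0)
print(z.imag)        # 4.0 -- the scalar this node would output
```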
1/*-
2 * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
3 *
4 * Copyright (c) 1998 Nicolas Souchu, Marc Bouget
5 * Copyright (c) 2004 Joerg Wunsch
6 * All rights reserved.
7 *
8 * Redistribution and use in source and binary forms, with or without
9 * modification, are permitted provided that the following conditions
10 * are met:
11 * 1. Redistributions of source code must retain the above copyright
12 * notice, this list of conditions and the following disclaimer.
13 * 2. Redistributions in binary form must reproduce the above copyright
14 * notice, this list of conditions and the following disclaimer in the
15 * documentation and/or other materials provided with the distribution.
16 *
17 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
18 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
19 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
20 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
21 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
22 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
23 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
24 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
25 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
26 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
27 * SUCH DAMAGE.
28 */
29
30#include <sys/cdefs.h>
31__FBSDID("$FreeBSD$");
32
33#include <sys/param.h>
34#include <sys/bus.h>
35#include <sys/lock.h>
36#include <sys/kernel.h>
37#include <sys/module.h>
38#include <sys/mutex.h>
39#include <sys/systm.h>
40
41#include <machine/bus.h>
42#include <machine/resource.h>
43
44#include <sys/rman.h>
45
46#include <dev/iicbus/iicbus.h>
47#include <dev/iicbus/iiconf.h>
48#include <dev/pcf/pcfvar.h>
49#include "iicbus_if.h"
50
51/* Not so official debugging option. */
52/* #define PCFDEBUG */
53
54static int pcf_wait_byte(struct pcf_softc *pcf);
55static int pcf_noack(struct pcf_softc *pcf, int timeout);
56static void pcf_stop_locked(struct pcf_softc *pcf);
57
58/*
59 * Polling mode for master operations wait for a new
60 * byte incoming or outgoing
61 */
62static int
63pcf_wait_byte(struct pcf_softc *sc)
64{
65 int counter = TIMEOUT;
66
67 PCF_ASSERT_LOCKED(sc);
68 while (counter--) {
69 if ((pcf_get_S1(sc) & PIN) == 0)
70 return (0);
71 }
72
73#ifdef PCFDEBUG
74 printf("pcf: timeout!\n");
75#endif
76
77 return (IIC_ETIMEOUT);
78}
79
80static void
81pcf_stop_locked(struct pcf_softc *sc)
82{
83
84 PCF_ASSERT_LOCKED(sc);
85#ifdef PCFDEBUG
86 device_printf(dev, " >> stop\n");
87#endif
88 /*
89 * Send STOP condition iff the START condition was previously sent.
90 * STOP is sent only once even if an iicbus_stop() is called after
91 * an iicbus_read()... see pcf_read(): the PCF needs to send the stop
92 * before the last char is read.
93 */
94 if (sc->pcf_started) {
95 /* set stop condition and enable IT */
96 pcf_set_S1(sc, PIN|ESO|ENI|STO|ACK);
97
98 sc->pcf_started = 0;
99 }
100}
101
102static int
103pcf_noack(struct pcf_softc *sc, int timeout)
104{
105 int noack;
106 int k = timeout/10;
107
108 PCF_ASSERT_LOCKED(sc);
109 do {
110 noack = pcf_get_S1(sc) & LRB;
111 if (!noack)
112 break;
113 DELAY(10); /* XXX wait 10 us */
114 } while (k--);
115
116 return (noack);
117}
118
119int
120pcf_repeated_start(device_t dev, u_char slave, int timeout)
121{
122 struct pcf_softc *sc = DEVTOSOFTC(dev);
123 int error = 0;
124
125 PCF_LOCK(sc);
126#ifdef PCFDEBUG
127 device_printf(dev, " >> repeated start for slave %#x\n",
128 (unsigned)slave);
129#endif
130 /* repeated start */
131 pcf_set_S1(sc, ESO|STA|STO|ACK);
132
133 /* set slave address to PCF. Last bit (LSB) must be set correctly
134 * according to transfer direction */
135 pcf_set_S0(sc, slave);
136
137 /* wait for address sent, polling */
138 if ((error = pcf_wait_byte(sc)))
139 goto error;
140
141 /* check for ack */
142 if (pcf_noack(sc, timeout)) {
143 error = IIC_ENOACK;
144#ifdef PCFDEBUG
145 printf("pcf: no ack on repeated_start!\n");
146#endif
147 goto error;
148 }
149
150 PCF_UNLOCK(sc);
151 return (0);
152
153error:
154 pcf_stop_locked(sc);
155 PCF_UNLOCK(sc);
156 return (error);
157}
158
159int
160pcf_start(device_t dev, u_char slave, int timeout)
161{
162 struct pcf_softc *sc = DEVTOSOFTC(dev);
163 int error = 0;
164
165 PCF_LOCK(sc);
166#ifdef PCFDEBUG
167 device_printf(dev, " >> start for slave %#x\n", (unsigned)slave);
168#endif
169 if ((pcf_get_S1(sc) & nBB) == 0) {
170#ifdef PCFDEBUG
171 printf("pcf: busy!\n");
172#endif
173 PCF_UNLOCK(sc);
174 return (IIC_EBUSERR);
175 }
176
177 /* set slave address to PCF. Last bit (LSB) must be set correctly
178 * according to transfer direction */
179 pcf_set_S0(sc, slave);
180
181 /* START only */
182 pcf_set_S1(sc, PIN|ESO|STA|ACK);
183
184 sc->pcf_started = 1;
185
186 /* wait for address sent, polling */
187 if ((error = pcf_wait_byte(sc)))
188 goto error;
189
190 /* check for ACK */
191 if (pcf_noack(sc, timeout)) {
192 error = IIC_ENOACK;
193#ifdef PCFDEBUG
194 printf("pcf: no ack on start!\n");
195#endif
196 goto error;
197 }
198
199 PCF_UNLOCK(sc);
200 return (0);
201
202error:
203 pcf_stop_locked(sc);
204 PCF_UNLOCK(sc);
205 return (error);
206}
207
208int
209pcf_stop(device_t dev)
210{
211 struct pcf_softc *sc = DEVTOSOFTC(dev);
212
213#ifdef PCFDEBUG
214 device_printf(dev, " >> stop\n");
215#endif
216 PCF_LOCK(sc);
217 pcf_stop_locked(sc);
218 PCF_UNLOCK(sc);
219
220 return (0);
221}
222
223void
224pcf_intr(void *arg)
225{
226 struct pcf_softc *sc = arg;
227 char data, status, addr;
228 char error = 0;
229
230 PCF_LOCK(sc);
231 status = pcf_get_S1(sc);
232
233 if (status & PIN) {
234 printf("pcf: spurious interrupt, status=0x%x\n",
235 status & 0xff);
236
237 goto error;
238 }
239
240 if (status & LAB)
241 printf("pcf: bus arbitration lost!\n");
242
243 if (status & BER) {
244 error = IIC_EBUSERR;
245 iicbus_intr(sc->iicbus, INTR_ERROR, &error);
246
247 goto error;
248 }
249
250 do {
251 status = pcf_get_S1(sc);
252
253 switch(sc->pcf_slave_mode) {
254 case SLAVE_TRANSMITTER:
255 if (status & LRB) {
256 /* ack interrupt line */
257 dummy_write(sc);
258
259 /* no ack, don't send anymore */
260 sc->pcf_slave_mode = SLAVE_RECEIVER;
261
262 iicbus_intr(sc->iicbus, INTR_NOACK, NULL);
263 break;
264 }
265
266 /* get data from upper code */
267 iicbus_intr(sc->iicbus, INTR_TRANSMIT, &data);
268
269 pcf_set_S0(sc, data);
270 break;
271
272 case SLAVE_RECEIVER:
273 if (status & AAS) {
274 addr = pcf_get_S0(sc);
275
276 if (status & AD0)
277 iicbus_intr(sc->iicbus, INTR_GENERAL, &addr);
278 else
279 iicbus_intr(sc->iicbus, INTR_START, &addr);
280
281 if (addr & LSB) {
282 sc->pcf_slave_mode = SLAVE_TRANSMITTER;
283
284 /* get the first char from upper code */
285 iicbus_intr(sc->iicbus, INTR_TRANSMIT, &data);
286
287 /* send first data byte */
288 pcf_set_S0(sc, data);
289 }
290
291 break;
292 }
293
294 /* stop condition received? */
295 if (status & STS) {
296 /* ack interrupt line */
297 dummy_read(sc);
298
299 /* emulate intr stop condition */
300 iicbus_intr(sc->iicbus, INTR_STOP, NULL);
301
302 } else {
303 /* get data, ack interrupt line */
304 data = pcf_get_S0(sc);
305
306 /* deliver the character */
307 iicbus_intr(sc->iicbus, INTR_RECEIVE, &data);
308 }
309 break;
310
311 default:
312 panic("%s: unknown slave mode (%d)!", __func__,
313 sc->pcf_slave_mode);
314 }
315
316 } while ((pcf_get_S1(sc) & PIN) == 0);
317 PCF_UNLOCK(sc);
318
319 return;
320
321error:
322 /* unknown event on bus...reset PCF */
323 pcf_set_S1(sc, PIN|ESO|ENI|ACK);
324
325 sc->pcf_slave_mode = SLAVE_RECEIVER;
326 PCF_UNLOCK(sc);
327
328 return;
329}
330
331int
332pcf_rst_card(device_t dev, u_char speed, u_char addr, u_char *oldaddr)
333{
334 struct pcf_softc *sc = DEVTOSOFTC(dev);
335
336 PCF_LOCK(sc);
337 if (oldaddr)
338 *oldaddr = sc->pcf_addr;
339
340 /* retrieve own address from bus level */
341 if (!addr)
342 sc->pcf_addr = PCF_DEFAULT_ADDR;
343 else
344 sc->pcf_addr = addr;
345
346 pcf_set_S1(sc, PIN); /* initialize S1 */
347
348 /* own address S'O<>0 */
349 pcf_set_S0(sc, sc->pcf_addr >> 1);
350
351 /* select clock register */
352 pcf_set_S1(sc, PIN|ES1);
353
354 /* select bus speed : 18=90kb, 19=45kb, 1A=11kb, 1B=1.5kb */
355 switch (speed) {
356 case IIC_SLOW:
357 pcf_set_S0(sc, 0x1b); /* XXX Sun uses 0x1f */
358 break;
359
360 case IIC_FAST:
361 pcf_set_S0(sc, 0x19); /* XXX Sun: 0x1d */
362 break;
363
364 case IIC_UNKNOWN:
365 case IIC_FASTEST:
366 default:
367 pcf_set_S0(sc, 0x18); /* XXX Sun: 0x1c */
368 break;
369 }
370
371 /* set bus on, ack=yes, INT=yes */
372 pcf_set_S1(sc, PIN|ESO|ENI|ACK);
373
374 sc->pcf_slave_mode = SLAVE_RECEIVER;
375 PCF_UNLOCK(sc);
376
377 return (0);
378}
379
380int
381pcf_write(device_t dev, const char *buf, int len, int *sent, int timeout /* us */)
382{
383 struct pcf_softc *sc = DEVTOSOFTC(dev);
384 int bytes, error = 0;
385
386#ifdef PCFDEBUG
387 device_printf(dev, " >> writing %d bytes: %#x%s\n", len,
388 (unsigned)buf[0], len > 1? "...": "");
389#endif
390
391 bytes = 0;
392 PCF_LOCK(sc);
393 while (len) {
394 pcf_set_S0(sc, *buf++);
395
396 /* wait for the byte to be send */
397 if ((error = pcf_wait_byte(sc)))
398 goto error;
399
400 /* check if ack received */
401 if (pcf_noack(sc, timeout)) {
402 error = IIC_ENOACK;
403 goto error;
404 }
405
406 len --;
407 bytes ++;
408 }
409
410error:
411 *sent = bytes;
412 PCF_UNLOCK(sc);
413
414#ifdef PCFDEBUG
415 device_printf(dev, " >> %d bytes written (%d)\n", bytes, error);
416#endif
417
418 return (error);
419}
420
421int
422pcf_read(device_t dev, char *buf, int len, int *read, int last,
423 int delay /* us */)
424{
425 struct pcf_softc *sc = DEVTOSOFTC(dev);
426 int bytes, error = 0;
427#ifdef PCFDEBUG
428 char *obuf = buf;
429
430 device_printf(dev, " << reading %d bytes\n", len);
431#endif
432
433 PCF_LOCK(sc);
434 /* trig the bus to get the first data byte in S0 */
435 if (len) {
436 if (len == 1 && last)
437 /* just one byte to read */
438 pcf_set_S1(sc, ESO); /* no ack */
439
440 dummy_read(sc);
441 }
442
443 bytes = 0;
444 while (len) {
445 /* XXX delay needed here */
446
447 /* wait for trigged byte */
448 if ((error = pcf_wait_byte(sc))) {
449 pcf_stop_locked(sc);
450 goto error;
451 }
452
453 if (len == 1 && last)
454 /* ok, last data byte already in S0, no I2C activity
455 * on next pcf_get_S0() */
456 pcf_stop_locked(sc);
457
458 else if (len == 2 && last)
459 /* next trigged byte with no ack */
460 pcf_set_S1(sc, ESO);
461
462 /* receive byte, trig next byte */
463 *buf++ = pcf_get_S0(sc);
464
465 len --;
466 bytes ++;
467 }
468
469error:
470 *read = bytes;
471 PCF_UNLOCK(sc);
472
473#ifdef PCFDEBUG
474 device_printf(dev, " << %d bytes read (%d): %#x%s\n", bytes, error,
475 (unsigned)obuf[0], bytes > 1? "...": "");
476#endif
477
478 return (error);
479}
480
481DRIVER_MODULE(iicbus, pcf, iicbus_driver, iicbus_devclass, 0, 0);
482MODULE_DEPEND(pcf, iicbus, PCF_MINVER, PCF_PREFVER, PCF_MAXVER);
483MODULE_VERSION(pcf, PCF_MODVER);
484
Monday, 28 March 2016
INTRODUCTION TO GRID COMPUTING
Definition
Grid computing is a way of combining the resources of many computers connected in a network so that they form a single computing system whose total computing power approaches the sum of the resources of the computers that make it up. The original idea of grid computing began with
distributed computing, the study of the coordinated use of physically separate or distributed computers. Distributed systems require applications that differ from those of centralized systems. This later developed into parallel computing, a technique for performing computations simultaneously across several computers. Grid computing offers an inexpensive computing solution by exploiting scattered, heterogeneous resources that can easily be accessed from anywhere. Globus Toolkit is a collection of open-source software and libraries for building grid computing environments.
Grid Computing Architecture
GRAM (Grid Resource Allocation And Management)
This component is responsible for managing all of the computing resources available in the grid computing system. This management covers the execution of programs on every computer joined to the grid, from initiation and monitoring through to scheduling and coordination between processes.
RFT/GridFTP (Reliable File Transfer/Grid File Transfer Protocol)
This component lets users access large data sets on the compute nodes of the grid efficiently and reliably. This matters because computing performance depends not only on how fast the computers in the grid execute programs, but also on how quickly the data needed for the computation can be accessed. Keep in mind that the data needed by a process is not always located on the computer that executes that process.
MDS (Monitoring and Discovery Service)
This component allows users of a GT4 system to monitor running computations so that any problems that arise can be detected immediately. The discovery aspect of this component lets users identify the existence of a computing resource along with its characteristics.
GSI (Grid Security Infrastructure)
This component is responsible for the security of the grid computing system as a whole. It is also one of the features that distinguishes GT4 from earlier technologies such as PVM or MPI. Because its security mechanism is integrated with the other grid computing components, a system based on grid computing technology such as GT4 can be exposed to the public (WAN) without lowering its level of security.
How Grid Computing Works
According to a short paper by Ian Foster, there is a checklist that can be used to identify whether a system performs grid computing:
1. The system coordinates computing resources that are not under centralized control. If the resources used all fall within a single administrative domain, the computation cannot yet be called grid computing.
2. The system uses standards and protocols that are open (not tied to a particular implementation or product). Grid computing is built on agreements about the fundamental issues that must be solved to achieve large-scale shared computing. The agreements and standards needed cover authentication, authorization, resource discovery, and resource access.
3. The system strives to deliver a nontrivial quality of service, well above the quality of service of the individual components of the grid.
Something that must not be forgotten is the set of elements of grid computing; they cannot be separated from it. The elements of grid computing are:
· Hardware
· Software
· Brainware (the human element)
Sources:
http://febbri-grunge.blogspot.co.id/2015/06/komputasi-grid-grid-computing.html
Masruro, Ahlihi, Komputasi Grid Sebagai Jawaban Keterbatasan Sumber Daya Komputasi, http://febbri-grunge.blogspot.co.id/2015/06/komputasi-grid-grid-computing.html, 23 March 2016.
How does DNS work? For dummies!
I get a lot of people asking how DNS works. My response is always that your computer asks a server what IP address a domain points to and then goes and makes its request to that address. But how does DNS really work, and why does it work the way it does? That question is a little more involved, but I've tried to answer it in the simplest way possible without dumbing it down too much.
The Basics:
https://mydomain.org.uk is a domain. When you type that into your address bar, your computer asks a server called a Domain Name Server (DNS) where to load that site from. The DNS Server responds with an IP Address, which works just like a house number & postcode in the real world: an IP address can be used to find a server on the internet.
Your browser then goes to that IP Address and says, what content have you got for https://mydomain.org.uk, and if that server does indeed host the website, you’ll see it load up in your browser.
It's the same for anything internet related. When you send an email to [email protected], the mail server you're sending from asks a DNS Server for the IP Address of the mail server that it needs to send to and then, once it has a response, sends the email to it.
The main domain record types:
• A: This is what’s called a host record, it contains an IP address of a server that hosts a service.
• CNAME: This is just a pointer; it represents the same value as another record. For example, www.mydomain.org.uk has been configured as a CNAME because it's hosted at the same place as mydomain.org.uk (which is an A record). This is done so that if we ever change the server IP address of our website, we only have to update the one record and the change will be reflected on both, though it does require an extra lookup.
• MX: A Mail Exchanger record specifies what mail servers are responsible for a domain and where to find the mailboxes for email addresses you might want to send to.
But how does my DNS Server know what server hosts that domain?
A DNS Server might not know who hosts https://mydomain.org.uk, so it goes off and asks further up the chain. Strictly speaking it first asks a root name server, which points it to the servers responsible for all .org.uk domains, and it then asks them who has the records for https://mydomain.org.uk. That server returns the address of the DNS Server that knows (because it has a list of servers that are responsible for managing the records on that domain), and the answer is then cached for anyone who might ask later on. This makes the first lookup a little slow but subsequent lookups a lot quicker.
So why does it cache?
Without caching, websites would load very slowly because of the number of computers in the chain to load a website. If your computer, for every file, image, document and stylesheet had to find out what server hosted it every time, imagine the round trip it’d have to take and how many requests those servers would be answering! It could take seconds, just to find out where a file should be downloaded from. That’s a pretty crappy user experience.
For example, you load an image in your browser, such as https://www.mydomain.org.uk/Images/logo.png, your computer first checks its internal cache, it has nothing, so it will go and ask the DNS Server within the company building if it knows, and guess what? It has no idea either, so it goes and asks the name servers that it’s been told to reference, let’s say for example Google’s domain name servers at the address 8.8.8.8, they don’t know either, so they ask a root name server which has records for the .org.uk domains, they forward the request on to the server that has the records for mydomain.org.uk and then the information is passed down the chain until it reaches your machine. Usually each machine in the chain will then cache the result so it doesn’t have to go off and do the lookup again, it can just go and ask the web server directly.
How does it know how long to cache for?
Each record on a DNS Server (no matter what type) has what's called a TTL, which stands for Time To Live. This tells your computer and any DNS Servers that have cached the record to go and look up a fresh copy of the record after that time has passed; it's normally a few hours and is stored in seconds. This is because the data might have changed in that time, so if we did want to change a server address at some point, we can change the TTL from 24 hours to 1 hour, which tells all servers to update or at least invalidate their cache after 1 hour instead of 24. But be careful: because it was set to 24 hours previously, it'll take up to 24 hours for the TTL change to reach all servers/computers involved.
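If you want to see a lookup happen from code rather than a browser, here's a minimal Python sketch using only the standard library. It relies on the operating system's resolver (and therefore its cache), and mydomain.org.uk is just the placeholder domain used throughout this post, so substitute a real one:

    import socket

    # Ask the operating system's resolver (which uses its own cache and the
    # configured DNS servers) for the addresses behind a hostname.
    addresses = {info[4][0] for info in socket.getaddrinfo("mydomain.org.uk", 443)}
    print(addresses)

Command line tools like nslookup and dig do the same job, and they also show the TTL on each record they return.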
I realise this is a little involved for a newbie, but I tried to make it fairly simple without dumbing down too much, any questions, just ask!
Published at
Tags: DNS,Internet,Web
Luke Alderton
Introduction to Circles
An angle is formed when two lines intersect or meet at a point. It can also be defined as the measure of turn between two lines. Angles are measured in degrees or radians and can be of different types.
There are four circle topics covered here:
• Central angle of a circle
• Tangent of a circle
• Unit circle
• Great circle
Tangent Circle Formula
In geometry, the tangent of a circle is the straight line that touches the circle at exactly one point and never enters the interior of the circle. (Angles can also be formed by the intersection of two secants, two tangents, or one tangent and one secant.) Tangent formulas are applied in various theorems and are also used in geometrical constructions and proofs.
The properties of a tangent are:
• It touches the circle at exactly one point.
• It is perpendicular to the radius drawn to the point of tangency.
The equation of the tangent is written as
\[\large \left(y-y_{0}\right)=m_{tgt}\left(x-x_{0}\right)\]
where (x0, y0) is the point of tangency and m_tgt is the slope of the tangent line.
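As a quick numerical illustration (not from the original article), here is a small Python sketch that finds the tangent line to a circle at a point on it, using the fact that the tangent is perpendicular to the radius. The circle and point are made-up values:

    # Circle of radius 5 centred at (h, k) = (1, 2); point of tangency (4, 6).
    h, k = 1.0, 2.0
    x0, y0 = 4.0, 6.0          # (4-1)**2 + (6-2)**2 = 25, so the point lies on the circle

    radius_slope = (y0 - k) / (x0 - h)   # slope of the radius to the point
    m_tgt = -1.0 / radius_slope          # tangent is perpendicular to the radius
    # (special-casing horizontal or vertical radii is left out of this sketch)

    # Tangent line: (y - y0) = m_tgt * (x - x0)
    print(f"y - {y0} = {m_tgt:.2f} * (x - {x0})")   # y - 6.0 = -0.75 * (x - 4.0)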
Great Circle Formula
A great circle is the largest circle that can be drawn on the surface of a sphere. The shortest distance between two points on the surface of the sphere is measured along a great circle; traditionally, a great circle with this arc-length metric is known as a Riemannian circle. The diameter of a great circle coincides with the diameter of the sphere. Great circle distances are used for the navigation of ships and aircraft.
The great circle distance formula is
\[\large d=r\cos^{-1}\left[\cos \delta_{1}\cos \delta_{2}\cos\left(\lambda_{1}-\lambda_{2}\right)+\sin \delta_{1}\sin \delta_{2}\right]\]
Where,
r is the radius of the sphere (for the Earth, the Earth's radius)
\(\delta_{1},\delta_{2}\) are the latitudes of the two points
\(\lambda_{1},\lambda_{2}\) are the longitudes of the two points
and the inverse cosine is taken in radians.
Question 1: Find the great circle distance if the radius is 4.7 km, the latitudes are (45°, 32°) and the longitudes are (24°, 17°).
Solution:
Given,
\[\large \delta_{1},\delta_{2}=45^{\circ},32^{\circ}\]
\[\large \lambda_{1},\lambda_{2}=24^{\circ},17^{\circ}\]
r = 4.7 km = 4700 m
Using the above given formula,
\[\large d=4700\;\cos^{-1}\left[(0.7071\times 0.8480\times 0.9925)+(0.7071\times 0.5299)\right]\]
\[\large d=4700\;\cos^{-1}(0.9699)\]
\[\large d=4700\times 0.246\]
d ≈ 1156 m
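The same calculation in a short Python sketch (added here as an illustration, not part of the original article), using the math module; angles are converted to radians before taking sines and cosines:

    import math

    def great_circle_distance(r, lat1, lat2, lon1, lon2):
        """Distance along a great circle; latitudes and longitudes in degrees."""
        d1, d2 = math.radians(lat1), math.radians(lat2)
        l1, l2 = math.radians(lon1), math.radians(lon2)
        central = math.acos(math.cos(d1) * math.cos(d2) * math.cos(l1 - l2)
                            + math.sin(d1) * math.sin(d2))
        return r * central

    # The worked example: r = 4700 m, latitudes 45° and 32°, longitudes 24° and 17°.
    print(round(great_circle_distance(4700, 45, 32, 24, 17)))  # about 1156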
Unit Circle Formula
Any circle having radius one is called a unit circle in mathematics. Unit circles are useful in trigonometry, where the unit circle is usually taken to be the circle of radius one centered at the origin (0,0) of the Cartesian coordinate system in the Euclidean plane.
The general equation of circle is given below:
\[\large \left(x-h\right)^{2}+\left(y-k\right)^{2}=r^{2}\]
Where (h, k) are center coordinates and r is the radius.
The unit circle formula is:
\[\large x^{2}+y^{2}=1\]
Where x and y are the coordinate values.
Question: Show that the point \[\large P\left[\frac{\sqrt{3}}{3},\,\frac{\sqrt{2}}{\sqrt{3}}\right]\] is on the unit circle.
Solution:
We need to show that this point satisfies the equation of the unit circle, that is: \[\large x^{2}+y^{2}=1\]
\[\large \left[\frac{\sqrt{3}}{3}\right]^{2}+\left[\frac{\sqrt{2}}{\sqrt{3}}\right]^{2}\]
\[\large =\frac{3}{9}+\frac{2}{3}\]
\[\large =\frac{1}{3}+\frac{2}{3}\]
= 1
Therefore P is on the unit circle.
Central Angle of a Circle Formula
A central angle is formed between two radii of a circle. The two points where the radii meet the circle mark off an arc, and the distance along the circle between those points is the arc length, denoted by l in geometry.
A central angle is formed at the center of the circle where two radii meet or intersect. The other term needed for the definition of a central angle is the vertex: the vertex of an angle is the point where its two sides meet. The vertex of a central angle is always the center point of the circle.
Formula to Find the Central Angle of a Circle –
\[\LARGE Central\;Angle\;\theta =\frac{Arc\;Length\times 360}{2\pi r}\]
Example 1: Find the central angle, where the arc length measurement is about 20 cm and the length of the radius measures 10 cm?
Solution:
Given r = 10 cm
Arc length = 20 cm
The formula of central angle is,
Central Angle θ = \[\LARGE \frac{Arc Length \times 360}{2 \times\pi \times r}\]
Central Angle θ = \[\LARGE \frac{20 \times 360}{2 \times 3.14 \times 10}\]
Central Angle θ = \[\LARGE \frac{7200}{62.8}\] = 114.64°
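The same arithmetic in a few lines of Python (added here as an illustration, not part of the original article); it reproduces Example 1 and can be reused for any arc length and radius:

    import math

    def central_angle_degrees(arc_length, radius):
        return (arc_length * 360) / (2 * math.pi * radius)

    # About 114.59°; the worked example above rounds pi to 3.14 and gets 114.64°.
    print(round(central_angle_degrees(20, 10), 2))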
Example 2: If the central angle of a circle is 82.4° and the arc length formed is 23 cm, find the radius of the circle.
Solution:
Given Arc length = 23 cm and Central Angle θ = 82.4°
The formula of central angle is,
Central Angle θ = \[\LARGE \frac{Arc\;Length \times 360}{2\times\pi \times r}\]
82.4° = \[\LARGE \frac{23 \times 360}{2\times\pi \times r}\]
Solving for r:
r = \[\LARGE \frac{23 \times 360}{2\times\pi \times 82.4}\] = \[\LARGE \frac{8280}{517.7}\] ≈ 16 cm
When two radii meet, more than one angle is formed: if the central angle measures less than 180 degrees it is a convex central angle, and if it measures more than 180 degrees it is a reflex central angle.
With this discussion, you have a clear understanding of the different angles of a circle and their formulas. You just have to put the values into the formulas to calculate the angle for real-world problems too.
[CalendarServer-changes] [2206] CalendarServer/trunk/twistedcaldav
source_changes at macosforge.org source_changes at macosforge.org
Thu Mar 6 12:12:33 PST 2008
Revision: 2206
http://trac.macosforge.org/projects/calendarserver/changeset/2206
Author: wsanchez at apple.com
Date: 2008-03-06 12:12:32 -0800 (Thu, 06 Mar 2008)
Log Message:
-----------
Store calendar data by GUID.
Modified Paths:
--------------
CalendarServer/trunk/twistedcaldav/directory/calendar.py
CalendarServer/trunk/twistedcaldav/directory/principal.py
CalendarServer/trunk/twistedcaldav/extensions.py
CalendarServer/trunk/twistedcaldav/schedule.py
CalendarServer/trunk/twistedcaldav/static.py
Modified: CalendarServer/trunk/twistedcaldav/directory/calendar.py
===================================================================
--- CalendarServer/trunk/twistedcaldav/directory/calendar.py 2008-03-06 20:12:04 UTC (rev 2205)
+++ CalendarServer/trunk/twistedcaldav/directory/calendar.py 2008-03-06 20:12:32 UTC (rev 2206)
@@ -19,8 +19,10 @@
"""
__all__ = [
+ "uidsResourceName",
"DirectoryCalendarHomeProvisioningResource",
"DirectoryCalendarHomeTypeProvisioningResource",
+ "DirectoryCalendarHomeUIDProvisioningResource",
"DirectoryCalendarHomeResource",
]
@@ -39,6 +41,9 @@
from twistedcaldav.directory.idirectory import IDirectoryService
from twistedcaldav.directory.resource import AutoProvisioningResourceMixIn
+# Use __underbars__ convention to avoid conflicts with directory resource types.
+uidsResourceName = "__uids__"
+
class DirectoryCalendarHomeProvisioningResource (AutoProvisioningResourceMixIn, ReadOnlyResourceMixIn, DAVResource):
"""
Resource which provisions calendar home collections as needed.
@@ -59,10 +64,17 @@
# FIXME: Smells like a hack
directory.calendarHomesCollection = self
+ #
# Create children
+ #
+ def provisionChild(name):
+ self.putChild(name, self.provisionChild(name))
+
for recordType in self.directory.recordTypes():
- self.putChild(recordType, self.provisionChild(recordType))
+ provisionChild(recordType)
+ provisionChild(uidsResourceName)
+
def provisionChild(self, recordType):
raise NotImplementedError("Subclass must implement provisionChild()")
@@ -86,11 +98,11 @@
return self.directory.principalCollection.principalForRecord(record)
def homeForDirectoryRecord(self, record):
- typeResource = self.getChild(record.recordType)
- if typeResource is None:
+ uidResource = self.getChild(uidsResourceName)
+ if uidResource is None:
return None
else:
- return typeResource.getChild(record.shortName)
+ return uidResource.getChild(record.guid)
##
# DAV
@@ -113,8 +125,7 @@
"""
def __init__(self, parent, recordType):
"""
- @param path: the path to the file which will back the resource.
- @param directory: an L{IDirectoryService} to provision calendars from.
+ @param parent: the parent of this resource
@param recordType: the directory record type to provision.
"""
assert parent is not None
@@ -131,7 +142,6 @@
def getChild(self, name, record=None):
self.provision()
-
if name == "":
return self
@@ -139,11 +149,8 @@
record = self.directory.recordWithShortName(self.recordType, name)
if record is None:
return None
- else:
- assert name is None
- name = record.shortName
- return self.provisionChild(name)
+ return self._parent.homeForDirectoryRecord(record)
def listChildren(self):
return (
@@ -173,6 +180,61 @@
return self._parent.principalForRecord(record)
+class DirectoryCalendarHomeUIDProvisioningResource (AutoProvisioningResourceMixIn, ReadOnlyResourceMixIn, DAVResource):
+ def __init__(self, parent):
+ """
+ @param parent: the parent of this resource
+ """
+ assert parent is not None
+
+ DAVResource.__init__(self)
+
+ self.directory = parent.directory
+ self.parent = parent
+
+ def url(self):
+ return joinURL(self.parent.url(), uidsResourceName)
+
+ def getChild(self, name, record=None):
+ self.provision()
+ if name == "":
+ return self
+
+ if record is None:
+ record = self.directory.recordWithGUID(name)
+ if record is None:
+ return None
+
+ return self.provisionChild(name)
+
+ def listChildren(self):
+ return (
+ record.guid
+ for record in self.directory.listRecords(self.recordType)
+ if record.enabledForCalendaring
+ )
+
+ ##
+ # DAV
+ ##
+
+ def isCollection(self):
+ return True
+
+ ##
+ # ACL
+ ##
+
+ def defaultAccessControlList(self):
+ return readOnlyACL
+
+ def principalCollections(self):
+ return self.parent.principalCollections()
+
+ def principalForRecord(self, record):
+ return self.parent.principalForRecord(record)
+
+
class DirectoryCalendarHomeResource (AutoProvisioningResourceMixIn, CalDAVResource):
"""
Calendar home collection resource.
@@ -187,7 +249,7 @@
CalDAVResource.__init__(self)
self.record = record
- self._parent = parent
+ self.parent = parent
# Cache children which must be of a specific type
childlist = (
@@ -243,7 +305,13 @@
raise NotImplementedError("Subclass must implement provisionChild()")
def url(self):
- return joinURL(self._parent.url(), self.record.shortName)
+ return joinURL(self.parent.url(), self.record.guid)
+ ##
+ ## While the underlying primary location is GUID-based, we want
+ ## the canonical user-facing location to be recordType &
+ ## shortName-based, because that's friendlier.
+ ##
+ #return joinURL(self.parent.parent.getChild(self.record.recordType).url(), self.record.shortName)
##
# DAV
@@ -260,8 +328,6 @@
return succeed(davxml.HRef(self.principalForRecord().principalURL()))
def defaultAccessControlList(self):
- # FIXME: directory.principalCollection smells like a hack
- # See DirectoryPrincipalProvisioningResource.__init__()
myPrincipal = self.principalForRecord()
aces = (
@@ -306,10 +372,10 @@
return davxml.ACL(*aces)
def principalCollections(self):
- return self._parent.principalCollections()
+ return self.parent.principalCollections()
def principalForRecord(self):
- return self._parent.principalForRecord(self.record)
+ return self.parent.principalForRecord(self.record)
##
# Quota
Modified: CalendarServer/trunk/twistedcaldav/directory/principal.py
===================================================================
--- CalendarServer/trunk/twistedcaldav/directory/principal.py 2008-03-06 20:12:04 UTC (rev 2205)
+++ CalendarServer/trunk/twistedcaldav/directory/principal.py 2008-03-06 20:12:32 UTC (rev 2206)
@@ -19,8 +19,10 @@
"""
__all__ = [
+ "DirectoryProvisioningResource",
"DirectoryPrincipalProvisioningResource",
"DirectoryPrincipalTypeProvisioningResource",
+ "DirectoryPrincipalUIDProvisioningResource",
"DirectoryPrincipalResource",
"DirectoryCalendarPrincipalResource",
]
@@ -58,11 +60,7 @@
def defaultAccessControlList(self):
return authReadACL
- def accessControlList(self, request, inheritance=True, expanding=False, inherited_aces=None):
- # Permissions here are fixed, and are not subject to inherritance rules, etc.
- return succeed(self.defaultAccessControlList())
-
-class DirectoryProvisioningResource(
+class DirectoryProvisioningResource (
AutoProvisioningFileMixIn,
PermissionsMixIn,
CalendarPrincipalCollectionResource,
@@ -184,6 +182,8 @@
if record is not None:
return self.principalForRecord(record)
+ log.err("No principal for calendar user address: %r" % (address,))
+
return None
##
Modified: CalendarServer/trunk/twistedcaldav/extensions.py
===================================================================
--- CalendarServer/trunk/twistedcaldav/extensions.py 2008-03-06 20:12:04 UTC (rev 2205)
+++ CalendarServer/trunk/twistedcaldav/extensions.py 2008-03-06 20:12:32 UTC (rev 2206)
@@ -620,6 +620,10 @@
def writeProperty(self, property, request):
raise HTTPError(self.readOnlyResponse)
+ def accessControlList(self, request, inheritance=True, expanding=False, inherited_aces=None):
+ # Permissions here are fixed, and are not subject to inherritance rules, etc.
+ return succeed(self.defaultAccessControlList())
+
class XMLResponse (Response):
"""
XML L{Response} object.
Modified: CalendarServer/trunk/twistedcaldav/schedule.py
===================================================================
--- CalendarServer/trunk/twistedcaldav/schedule.py 2008-03-06 20:12:04 UTC (rev 2205)
+++ CalendarServer/trunk/twistedcaldav/schedule.py 2008-03-06 20:12:32 UTC (rev 2206)
@@ -35,7 +35,6 @@
from twisted.web2.dav.util import joinURL
from twistedcaldav import caldavxml
-from twistedcaldav import customxml
from twistedcaldav import itip
from twistedcaldav import logging
from twistedcaldav.resource import CalDAVResource
@@ -355,7 +354,7 @@
# Map recipient to their inbox
inbox = None
if principal is None:
- logging.err("No principal for calendar user address: %s" % (recipient,), system="CalDAV Outbox POST")
+ logging.err("No schedulable principal for calendar user address: %r" % (recipient,), system="CalDAV Outbox POST")
else:
inboxURL = principal.scheduleInboxURL()
if inboxURL:
Modified: CalendarServer/trunk/twistedcaldav/static.py
===================================================================
--- CalendarServer/trunk/twistedcaldav/static.py 2008-03-06 20:12:04 UTC (rev 2205)
+++ CalendarServer/trunk/twistedcaldav/static.py 2008-03-06 20:12:32 UTC (rev 2206)
@@ -22,7 +22,9 @@
"CalDAVFile",
"AutoProvisioningFileMixIn",
"CalendarHomeProvisioningFile",
+ "CalendarHomeUIDProvisioningFile",
"CalendarHomeFile",
+ "ScheduleFile",
"ScheduleInboxFile",
"ScheduleOutboxFile",
"DropBoxHomeFile",
@@ -62,8 +64,10 @@
from twistedcaldav.resource import CalDAVResource, isCalendarCollectionResource, isPseudoCalendarCollectionResource
from twistedcaldav.schedule import ScheduleInboxResource, ScheduleOutboxResource
from twistedcaldav.dropbox import DropBoxHomeResource, DropBoxCollectionResource, DropBoxChildResource
+from twistedcaldav.directory.calendar import uidsResourceName
from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
from twistedcaldav.directory.calendar import DirectoryCalendarHomeTypeProvisioningResource
+from twistedcaldav.directory.calendar import DirectoryCalendarHomeUIDProvisioningResource
from twistedcaldav.directory.calendar import DirectoryCalendarHomeResource
from twistedcaldav.directory.resource import AutoProvisioningResourceMixIn
@@ -439,43 +443,68 @@
DirectoryCalendarHomeProvisioningResource.__init__(self, directory, url)
def provisionChild(self, name):
+ if name == uidsResourceName:
+ return CalendarHomeUIDProvisioningFile(self.fp.child(name).path, self)
+
return CalendarHomeTypeProvisioningFile(self.fp.child(name).path, self, name)
def createSimilarFile(self, path):
raise HTTPError(responsecode.NOT_FOUND)
class CalendarHomeTypeProvisioningFile (AutoProvisioningFileMixIn, DirectoryCalendarHomeTypeProvisioningResource, DAVFile):
- """
- Resource which provisions calendar home collections of a specific
- record type as needed.
- """
def __init__(self, path, parent, recordType):
"""
@param path: the path to the file which will back the resource.
- @param directory: an L{IDirectoryService} to provision calendars from.
+ @param parent: the parent of this resource
@param recordType: the directory record type to provision.
"""
DAVFile.__init__(self, path)
DirectoryCalendarHomeTypeProvisioningResource.__init__(self, parent, recordType)
+class CalendarHomeUIDProvisioningFile (AutoProvisioningFileMixIn, DirectoryCalendarHomeUIDProvisioningResource, DAVFile):
+ def __init__(self, path, parent):
+ """
+ @param path: the path to the file which will back the resource.
+ """
+ DAVFile.__init__(self, path)
+ DirectoryCalendarHomeUIDProvisioningResource.__init__(self, parent)
+
def provisionChild(self, name):
- record = self.directory.recordWithShortName(self.recordType, name)
+ record = self.directory.recordWithGUID(name)
if record is None:
- log.msg("No directory record %r of type %r" % (name, self.recordType))
+ log.msg("No directory record with GUID %r" % (name,))
return None
if not record.enabledForCalendaring:
- log.msg("Directory record %r of type %r is not enabled for calendaring" % (name, self.recordType))
+ log.msg("Directory record %r is not enabled for calendaring" % (record,))
return None
- child = CalendarHomeFile(self.fp.child(name).path, self, record)
+ childPath = self.fp.child(name)
+ child = CalendarHomeFile(childPath.path, self, record)
if not child.exists():
- # NOTE: provisionDefaultCalendars() returns a deferred, which we are ignoring.
- # The result being that the default calendars will be present at some point
- # in the future, not necessarily right now, and we don't have a way to wait
- # on that to finish.
- child.provisionDefaultCalendars()
+ #
+ # Find out if the child exists at the old (pre-1.2)
+ # location (ie. in the types hierarchy instead of the GUID
+ # hierarchy).
+ #
+ oldPath = self.parent.getChild(record.recordType).fp.child(record.shortName)
+ if oldPath.exists():
+ log.msg("Moving calendar home from old location %r to new location %r." % (oldPath, childPath))
+ try:
+ oldPath.moveTo(childPath)
+ except (OSError, IOError), e:
+ log.err("Error moving calendar home %r: %s" % (oldPath, e))
+ raise HTTPError(StatusResponse(
+ responsecode.INTERNAL_SERVER_ERROR,
+ "Unable to move calendar home."
+ ))
+ else:
+ # NOTE: provisionDefaultCalendars() returns a deferred, which we are ignoring.
+ # The result being that the default calendars will be present at some point
+ # in the future, not necessarily right now, and we don't have a way to wait
+ # on that to finish.
+ child.provisionDefaultCalendars()
return child
def createSimilarFile(self, path):
@@ -674,7 +703,7 @@
http_MKCOL = NotificationsCollectionResource.http_MKCOL
http_MKCALENDAR = NotificationsCollectionResource.http_MKCALENDAR
-class NotificationFile(NotificationResource, DAVFile):
+class NotificationFile (NotificationResource, DAVFile):
def __init__(self, path, parent):
super(NotificationFile, self).__init__(path, principalCollections=parent.principalCollections())
5 Lexical conventions [lex]
5.4 Preprocessing tokens [lex.pptoken]
Each preprocessing token that is converted to a token shall have the lexical form of a keyword, an identifier, a literal, or an operator or punctuator.
A preprocessing token is the minimal lexical element of the language in translation phases 3 through 6.
The categories of preprocessing token are: header names, placeholder tokens produced by preprocessing import and module directives (import-keyword, module-keyword, and export-keyword), identifiers, preprocessing numbers, character literals (including user-defined character literals), string literals (including user-defined string literals), preprocessing operators and punctuators, and single non-whitespace characters that do not lexically match the other preprocessing token categories.
If a ' or a " character matches the last category, the behavior is undefined.
Preprocessing tokens can be separated by whitespace; this consists of comments ([lex.comment]), or whitespace characters (space, horizontal tab, new-line, vertical tab, and form-feed), or both.
As described in [cpp], in certain circumstances during translation phase 4, whitespace (or the absence thereof) serves as more than preprocessing token separation.
White space can appear within a preprocessing token only as part of a header name or between the quotation characters in a character literal or string literal.
If the input stream has been parsed into preprocessing tokens up to a given character:
• If the next character begins a sequence of characters that could be the prefix and initial double quote of a raw string literal, such as R", the next preprocessing token shall be a raw string literal.
Between the initial and final double quote characters of the raw string, any transformations performed in phases 1 and 2 (universal-character-names and line splicing) are reverted; this reversion shall apply before any d-char, r-char, or delimiting parenthesis is identified.
The raw string literal is defined as the shortest sequence of characters that matches the raw-string pattern
• Otherwise, if the next three characters are <:: and the subsequent character is neither : nor >, the < is treated as a preprocessing token by itself and not as the first character of the alternative token <:.
• Otherwise, the next preprocessing token is the longest sequence of characters that could constitute a preprocessing token, even if that would cause further lexical analysis to fail, except that a header-name ([lex.header]) is only formed after the include or import preprocessing token in a #include ([cpp.include]) or import ([cpp.import]) directive, or within a has-include-expression ([cpp.cond]).
[Example 1: #define R "x" const char* s = R"y"; // ill-formed raw string, not "x" "y" — end example]
The import-keyword is produced by processing an import directive ([cpp.import]), the module-keyword is produced by preprocessing a module directive ([cpp.module]), and the export-keyword is produced by preprocessing either of the previous two directives.
[Note 1:
None has any observable spelling.
— end note]
[Example 2:
The program fragment 0xe+foo is parsed as a preprocessing number token (one that is not a valid integer-literal or floating-point-literal token), even though a parse as three preprocessing tokens 0xe, +, and foo can produce a valid expression (for example, if foo is a macro defined as 1).
Similarly, the program fragment 1E1 is parsed as a preprocessing number (one that is a valid floating-point-literal token), whether or not E is a macro name.
— end example]
[Example 3:
The program fragment x+++++y is parsed as x ++ ++ + y, which, if x and y have integral types, violates a constraint on increment operators, even though the parse x ++ + ++ y can yield a correct expression.
— end example]
Common Core Standard 5.NF.A.2 Questions
Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers. For example, recognize an incorrect result 2/5 + 1/2 = 3/7, by observing that 3/7 < 1/2.
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
Andrea is [math]5 1/2[/math]feet tall. Ralene is [math]2/3[/math] foot taller than Andrea. How tall is Ralene?
1. [math]5 1/3[/math] feet
2. [math]5 3/5[/math] feet
3. [math]6 1/6[/math] feet
4. [math]6[/math] feet
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
Last year it rained [math]2 1/2[/math] inches in April and [math]1 1/3[/math] inches in May. What is the total rainfall for the two months?
1. [math]2 10/12[/math] inches
2. [math]3 2/5[/math] inches
3. [math]3 5/6[/math] inches
4. [math]3 1/2[/math] inches
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2, MP2, MP7
A bird watcher observes a bird flying at a height of [math]6/10[/math] kilometer. The bird dives down toward the ground then continues flying at a height [math]2/5[/math] kilometer below its original height.
The bird watcher states that the bird is now flying at a height of [math]4/5[/math] kilometer.
Which TWO statements are true?
1. Since [math]6/10[/math] minus [math]2/5[/math] is [math]4/5[/math], the height is reasonable.
2. Since [math]4/5[/math] is greater than [math]6/10[/math], the height is not reasonable.
3. The fraction [math]2/5[/math] is [math]4/10[/math], and [math]4/10[/math] equals [math]4/5[/math], so the height is reasonable.
4. The fraction [math]2/5[/math] is [math]4/10[/math], and [math]6/10[/math] minus [math]4/10[/math] does not equal [math]4/5[/math], so the height is not reasonable.
5. The fraction [math]2/5[/math] is [math]4/10[/math], and [math]6/10[/math] plus [math]4/10[/math] is less than [math]4/5[/math], so the height is not reasonable.
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2, MP4
This question is a part of a group with common instructions. View group »
Which expression can be used to find the sum of the fraction of students that take the bus and fraction of students that walk to school?
1. [math]5/15 + 1/15[/math]
2. [math]11/15 + 10/15[/math]
3. [math]5/18 + 1/18[/math]
4. [math]10/18 + 3/18[/math]
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2, MP1
This question is a part of a group with common instructions. View group »
What fraction of the pea plants sprouted during the first and second weeks of Arden's project?
1. [math]5/16[/math]
2. [math]10/11[/math]
3. [math]7/36[/math]
4. [math]37/63[/math]
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
In one year, Melinda grew [math]2 1/4[/math] inches to [math]48 3/8[/math] inches. What was her height at the beginning of the year?
1. [math]50 5/8 [/math] inches
2. [math]46 1/8[/math] inches
3. [math]43 1/2[/math] inches
4. [math]21 1/2[/math] inches
5. [math]108 27/32[/math] inches
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
Grade 5 Fractions and Ratios CCSS: 5.NF.A.2
Grade 5 Mixed Numbers CCSS: 5.NF.A.2
Barbara walked [math]6 3/4[/math] miles on Thursday. That morning she walked [math]4 5/6[/math] miles. How many miles did she walk that afternoon?
1. [math]11 7/12[/math] miles
2. [math]2 1/12[/math] miles
3. [math]1 11/12[/math] miles
4. [math]1 1/2[/math] miles
Courier missions and courier contracts
Hello, courier missions among other things are out-competing courier contracts that don't pay as much, so there is lots of stuff that doesn't move around. An NPC mission will also never try to scam you, but you run that risk when accepting courier contracts.
To make it worth your time to haul small packages you have to do it in bulk, so that means you have to verify many individual contracts from many different people to make sure they're not scams, among other things, before accepting them. It's a lot of work for little reward. Setting up courier contracts is also a tedious process.
Overall, this aspect of the game could do with an overhaul imo.
So what is your idea for that?
Some kind of delivery service that is easy to use and cheap for petty cargo?
Well, there's plenty of services available, like pushx or frog, among others.
I am aware of those but I think they charge like 1M ISK per jump which is fine some times
Waitaminnit, wasn't there a conversation before about cheap NPC couriers 'cause nobody wanted to use the actual hauling services?
This smells like a rehash of that
1 Like
This is what came up when searching for NPC Courier.
it is and @ISD_Traindriver closed some of his threads last night, im reporting this one for again opening a closed thread/topic
1 Like
As long as it’s about improving player courier jobs - and not the Npc Hauling again - I think that’s fine.
it’s NPC hauling… again…
1 Like
What happened?
Possibly miscommunication and a lack of openness toward issues and new ideas
So how do you propose to improve courier contracts between players?
Setting up a courier contract today is a pita tbh, and also there is a minimum fee of 10K ISK which is pricing out some stuff.
Ok and… What. Do…you…propose???
Do you want npcs to take the place of courier contracts?
A hybrid system seems optimal but the most important issue is to improve the UX when it comes to creating and accepting courier contracts if you want them to grow
Mildly interesting talk from Fanfest 2023
Some slides from the talk
1 Like
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
The Math Forum
Ask Dr. Math - Questions and Answers from our Archives
Probability in a Dice Game
Date: 01/04/2000 at 11:18:38
From: Leona
Subject: The Dice Game
Dear Dr. Math,
This problem is on probability and I have no clue as to how to do it.
Please help.
THE DICE GAME
Three Players A, B and C play a game with a single die. The rules of
the game are:
Player A ALWAYS goes first.
A rolls the die. If the die lands showing a 1 then A wins the game.
If A does not throw a 1 then B has a turn.
B rolls a die. If the die landing shows a 2 or 3 then B wins the
game. If B does not throw a 2 or a 3 then C has a turn.
C rolls the die. If the die lands showing a 4, 5 or 6 then C wins
the game. If C does not throw a 4 or a 5 or a 6 then A starts again.
This procedure continues until there is a winner.
Investigate any or all of:
1. The probabilities of each of A, B or C winning the game.
2. Who will be the most likely winner?
3. The most likely length of the game in terms of the number of rolls
of the die to produce the winner
Leona
Date: 01/13/2000 at 12:53:40
From: Doctor TWE
Subject: Re: The Dice Game
Hi Leona - thanks for writing to Dr. Math.
To show how to solve this problem, let's look at a similar but simpler
game. Suppose we have a two player game (A and B), where A rolls first
and wins on a 1 or 2. If A fails to win, B rolls and wins on a 3, 4, 5
or 6. If neither has won, the game continues with A rolling with the
same rules.
Let's investigate what happens one event at a time, and look at the
probabilities. First, A rolls the die. There is a 2/6 = 1/3
probability that A will win on the roll (a roll of a 1 or 2), and a
2/3 probability that the game will continue.
P(A wins) = 1/3
P(cont.) = 2/3
---------------
Total = 3/3 = 1 (that's good - the total should always = 1)
If A didn't win, then B rolls the die. There is a 2/3 chance that B
will win on that roll (a roll of a 3, 4, 5 or 6) - but this is only
2/3 of the 2/3 chance that B even gets to roll. There is also a 1/3 of
2/3 chance that the game will continue.
P(A wins) = 1/3 = 3/9 (I converted to 9ths for addition)
P(B wins) = 2/3 * 2/3 = 4/9
P(cont.) = 2/3 * 1/3 = 2/9
---------------------------
Total = 9/9 = 1
At this point, the game repeats itself, so let's look at what we have
so far. There is a 7/9 chance that the game will be over, and a 2/9
chance that the game will continue. A wins 1/3 of the time, which is
(1/3)/(7/9) of the combinations where the game has ended. Likewise, B
wins 4/9 of the time, which is (4/9)/(7/9) of the combinations where
the game has ended. In other words:
P(A wins|game ended) = (1/3) / (7/9) = 3/7
P(B wins|game ended) = (4/9) / (7/9) = 4/7
-------------------------------------------
Total = 7/7 = 1
Note that P(A wins|game ended) is read as "the probability that A wins
given that the game has ended". Since the game repeats itself at this
point, the odds of A and B winning don't change. We can think of this
as a "draw" and start a new game.
Another way to look at this is that A wins in round one 1/3 of the
time, plus A wins in round two 1/3 of the 2/9 of the time that neither
player won in round one, plus...
Similarly B wins in round one 2/3 of the 2/3 of the time that A didn't
win, plus B wins in round two 2/3 of the 2/3 of the 2/9 of the time
that neither player won in round one, plus...
P(A) = 1/3 + (1/3)*(2/9) + (1/3)*(2/9)^2 + (1/3)*(2/9)^3 + ...
P(B) = (2/3)*(2/3) + (2/3)*(2/3)*(2/9) + (2/3)*(2/3)*(2/9)^2 + ...
These are geometric series. The formula for the sum of a geometric
series is
S = k[1/(1-r)]
where k is the coefficient of each term and r is the ratio of the
terms. P(A) has a coefficient of 1/3 and a ratio of 2/9. P(B) has a
coefficient of (2/3)*(2/3) = 4/9 and a ratio of 2/9. So we get:
P(A) = (1/3)[1/(1 - 2/9)] = (1/3)(9/7) = 3/7
P(B) = (4/9)[1/(1 - 2/9)] = (4/9)(9/7) = 4/7
Which is what we got before. In this game, B is more likely to win.
Your last question was what is the most likely length of the game in
terms of the number of rolls of the dice to produce the winner. In our
game, there is a 1/3 chance that the game will go exactly one roll, a
4/9 chance that the game will go exactly two rolls, and a 2/9 chance
that it will go more than two rolls. Since the chance of it going 3,
4, 5, etc. rolls is non-zero (and can't be negative), we know that
each of these probabilities has to be less than 2/9. So the most
likely number of rolls for our game is 2. Similar logic will work for
your game.
I hope I have explained it well enough for you to apply to your game.
If not, or if you have any more questions, write back.
- Doctor TWE, The Math Forum
http://mathforum.org/dr.math/
Hvordan forenkle komplekse tall
Algebra innebærer ofte forenkling av uttrykk, men noen uttrykk er mer forvirrende å håndtere enn andre. Komplekse tall involverer mengden kjent somJeg, et "imaginært" nummer med eiendommenJeg= √−1. Hvis du bare trenger å uttrykke et komplekst tall, kan det virke skremmende, men det er ganske enkelt når du lærer deg de grunnleggende reglene.
TL; DR (for lang; Leste ikke)
Forenkle komplekse tall ved å følge reglene for algebra med komplekse tall.
Hva er et komplekst nummer?
Komplekse tall er definert av deres inkludering avJegbegrep, som er kvadratroten til minus en. I grunnleggende matematikk eksisterer ikke kvadratrøtter med negative tall egentlig, men de vises av og til i algebraproblemer. Den generelle formen for et komplekst tall viser strukturen:
z = a + bi
Hvorzmerker det komplekse nummeret,enrepresenterer et hvilket som helst tall (kalt "ekte" delen), ogbrepresenterer et annet tall (kalt den "imaginære" delen), som begge kan være positive eller negative. Så et eksempel på et komplekst tall er:
instagram story viewer
z = 2 −4i
Since all square roots of negative numbers can be represented by multiples of i, this is the form for all complex numbers. Technically, an ordinary number just describes a special case of a complex number where b = 0, so all numbers can be considered complex.
Basic Rules of Algebra With Complex Numbers
To add and subtract complex numbers, just add or subtract the real and imaginary parts separately. So for the complex numbers z = 2 − 4i and w = 3 + 5i, the sum is:
\begin{aligned} z + w &= (2 - 4i) + (3 + 5i) \\ &= (2 + 3) + (-4 + 5)i \\ &= 5 + 1i \\ &= 5 + i \end{aligned}
Subtracting the numbers works in the same way:
\begin{aligned} z - w &= (2 - 4i) - (3 + 5i) \\ &= (2 - 3) + (-4 - 5)i \\ &= -1 - 9i \end{aligned}
Multiplication is another straightforward operation with complex numbers, because it works like ordinary multiplication except that you have to remember that i^2 = −1. So to calculate 3i × −4i:
3i × −4i = −12i^2
But since i^2 = −1, then:
−12i^2 = −12 × −1 = 12
With full complex numbers (using z = 2 − 4i and w = 3 + 5i again), you multiply them in the same way as you would with ordinary numbers like (a + b)(c + d), using the "first, inner, outer, last" (FOIL) method, to give (a + b)(c + d) = ac + bc + ad + bd. All you need to remember is to simplify any occurrences of i^2. So for example:
\begin{aligned} z × w &= (2 - 4i)(3 + 5i) \\ &= (2 × 3) + (-4i × 3) + (2 × 5i) + (-4i × 5i) \\ &= 6 - 12i + 10i - 20i^2 \\ &= 6 - 2i + 20 \\ &= 26 - 2i \end{aligned}
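A quick way to check results like these is Python's built-in complex type. The only wrinkle is that Python writes the imaginary unit as j rather than i:
# Python's complex literals use j for the imaginary unit
z = 2 - 4j
w = 3 + 5j
print(z + w)  # (5+1j)
print(z - w)  # (-1-9j)
print(z * w)  # (26-2j)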
Dividing Complex Numbers
Dividing complex numbers involves multiplying the numerator and denominator of the fraction by the complex conjugate of the denominator. The complex conjugate just means the version of the complex number with the sign of the imaginary part reversed. So for z = 2 − 4i, the complex conjugate is z* = 2 + 4i, and for w = 3 + 5i, w* = 3 − 5i. For the problem:
\frac{z}{w} = \frac{2 - 4i}{3 + 5i}
The conjugate needed is w*. Multiply the numerator and denominator by this to give:
\frac{z}{w} = \frac{(2 - 4i)(3 - 5i)}{(3 + 5i)(3 - 5i)}
And then you work through it as in the previous section. The numerator gives:
\begin{aligned} (2 - 4i)(3 - 5i) &= 6 - 12i - 10i + 20i^2 \\ &= -14 - 22i \end{aligned}
And the denominator gives:
\begin{aligned} (3 + 5i)(3 - 5i) &= 9 + 15i - 15i - 25i^2 \\ &= 9 + 25 \\ &= 34 \end{aligned}
This means:
\begin{aligned} \frac{z}{w} &= \frac{-14 - 22i}{34} \\ &= \frac{-14}{34} - \frac{22i}{34} \\ &= \frac{-7}{17} - \frac{11i}{17} \end{aligned}
Simplifying Complex Numbers
Use the rules above as needed to simplify complex expressions. For example:
z = \frac{(4 + 2i) + (2 - i)}{(2 + 2i)(2 + i)}
This can be simplified by using the addition rule in the numerator, the multiplication rule in the denominator, and then completing the division. For the numerator:
(4 + 2i) + (2 - i) = 6 + i
For the denominator:
\begin{aligned} (2 + 2i)(2 + i) &= 4 + 4i + 2i + 2i^2 \\ &= (4 - 2) + 6i \\ &= 2 + 6i \end{aligned}
Putting these in place gives:
z = \frac{6 + i}{2 + 6i}
Multiplying both parts by the conjugate of the denominator leads to:
\begin{aligned} z &= \frac{(6 + i)(2 - 6i)}{(2 + 6i)(2 - 6i)} \\ &= \frac{12 + 2i - 36i - 6i^2}{4 + 12i - 12i - 36i^2} \\ &= \frac{18 - 34i}{40} \\ &= \frac{9 - 17i}{20} \\ &= \frac{9}{20} - \frac{17i}{20} \end{aligned}
So this means z simplifies as follows:
\begin{aligned} z &= \frac{(4 + 2i) + (2 - i)}{(2 + 2i)(2 + i)} \\ &= \frac{9}{20} - \frac{17i}{20} \end{aligned}
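The same built-in type confirms the division and the final simplification; the decimals it prints are just the floating-point versions of the exact fractions:
z = 2 - 4j
w = 3 + 5j
# Division example: exact answer is -7/17 - 11i/17 ≈ -0.41176 - 0.64706i
print(z / w)
# Full simplification example: exact answer is 9/20 - 17i/20, i.e. ≈ (0.45-0.85j)
result = ((4 + 2j) + (2 - 1j)) / ((2 + 2j) * (2 + 1j))
print(result)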
0
I am attempting to create code where the user can alter the A elements in the list continuously. If another letter in the list is replaced however, it should ask for new input from the user, and go back to the code where the user can alter A elements in the list. My code isn't working however. I am new to coding, so please don't explain things in a complicated manner. I need help!
Here is my code:
list = [['A', 'B', 'A'],['C', 'D', 'B']]
print m
f=input("Index for sublist ")
g=input("Index for sublist item ")
def errordetection ():
    print "Please select another sublist item to replace"
    f=input("Index for sublist ")
    g=input("Index for sublist item ")
while (m[f][g]!='A'):
    errordetection()
if (m[f][g]=='A'):
    m[f][g]='C'
print list
Edited by James0l9l: n/a
0
Even changing it to this still does not work:
lst = [['A', 'B', 'A'],['C', 'D', 'B']]
print lst
f=input("Index for sublist ")
g=input("Index for sublist item ")
def errordetection ():
    print "Please select another sublist item to replace"
    f=input("Index for sublist ")
    g=input("Index for sublist item ")
while (m[f][g]!='A'):
    errordetection()
if (m[f][g]=='A'):
    m[f][g]='C'
print list
0
You should change 'lst' to 'm' because your lines 11 and 14 are still referring to 'm'.
0
Ug. How did you manage to get rid of the line numbers and code coloring? Use the (CODE) button next time, please.
There are a lot of problems, but lets start with just three:
1. What you show can't even be running: It would stop with a problem on the second line: print m. So I expect you didn't paste all your code...
2. You should be using raw_input(thePrompt) not input(thePrompt)
3. You should not use a list named 'list': It is confusing to both readers and sometimes to the Python interpreter ('list' is a built in type and a conversion function). Name it 'myList' or something more descriptive.
The general rule for new programmers (and wise old ones, too) is to write a very small program that works, then gradually modify it so it does what you want. Start with this program
myList = [['A', 'B', 'A'],['C', 'D', 'B']]
print(myList)
Then add a loop:
myList = [['A', 'B', 'A'],['C', 'D', 'B']]
while True:
    user_input = raw_input("Type 'Q' to quit ")
    if user_input.upper().startswith('Q'):
        break
    print(myList)
And continue making small changes, testing that your program still works, and improving until you are satisfied (or the deadline arrives)
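For instance (just a sketch of one possible next mutation, not the only way to do it), the step after that might ask for the two indexes inside the loop and only replace entries that are 'A':
myList = [['A', 'B', 'A'], ['C', 'D', 'B']]
while True:
    user_input = raw_input("Type 'Q' to quit, anything else to continue ")
    if user_input.upper().startswith('Q'):
        break
    # raw_input always returns a string, so convert the indexes with int()
    f = int(raw_input("Index for sublist "))
    g = int(raw_input("Index for sublist item "))
    if myList[f][g] == 'A':
        myList[f][g] = 'C'
    else:
        print("Please select another sublist item to replace")
    print(myList)
Handling out-of-range indexes (try/except IndexError) would be a sensible mutation after that.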
Edited by mike_2000_17: Fixed formatting
Votes + Comments
No, bad words given, good advice.
Please be kinder in your responses.
0
The code sample written above, however, doesn't meet the intention of what I am trying to program. I've changed my list name to 'm' and still experience issues.
m = [['A', 'B', 'A'],['C', 'D', 'B']]
print m
f=input("Index for sublist ")
g=input("Index for sublist item ")
def errordetection ():
    print "Please select another sublist item to replace"
    f=input("Index for sublist ")
    g=input("Index for sublist item ")
while (m[f][g]!='A'):
    errordetection()
if (m[f][g]=='A'):
    m[f][g]='C'
print list
Edited by James0l9l: n/a
0
That is because of the strange last line and the missing try/except handling. Start small and add functionality one step at a time, and test. It is also good to prepare testing material in advance and run the tests after each change (unittest).
Print the values in your program for debugging!
Edited by pyTony: n/a
0
Of course the code sample doesn't do what you want. But it runs and does what you would expect from reading it. Your code, on the other hand also doesn't do what you want, and it isn't obvious what it does from reading it. I'm suggesting that you start with something that
• Is simple enough to understand easily
• Doesn't do what you want
• Can be "mutated" toward what you want
And then mutate it until it does do what you want.
Does your code run? If not, what is the exact error you get (please cut and paste it)... but before you do that, look at it closely, read what it is saying and try to figure out what it means by that. This is an important skill to learn, and you have to actually think hard sometimes to get it.
If it does run, then what is the problem with how it runs? Say what you wanted to see and what you did see. And think hard about what could be causing it to act that way. Put a print statement somewhere (or several places) so when the program passes through that spot in your code you can see it happening on the screen.
Programming is (among other things) making your understanding of the problem so precise that you can describe how to solve it to an idiot savant that always does exactly what you tell it to do. (or complains that it doesn't understand what you tried to say: A syntax error). You need to be able to state things clearly to yourself at least. If you can do that, it is not so hard to make a clear statement to other people.
I don't understand what your stated problem is: The English language description was confusing to me. I don't understand what you are 'complaining' about your code: Your 'complaint' is not specific enough.
Tony and I have both made the suggestion that you start small and grow the program. Please think about why that seems the wrong thing to you... when two experienced programmers are both saying it is the right thing.
I pointed to three issues (and Tony mentioned two of the same). You have changed your code to partly (there is a remaining 'list' on the last line) fix two of the issues... with one change: Good! But you need to address the rest, too.
Edited by griswolf: n/a
0
I suggest you leave griswolf's code for later and do those three exercises on functions that you have not had time to do (yourself).
Then complete this before you start the current assignment:
for number in ('a','b','1','','234234', '-234','4','-1'):
    if check(number):
        if in_correct_range(int(number), low, high):
            print('Good! You gave right size of number')
        else:
            print('The number you gave was outside the limits')
    else:
        print('That was not number')
Two functions are missing here, which you should define, plus fixed values for the range check. Write those. After that, change this loop to a while loop that only exits when correct input is given. Then put everything in a function that returns that correct value as an integer.
Edited by pyTony: n/a
Title:
Telecommunications assignment system
Kind Code:
A1
Abstract:
A telecommunications assignment system is provided. The system typically includes assignment logic, collection logic and graphical user interface logic. The assignment logic assigns telecommunications equipment and ports to network elements. The collection logic receives the assignments from the assignment logic and stores the assignments in a database. The graphical user interface logic retrieves assignments from the database, and displays the assignments to a user in a graphical format. The graphical format includes displaying the telecommunications equipment substantially similar to the physical construction of the telecommunications equipment. Methods and other systems are also provided.
Inventors:
Lyles, Lawrence E. (Snellville, GA, US)
Kriebel, Andrew M. (Acworth, GA, US)
Barnhurst, David (Bethlehem, GA, US)
Dimaggio, Greg (Powder Springs, GA, US)
Phillips, Stephen (Alpharetta, GA, US)
Curtis, Jerrard (Decatur, GA, US)
Worley, Aimee (Marietta, GA, US)
Yen, Vincent (Duluth, GA, US)
Application Number:
10/729403
Publication Date:
06/09/2005
Filing Date:
12/05/2003
Assignee:
LYLES LAWRENCE E.
KRIEBEL ANDREW M.
BARNHURST DAVID
DIMAGGIO GREG
PHILLIPS STEPHEN
CURTIS JERRARD
WORLEY AIMEE
YEN VINCENT
Primary Class:
International Classes:
G06F15/16; H04L29/08; (IPC1-7): G06F15/16
Primary Examiner:
MACILWINEN, JOHN MOORE JAIN
Attorney, Agent or Firm:
AT&T Legal Department - HFZ (ATTN. Patent Docketing One AT&T Way Room 2A-207, Bedminster, NJ, 07921, US)
Claims:
1. A telecommunications assignment system, comprising: assignment logic operable to assign a plurality of telecommunications equipment and ports to a plurality of network elements; collection logic operable to receive assignments from the assignment logic and store the assignments in a database; graphical user interface logic operable to retrieve assignments from the database, and to display the assignments to a user in a graphical format which includes displaying the telecommunications equipment in a graphical format substantially similar to a physical construction of the telecommunications equipment.
2. The system of claim 1, wherein the graphical user interface logic is further operable to display the plurality of network elements in a graphical format substantially similar to a physical construction of the network element.
3. The system of claim 1, wherein the graphical user interface logic is operable to provide the graphical format to a remote client on a computer associated with the user over a network.
4. The system of claim 3, wherein the remote client is a web browser operable to view any of a plurality of web formats.
5. The system of claim 3, wherein the remote client is a telecommunications assignment system application.
6. The system of claim 1, wherein the assignment logic is operable to remove assignments, add assignments, remove cards, and add cards on the telecommunications equipment.
7. The system of claim 6, wherein the assignment logic is operable to track cards installed into telecommunications equipment.
8. The system of claim 1, wherein the database is a centralized database which is further operable to store telecommunication equipment and network element graphical format configurations.
9. A method of assigning telecommunications equipment, comprising: providing a graphical user interface to a user, the graphical user interface comprising a plurality of telecommunications equipment and network elements which are displayed to the user in a format substantially similar to the physical construction of the telecommunications equipment, the graphical user interface being further operable to allow the user to make telecommunication equipment assignments; receiving telecommunications equipment assignments from the user via the graphical user interface; and storing the telecommunications equipment assignments received from the user in a database for later retrieval.
10. The method of claim 9, wherein the format for the telecommunications equipment and network elements display are stored in the database with the telecommunications equipment assignments.
11. The method of claim 9, further comprising: providing the graphical user interface to the user over a network to a remote client associated with the user.
12. The method of claim 11, further comprising using a web browser as the remote client, the web browser being operable to view any of a plurality of web formats.
13. The method of claim 11, further comprising using a telecommunications assignment application as the remote client.
14. The method of claim 9, wherein the graphical user interface is operable to allow the user to remove cards, add cards, remove assignments, and add assignments on the telecommunications equipment.
15. The method of claim 14, wherein graphical user interface is further operable to allow the user to change plug-in cards installed into the telecommunications equipment.
16. The method of claim 9, wherein the database is a centralized database which is operable to provide assignment information and display information to the user.
17. A computer readable medium having a program for assigning telecommunications equipment, the program operable to perform the steps of: providing a graphical user interface to a user, the graphical user interface comprising a plurality of telecommunications equipment and network elements which are displayed to the user in a format substantially similar to the physical construction of the telecommunications equipment, the graphical user interface being further operable to allow the user to make telecommunication equipment assignments; receiving telecommunications equipment assignments from the user via the graphical user interface; and storing the telecommunications equipment assignments received from the user in a database for later retrieval.
18. The program of claim 17, wherein the format for the telecommunications equipment and network elements display are stored in the database with the telecommunications equipment assignments.
19. The program of claim 17, further comprising: providing the graphical user interface to the user over a network to a remote client associated with the user.
20. The program of claim 19, further comprising using a web browser as the remote client, the web browser being operable to view any of a plurality of web formats.
21. The program of claim 19, further comprising using a telecommunications assignment application as the remote client.
22. The program of claim 17, wherein the graphical user interface is operable to allow the user to remove ports, remove assignments, and create alarms on the telecommunications equipment.
23. The program of claim 22, wherein graphical user interface is further operable to allow the user to change plug-in cards installed into the telecommunications equipment.
24. The program of claim 17, wherein the database is a centralized database which is operable to provide assignment information and display information to the user.
Description:
TECHNICAL FIELD
The present disclosure is generally related to telecommunications and more particularly to assigning equipment in a telecom network.
DESCRIPTION OF THE RELATED ART
The popularity of the internet and the proliferation of cellular phones has lead to an ever-increasing demand upon telecommunications networks. Most telecommunications carriers now offer networking solutions over their networks that range from business-type T1 access to consumer-type digital subscriber line and integrated services digital network (ISDN)-access to the internet. These solutions consume a seemingly ever-increasing amount of bandwidth on carrier networks. Moreover, cellular phones, while wireless from the user's perspective, also exhaust bandwidth on the wired networks of the carriers.
Such service typically necessitates an ever increasing complexity in the carrier's networks. However, this increasingly complex network should be tracked in order to accurately plan for future needs of customers and corporate expenditures on the network. Moreover, due to federal regulation, customer problems (trouble tickets) should be solved within a specified period of time. For this reason, telecommunications equipment typically includes telemetry equipment that recognizes events and signals alarms that are sent to a network reliability center (NRC). At the NRC there is typically a network monitoring and analysis (NMA) database which senses the alarms and creates a trouble ticket related to the alarm. However, the telemetry equipment used to collect alarms typically adds to the complexity of the network. It is difficult to accurately track the telecommunications network due to the complexity that exists within the network. Therefore, there is a need for systems and methods that address these and/or other perceived shortcomings of prior systems.
SUMMARY OF THE DISCLOSURE
One preferred embodiment, among others, of the present disclosure provides for a telecommunications assignment system. A representative system, among others, includes assignment logic, collection logic and graphical user interface logic. The assignment logic is employed by a user to assign telecommunications equipment and ports to network elements. The collection logic receives assignments from the assignment logic and stores the assignments in a database. The graphical user interface logic retrieves assignments from the database, and displays the assignments to a user in a graphical format. The graphical format includes displaying the telecommunications equipment substantially similar to the physical construction of the telecommunications equipment.
Another preferred embodiment, among others, of the present disclosure provides methods for assigning telecommunications equipment. A representative method, among others, can include the following steps: providing a graphical user interface to a user, the graphical user interface comprising a plurality of telecommunications equipment and network elements which are displayed to the user in a format substantially similar to the physical construction of the telecommunications equipment, the graphical user interface being further operable to allow the user to make telecommunication equipment assignments; receiving telecommunications equipment assignments from the user via the graphical user interface; and, storing the telecommunications equipment assignments received from the user in a database for later retrieval.
Other systems, methods, and/or computer programs products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional system, methods, and/or computer program products be included within this description, be within the scope of the present disclosure, and be protected by the accompanying claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
FIG. 1A is a block diagram illustrating a telecommunications system including an embodiment, among others, of the present disclosure.
FIG. 1B is a block diagram illustrating the architecture of the telecommunications assignment system of FIG. 1A.
FIG. 1C is a block diagram illustrating the architecture of the user computer of FIG. 1A.
FIG. 2 is a screen shot of an embodiment, among others, of the telecommunications assignment system opening screen representation.
FIG. 3 is a flowchart illustrating the choices presented to the user upon the user selecting any of the menu representations shown in FIG. 2.
FIG. 4 is a screen shot of an embodiment, among others, illustrating a network element profile of the telecommunications assignment system of FIG. 1A.
FIG. 5 is a screen shot of an embodiment, among others, illustrating a “Site Information Screen” of the telecommunications assignment system of FIG. 1A.
FIG. 6 is a screen shot of an embodiment, among others, illustrating an “AI 180 Switch Configuration” screen of the telecommunications assignment system of FIG. 1A.
FIG. 7 is a screen shot of an embodiment, among others, illustrating an “AI 180 Switch Shelf” screen of the telecommunications assignment system of FIG. 1A.
FIG. 8 is a screen shot of an embodiment, among others, illustrating an add card screen of the telecommunications assignment system of FIG. 1A.
FIG. 9 is a screen shot of an embodiment, among others, illustrating a “COWAN” screen of the telecommunications assignment system of FIG. 1A.
FIG. 10 is a screen shot of an embodiment, among others, illustrating a “Router Configuration” screen of the telecommunications assignment system of FIG. 1A.
FIG. 11 is a screen shot of an embodiment, among others, illustrating a “Hub Info” screen of the telecommunications assignment system of FIG. 1A.
FIG. 12 is a screen shot of an embodiment, among others, illustrating a “Network Element Telemetry Assignment” screen of the telecommunications assignment system of FIG. 1A.
FIG. 13 is a screen shot of an embodiment, among others, illustrating a “Router Assignment” screen of the telecommunications assignment system of FIG. 1A.
FIG. 14 is a screen shot of an embodiment, among others, illustrating a “Hub Assignment” screen of the telecommunications assignment system of FIG. 1A.
FIG. 15 is a screen shot of an embodiment, among others, illustrating a “Patch Panel Assignment” screen of the telecommunications assignment system of FIG. 1A.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The disclosure now will be described more fully with reference to the accompanying drawings. The disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are intended to convey the scope of the disclosure to those skilled in the art. Furthermore, all “examples” given herein are intended to be non-limiting.
Referring to FIG. 1A, shown is a block diagram illustrating a system in which an embodiment, among others, of the present disclosure operates. The telecommunications assignment system 100 typically includes an application and a database at a data center, as shown in FIG. 1B. The database stores information regarding assignments for the numerous network elements such as a loop equipment inventory module (LEIM) 130, routers and hubs 135, or switches 140 that comprise the telecommunications network (not shown). The telecommunications assignment system 100 preferably communicates with the network elements 130-140 through a business process server 110 which operates to translate to the various protocols that are used by the different network elements 130-140 for the telecommunications assignment system 100.
A remote user 115 can typically access the telecommunications assignment system 100 using a computer 120 through a network 125. The network 125 is typically a secured intranet to prevent unapproved users from viewing or changing network assignments and settings. The computer 120 can typically include an application which would allow the user to view the telecommunications assignment system 100, as shown in FIG. 1C. In an embodiment, among others, of the disclosure, the application on computer 120 is a client application, and the telecommunications assignment system 100 includes a server application. The client application is operable to communicate requests for data and requests to modify data stored in the database to the server application. The server application is operable to query the database and provide data to the client and modify data in the database in response to requests received from the client. One such example would include configuring an Access database platform, available from Microsoft Corp., of Redmond, Wash., on the computer 120 to parse raw data received from the telecommunications assignment system 100 database.
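Purely for illustration (this sketch is not the patented implementation; the table layout, column names, and site code below are invented), the request/query pattern described above can be pictured as client logic handing a small request structure to server logic that queries an assignments database:
import sqlite3

def handle_request(conn, request):
    # Server-side logic: query or update the assignments table
    # according to the action named in the client's request.
    if request["action"] == "get_assignments":
        cur = conn.execute(
            "SELECT equipment, port, network_element FROM assignments WHERE site = ?",
            (request["site"],))
        return cur.fetchall()
    if request["action"] == "add_assignment":
        conn.execute(
            "INSERT INTO assignments (equipment, port, network_element, site) VALUES (?, ?, ?, ?)",
            (request["equipment"], request["port"], request["network_element"], request["site"]))
        conn.commit()
        return "OK"

# Client-side logic: build a request and submit it to the server logic.
conn = sqlite3.connect("assignments.db")
conn.execute("CREATE TABLE IF NOT EXISTS assignments "
             "(equipment TEXT, port TEXT, network_element TEXT, site TEXT)")
print(handle_request(conn, {"action": "add_assignment", "equipment": "AI 180 switch",
                            "port": "3", "network_element": "DDM2000", "site": "ATLNGA01"}))
print(handle_request(conn, {"action": "get_assignments", "site": "ATLNGA01"}))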
However, one skilled in the art should also recognize that the client could alternatively run an application such as Citrix remote access client/server applications, available from Citrix of Fort Lauderdale, Fla., which allows the client computer 120 to remotely run applications and receive a graphical interface from the server. The graphical interface typically represents an emulation of a screen shot, and the client sends control signals to the server based upon input devices at the client computer 120. Thus, all of the database processing is performed at the server, and the client computer 120 is merely receiving the graphical interface information from the server which represents the applications running on the server. In such cases, the telecommunications assignment system 100 would run the application to be provided to the user.
Moreover, one skilled in the art should recognize that a web client (browser) could be used at the computer 120, and the telecommunications assignment system 100 could be a web server operable to send web pages to the web client via hypertext transfer protocol. One skilled in the art should appreciate that web pages could be coded in hypertext markup language (HTML), extensible markup language (XML), java or any other suitable transfer protocol. It should also be apparent to one skilled in the art that there exist numerous variations on the transfer protocol between the user 115 and the database, each of which are intended to be included within the present disclosure.
As mentioned above, the telecommunications equipment assignment system 100 is typically connected to a number of telecommunications devices 130, 135, 140 through a business process server 110. The telecommunication devices 130, 135, 140 are typically alarm collectors, which are operable to communicate alarms to a network monitoring and analysis database 145. The business process server 110 is typically operable to receive internet protocol (IP) packets from the telecommunications assignment system 100, and translate the packets to X.25 for alarm collectors coupled to the network. In alternative embodiments, the BPS 110 is further operable to translate the packets to asynchronous format for alarm collectors coupled to the network. The business process server can also receive, in various embodiments, X.25, asynchronous, IP or discrete alarm information from the alarm collectors and translate the information for the telecommunications assignment system 100. As is known in the art, X.25 is a communication protocol widely used in telecom networks, and typically occurs over a datakit network (not shown).
In some embodiments, among others, the alarm collectors 130, 135, 140 are in communication with a central office wide area network (COWAN) as well. The COWAN allows the alarm collectors to communicate alarms to a network monitoring and analysis (NMA) database 145 at a network reliability center (NRC) 150. The NMA database 145 further comprises a synchronous optical network (SONET) carrier identification (SCID) database 155 and a discrete database 160. The SCID database (DBSCID) 155 is typically operable to store information regarding alarms on a circuit identified by a SCID. The discrete database 160 is typically operable to store information regarding alarms collected by a discrete alarm collector from a network element. A discrete alarm typically has two states, on and off. The discrete alarm typically warns the NRC 150 that something is wrong with a network element, but typically does not give any indication of what the problem involves. For example, among others, a discrete alarm could indicate that a network element is down, but does not give an indication whether the problem is physical or logical. One skilled in the art should furthermore recognize that there are typically more than one NRC and NMA associated with a carrier network. Furthermore, one skilled in the art should recognize that the NMA database and the NRC typically create a trouble ticket upon receiving an alarm.
Referring now to FIG. 1B, shown is block diagram of the telecommunications assignment system 100 including an embodiment, among others, of the present disclosure. Generally, in terms of hardware architecture, as shown in FIG. 1B, the system 100 includes a processor 170, memory 172, and one or more input and/or output (I/O) devices 174 (or peripherals) that are communicatively coupled via a local interface 176. The local interface 176 can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface 176 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
The processor 170 is a hardware device for executing software, particularly that stored in memory 172. The processor 170 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the system 100, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, or generally any device for executing software instructions.
The memory 172 includes any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory 172 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 172 has a distributed architecture, in some implementations, where various components are situated remote from one another, but can be accessed by the processor 170.
The software in memory 172 includes one or more separate programs 178, 180, each of which comprises an ordered listing of executable instructions for implementing logical functions. In the example of FIG. 1B, the software in the memory 172 includes a telecommunications assignment application 180 and a suitable operating system (O/S) 178. A nonexhaustive list of examples of suitable commercially available operating systems 178 is as follows: (a) a Windows operating system available from Microsoft Corporation; (b) a Netware operating system available from Novell, Inc.; (c) a Macintosh operating system available from Apple Computer, Inc.; (d) a UNIX operating system, which is available for purchase from many vendors, such as the Hewlett-Packard Company, Sun Microsystems, Inc., and AT&T Corporation; (e) a LINUX operating system, which is freeware that is readily available on the Internet; or (f) an appliance-based operating system, such as that implemented in handheld computers or personal data assistants (PDAs) (e.g., PalmOS available from Palm Computing, Inc., and Windows CE available from Microsoft Corporation). The operating system 178 essentially controls the execution of other computer programs, such as the telecommunications assignment application 180, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services.
The telecommunications assignment application 180 includes, in various embodiments, source programs, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When a source program, then the program needs to be translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the memory 172, so as to operate properly in connection with the O/S 178. Furthermore, the telecommunications assignment application 180 is preferably written as (a) an object oriented programming language, which has classes of data and methods, or (b) a procedure programming language, which has routines, subroutines, and/or functions, for example but not limited to, C, C++, Pascal, Basic, Fortran, Cobol, Perl, Java, and Ada. Alternatively, the telecommunications assignment application 180 can be written using hyper-text markup language. The telecommunications assignment application 180 would then provide the screens to the user at the remote computer 120 via the network.
The I/O devices 174 preferably include input devices, for example but not limited to, a keyboard, mouse, scanner, microphone, etc. Furthermore, the I/O devices 174 preferably include output devices, for example but not limited to, a printer, display, etc. Finally, the I/O devices 174 further preferably include devices that communicate both inputs and outputs, for instance but not limited to, a modulator/demodulator (modem; for accessing another device, system, or network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc.
If the telecommunications assignment system 100 is a PC, workstation, or the like, the software in the memory 172 may further include a basic input output system (BIOS) (omitted for simplicity). The BIOS is a set of essential software routines that initialize and test hardware at startup, start the O/S 178, and support the transfer of data among the hardware devices. The BIOS is stored in ROM so that the BIOS can be executed when the system 100 is activated.
When the telecommunications assignment system 100 is in operation, the processor 170 is configured to execute software stored within the memory 172, to communicate data to and from the memory 172, and to generally control operations of the system 100 pursuant to the software. The telecommunications assignment application 180 and the O/S 178, in whole or in part, but typically the latter, are read by the processor 170, perhaps buffered within the processor 170, and then executed.
When the telecommunications assignment application 180 is implemented in software, as is shown in FIG. 1B, it should be noted that the telecommunications assignment application 180 can be stored on any computer readable medium for use by or in connection with any computer related system or method. Moreover, the telecommunications assignment application 180 can interact with a storage device 182 to store and retrieve information used in conjunction with the application 180. In the context of this document, a computer readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method. The telecommunications assignment application 180 can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
As described above, the telecommunications assignment system 100 is configured such that the application 180 provides a plurality of page representations to the user. These page representations are typically navigable via a telecommunications assignment client application running on the client computer 120, as described in FIG. 1C. In alternative embodiments, among others, the client application includes the page representations and merely receives data from the telecommunications assignment system 100.
Referring now to FIG. 1C, shown is a generic block diagram of the client computer 120 of FIG. 1A, including an embodiment, among others, of the present disclosure. Similarly to FIG. 1B, in terms of hardware architecture, as shown in FIG. 1C, the computer 120 includes a processor 184, memory 186, and one or more input and/or output (I/O) devices 188 (or peripherals) that are communicatively coupled via a local interface 190. Each of the elements in the computer are similar to those as described with respect to FIG. 1B, and as known in the art.
The software in memory 186 typically includes one or more separate programs 192, 194, each of which comprises an ordered listing of executable instructions for implementing logical functions. In the example of FIG. 1C, the software in the memory 186 includes a telecommunications assignment client application 194 and a suitable operating system (O/S) 192. Examples of suitable operating systems are the same as those described with respect to FIG. 1B.
The telecommunication assignment client application 194 is a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When a source program, then the program needs to be translated via a compiler, assembler, interpreter, or the like, which may or may not be included within the memory 186, so as to operate properly in connection with the O/S 192. Furthermore, the telecommunication assignment client application 194 is, in various implementations, written as (a) an object oriented programming language, which has classes of data and methods, or (b) a procedure programming language, which has routines, subroutines, and/or functions, for example but not limited to, TCL/TK with Expect, C, C++, Pascal, Basic, Fortran, Cobol, Perl, Java, and Ada.
When the telecommunication assignment client application 194 is implemented in software, as is shown in FIG. 1C, it should be noted that the telecommunication assignment client application 194 in some implementations, among others, is stored on any computer readable medium for use by or in connection with any computer related system or method. In the context of this document, a computer readable medium is an electronic, magnetic, optical, or other physical device or means that contains or stores a computer program for use by or in connection with a computer related system or method. The telecommunication assignment client application 194 is typically embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that fetches the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” is any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium is, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium in some embodiments, among others, is paper or another suitable medium upon which the program is printed, as the program is operable to be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The telecommunications assignment client application 194 is typically operable to receive information from the telecommunications assignment application 180 and render it on the remote computer 120 for a user. Moreover, the telecommunications assignment client application 194 is operable to receive instructions from the user and relay the instructions to the telecommunications assignment application 180.
The telecommunications assignment application 180 typically operates to assign network elements 130-140 to both track alarms on the network and to assign telemetry equipment to the various network elements on the network. Moreover, the telecommunications assignment application 180 operates to provide a plurality of screen representations to a remote computer 120. Referring now to FIG. 2, shown is an embodiment, among others, of a sample screen shot of a telecommunications assignment system opening screen representation 200 as rendered by client computer 120. It should be recognized that the application 180 is preferably password protected to prevent unauthorized users from changing capacity and/or assignments.
The opening screen representation 200 typically includes a plurality of menu representations 205-240. These menu representations can include, among others: a “File” menu representation 205; an “Edit” menu representation 210; an “Add Capacity” menu representation 215; an “Assign” menu representation 220; a “Maintenance” menu representation 225; a “Reports” menu representation 230; a “Transmission Sketch” menu representation 235; and, a “Help” menu representation 240. As one skilled in the art should recognize, each of these menu representations can be selected by moving the mouse cursor representation over the menu representation and pressing the select button on the mouse.
Referring now to FIG. 3, shown is a flowchart illustrating the choices presented to the user upon the user selecting any of the menu representations listed above. Upon selecting the “File” menu representation 205, a pulldown menu representation will appear with “File” selection representations 310. The “File” selection representation 310 can include, among others: a “Load” selection, a “Run” selection and an “Edit” selection. The “Load” selection enables a user to manually load a screen. The “Run” selection can typically enable a user to run a screen that was previously loaded. The “Edit” selection currently has no functionality, but could be used in some embodiments, among others, to edit a previously loaded screen.
Similarly, upon selecting the “Edit” menu representation 210, a pulldown menu representation will appear with “Edit” selection representations 315. The “Edit” selection representations 315 can include, among others: an “Undo” selection, a “Cut” selection, a “Copy” selection, a “Paste” selection, and a “Clear” selection. The “Undo” selection typically undoes the last action performed by the user. The “Cut” selection typically allows the user to “cut” a selected/highlighted piece of text from the screen representation. The “Copy” selection allows the user to store in memory a selected/highlighted piece of text from the screen representation. The “Paste” selection allows the user to “paste” a “copied” portion in memory into the space the cursor representation currently occupies. The “Clear” selection allows the user to clear the highlighted text from the screen.
Similarly, upon selecting the “Add Capacity” menu representation 215, a pulldown menu representation will appear with “Assign” selection representations 320. The “Add Capacity” selection representations 320 can include, among others: an “AI 130 Switch” selection, an “AI 180 Switch” selection, a “COWAN” selection, a “Dantel (Auto)” selection, a “Dantel (Manual)” selection, a “Dantel (from Assignments)” selection, a “Misc. E2A/Serial” selection, and a “Patch Panel” selection. Each of the selections represent alarm collection devices that can be added to the telemetry system to increase capacity.
Similarly, upon selecting the “Assign” menu representation 220, a pulldown menu representation will appear with “Assign” selection representations 325. The “Assign” selection representations 325 can include, among others: a “CCM” selection and a “OSPE” selection. “CCM” stands for circuit capacity manager, and allows the user to add telemetry capacity to the system. “OSPE” stands for outside plant engineering and allows the user to add capacity for outside plant engineering network elements.
Similarly, upon selecting the “Maintenance” menu representation 225, a pulldown menu representation will appear with “Maintenance” selection representations 330. The “Maintenance” selection representations 330 can include, among others: an “AI Server Queue Monitor” selection, an “Alarm Wiring Figure” selection, a “Router Queue Monitor” selection, a “Network Element Type” selection, a “Contacts” selection, a “5939 Form” selection, a “Site Profile, NE Profile” selection, a “LEIM System Type” selection, a “Provision AI Ports” selection, a “Router Password Maintenance” selection, a “Circuit Reservation” selection, a “View/Delete NE Info” selection, a “Valid OSS” selection, a “Validate IP Range” selection, a “Transmission Sketch” selection, a “SCID Maintenance” selection, a “Structure” selection, and a “Change NE Info” selection. The “AI Server Queue Monitor” selection typically allows users to view and update the status of AI switch provisioning. The “Alarm Wiring Figure” selection can typically enable the telecommunications assignment system project manager to add, delete, or change records that identify valid alarm wiring figures. The table can include site profile information like location type, power, standby engine, feed, and structure type. The site profile along with the multiplexer type typically indicate what alarm wiring figure to use. The “Router Queue Monitor” selection typically enables users to view and update the status of router and hub provisioning. The “Network Element Type” selection can typically enable a telecommunications assignment system project manager to add, delete, or change records that identify valid vendor and network element names so that they can be described on a “Network Element Profile” screen representation. The “Contacts” selection typically retrieve a “Contacts” screen representation which can provide users with a mechanism to create new contacts or edit existing contact information for a central office/wire center. The telecommunications assignment system 100 uses this information when creating 5939 forms. The “5939 Form” selection allows users to create 5939 Circuit Request Forms which can typically be submitted to a corporate-communication group (Corp-Com). The “Site Profile” representation provides a mechanism to create new central office, special, remote terminal, and customer premise site profiles. Users can modify or delete existing site profile records. Central office records contain telemetry related information like node type (Hub, End, or Remote), the default AI Switch location, and the serving datakit hub location. The telecommunications assignment system 100 uses this information when making telemetry assignments. The “NE Profile” representation typically enables the telecommunications assignment system project manager to add, delete, or change records that describe the telemetry methods available for a particular network element. The telecommunications assignment system 100 uses this information to make assignments and produce 5939 forms where required. The “LEIM System Type” selection enables the telecommunications assignment system project manager to add, delete, or change records that identify valid relationships between LEIM and BTAS network element names. The “Provision AI Ports” selection allows user to specify cards, or ports on a card, that they wish to have the telecommunications assignment system 100 re-provision. 
The “Router Password Maintenance” allows authorized users to update passwords and names associated with COWAN routers and hubs. The screen also provides the ability to have the telecommunications assignment system 100 update the passwords automatically when desired. The “Circuit Reservation” selection allows users to add or delete circuit IDs or a range of circuit IDs. The selection can also displays the status of each circuit as returned by search criteria. The “View/Delete NE Info” selection can allow users to view all telemetry assignment associated with a network element. It can also allow the user to delete a NE without first having to remove all assignments. The “Valid OSS” selection provides the telecommunications assignment system project manager with the capability to add, delete, or change records that identify valid OSS's than can be associated with an LCN that is input for a network element in a “Network Element Profile” screen representation. The “Validate IP Range” selection enables the telecommunications assignment system project manager to add, delete, or change the range of IP addresses that are associated with a particular equipment class. The telecommunications assignment system 100 uses this information when determining the list of available IP addresses for a NE assignment. The “Transmission Sketch” selection can typically be designed to enable users to create and plan network element rings. Users can view a detailed transmission drawing of the ring and manipulate network elements on the ring. A “Transmission Sketch” typically incorporates information from the telecommunications assignment system 100 and a digital service cross-connect/fiber optic cross-connect (DSX/FOX) system when creating the ring drawings. The “SCID Maintenance” selection allows users to manage the nodes that are associated with an SCID. The “Structure” selection allows the telecommunications assignment system project manager to add, delete, or change structure types. This information can then be used to populate the Site Profile screen. The “Change NE Info” selection allows user to convert an NE to a different NE Class/Vendor/NE Type/Generic combination of the same telemetry protocol.
Similarly, upon selecting the “Reports” menu representation 230, a pulldown menu representation will appear with “Reports” selection representations 335. The “Reports” selection representations 335 can include, among others: an “Assignments” selection, a “Capacity Planning” selection, an “EWO Report” selection, a “SONET Center Report” selection, a “Global IP Address Assignment” selection, an “NRC Status Report” selection, an “OSI Detail Report” selection, an “OSI Summary Report” selection, an “OSPE Monthly Status” selection, an “OSPE Report” selection, an “SCID Report” selection, a “Release Notes” selection, an “RTOC Status Report” selection, a “IP Address Assignments” selection, a “TEO Report” selection, a “Transaction Log” selection, a “User Information” selection, and a “Vendor Report” selection. Each of these selections typically generates a report based upon the category of the selection.
Similarly, upon selecting the “Transmission Sketch” menu representation 235, a pulldown menu representation will appear with “Transmission Sketch” selection representations 340. The “Transmission Sketch” selection representations 340 can include, among others: an “Equipment Search” selection and a “Transmission Paths” selection. The “Transmission Sketch” selection can typically enable users to create and plan network rings. The user can view a detailed transmission drawing of the ring and manipulate network elements on the ring. The “Transmission Sketch” screen representation typically incorporates information retrieved from the telecommunications assignment system 100 and from DSX/FOX.
Similarly, upon selecting the “Help” menu representation 240, a pulldown menu representation will appear with “Help” selection representations 345. The “Help” selection representations 345 can include, among others: a “Contents” selection and an “About BTAS” selection. The “Contents” selection will typically allow-the user to search for a help topic, while the “About BTAS” selection will give the users information about the BTAS program.
One skilled in the art should recognize that the menu system recited above is only an example of one of many menu systems that could be used in conjunction with the telecommunications assignment system 100. It should be clear that there exist numerous other telemetry devices that could be added to the menu, other types of reports that could be included, other maintenance topics, etc. that could be included within the scope of the present disclosure. Thus, each of these alternative menu designs is intended to be included within the scope of the present disclosure.
Referring now to FIG. 4, shown is an embodiment, among others, of a sample screen shot of a network element profile. This screen representation 400 can typically be accessed by selecting the “NE profile” selection under the “Maintenance” menu representation 225. The profile shown is for a DDM2000 network element. The DDM2000 is an OC3 multiplexer available from Lucent Technologies, of Murray Hill, N.J. From this window the user can create and modify the types of network elements that can be tracked using telemetry equipment. Similarly, any other network elements on which the user wishes to track telemetry can be added to the system by adding a profile here. The button representations 405, 410 allow the user to request the various changes that can be made to a network element. The field representations 415, 420, 425, 430 allow the user to enter various types of information into the database 182, such as software, physical layer connections, link layer configurations, and network layer configurations. The tabbed field representation 435 allows the user to enter further information regarding the network element profile according to which of the plurality of tab representations 440 is selected. On the screen representation 400 shown, the “IP Address” tab is selected. Thus, the user is able to view information about the network element IP address. Each of the tab representations 440 allow the user to configure a different field associated with the network element.
Referring now to FIG. 5, shown is an embodiment, among others, of a screen shot of a “Site Information Screen” representation 500. The “Site Information Screen” representation 500 allows the user to search for various central offices (COs) according to search parameters specified by the user. Typically the user would enter the information regarding the CO into the “Search” field representation 505. The telecommunications assignment system 100 would then retrieve a CO matching the CO specified by the search terms entered into the “Search” field representation 505. The location and address of the CO would typically populate the “Location” field representation 510. The “Structure AWF” field representation 515 would typically allow the user to retrieve alarm wiring figures (AWF) associated with the CO. The “Telemetry” field 520 would allow the user to view information about the telemetry equipment located at the CO.
Referring now to FIG. 6, shown is an embodiment, among others, of a sample screen shot of an “AI 180 Switch Configuration” screen representation 600. The “AI 180 Switch Configuration” screen representation 600 allows the user to view the configuration of the AI 180 switches associated with a CO. AI 180 switches are alarm collectors which are widely used in the telecom industry, and are available from Applied Innovation, Inc., of Dublin, Ohio. The user would typically search for these switches by entering search parameters into the search field 605. Upon entering the search parameters into the search field 605, the user would typically select the “Retrieve Systems” button representation 620.
Upon selecting the “Retrieve Systems” button representation 620, the client computer 120 would retrieve information about AI 180 switches matching the search parameters from the telecommunications assignment system database 182. Upon receiving the information from the telecommunications assignment system database 182, the client would display the search results in a results pane representation 615. The user may select any of the AI 180 switches returned by the search, and display, modify, or delete the system using button representations 620. The user can also choose to create a new system or submit a 5939 form using the button representations 620. Furthermore, the “AI 180 Switch Configuration” screen representation 600 includes a plurality of button representations that can include, among others: a “Save” button representation 625, a “Clear” button representation 630, a “Delete” button representation 635, a “Print” button representation 640, and an “Exit” button representation 645. The “Save” button representation 625 is operable to command the database 182 to store any changes the user has made. The “Clear” button representation 630 is operable to clear the fields on the screen representation 600. The “Delete” button representation 635 is operable to instruct the database 182 to remove the currently selected record from memory. The “Print” button representation 640 is operable to instruct the client computer to print the screen representation out to a connected printer or to a file. The “Exit” button representation 645 is operable to close the screen representation.
Referring now to FIG. 7, shown is an embodiment, among others, of a sample screen shot of an “AI 180 Switch Shelf” screen representation 700. The “AI 180 Switch Shelf” screen representation 700 is obtained by selecting the display system button representation of FIG. 6. The screen representation 700 includes a graphical representation 705 of the AI 180 switch. One skilled in the art should immediately recognize that the graphical representation of the AI 180 switch includes the same organization as the physical AI 180 switch. In particular, the graphical representation 705 includes a processor section representation 710 and an interface card section representation 715. The processor section representation 710 includes two slots for cards. In the present example, among others, the processor section comprises two “198” processor cards. The 198 processor card is a standard processor card that is available from Applied Innovation, Inc. The interface card section representation can include up to eight network interface cards in slots numbered 1 to 8. Moreover, newly added cards appear in bold and in the plug-in list representation 720.
The user can also change the cards installed at the switch by moving the mouse cursor representation over the card and clicking the left button. Similarly the user can switch processor cards by moving the mouse cursor representation over the processor card, and selecting to remove the processor. The processor can also be changed by selecting the “Processor” button representation 725. Upon selecting the “Processor” button representation 725, a screen representation can appear to enable the user to choose the processor type.
The screen representation 700 also includes a plurality of other fields and button representations. The “Estimated Svc. Date” field representation 730 includes the estimated service date for the AI 180 switch. The “Plug Status” field representation 735 alerts the user as to the status of the currently selected network interface card representation. The plurality of button representations 740-765 can include, among others: a “5939 Form” button representation 740, a “Switch Info” button representation 745, an “OSI Info” button representation 750, an “Okay” button representation 755, a “Cancel” button representation 760, and a “Print” button representation 765. The “5939 Form” button representation 740 allows the user to submit a 5939 form. The 5939 form is a form that is typically sent to corporate command to establish a data circuit. The “Switch Info” button representation 745, when selected, produces a screen with information about the AI 180 switch. The “OSI Info” button representation 750, when selected, will produce OSI information regarding the AI 180 switch. The “Okay” button representation 755, when selected, will save the information that was changed by the user. The “Cancel” button representation 760, when selected, will cancel any changes made by the user. The “Print” button representation 765 allows the user to print the current configuration of the AI 180 switch in the graphical “Shelf” form.
Referring now to FIG. 8, shown is a sample screen shot of an embodiment, among others, of an add card screen representation 800. The screen representation 800 is similar to the “AI 180 Switch Shelf” screen representation; however, the card in the last slot 805 has been removed. In order to add a card to the AI 180 switch, the user would move the mouse cursor representation over the empty slot and press the left mouse button to obtain a menu. The user would then select the add card option from the menu. A “Common Plug-Ins” window representation 810 would appear. The user could then move the mouse cursor over any of the plug-ins listed and select the plug-in by pressing the left mouse button. By selecting the “Okay” button representation from the “Common Plug-Ins” window representation 810, the card would appear in the empty slot 805. If the user selected the “Cancel” button representation, the empty slot 805 would remain empty. The “Override” button representation typically enables administrators to override a telecommunications assignment system rule that removes port assignments before the card type can be changed. An administrator can thus select a new card and the new card will have the same assignments as the old card (provided that the card has the same port types).
Referring now to FIG. 9, shown is a sample screen shot of an embodiment, among others, of a “COWAN” screen representation 900. This screen is typically accessed by selecting the “Add Capacity” menu representation 215 on the opening screen representation 200 and then selecting the “COWAN” selection as shown in FIG. 3. The user can view all of the routers contained at a central office using the “COWAN” screen representation 900. Typically the user would enter search parameters into the location field representations 905. Particularly, the user typically enters parameters into the “Entity” field representation 910, the “Component” field representation 915, and the “CLLI” field representation 920. After entering these parameters, the search results will appear in the “Routers” data table representation 925.
The user can also use the button representations 930 to request the client perform some action. These button representations 930 can include, among others: a “Display” button representation 935, a “Router Info” button representation 940, a “Hub Info” button representation 945, an “IP Address” button representation 950, a “5939 Form” button representation 955, an “Add” button representation 960, an “Edit” button representation 965, a “Delete” button representation 970, and a “Clear” button representation 975. The “Display” button representation 935 typically displays the highlighted router in graphical form as shown in FIG. 10. The “Router Info” button representation 940 typically displays information about the highlighted router. The “Hub Info” button representation 945 typically retrieves a diagram of the highlighted Router's connection to any connected Hub(s) as shown in FIG. 11. Moreover, the diagram allows the user to select and view a graphical representation of a Hub, as shown in FIG. 14. The “IP Address” button representation 950 allows the user to view IP addresses associated with the highlighted router. The “5939 Form” button representation 955 allows the user to submit a 5939 form as described above. The “Add” button representation 960 allows the user to add a router to the central office. The “Edit” button representation 965 allows the user to enter information to edit a profile associated with the highlighted router. The “Delete” button representation 970 allows the user to delete the highlighted router. The “Clear” button representation 975 allows the user to clear the “Location” search fields 905 and the search results 925 before performing a new search.
Referring now to FIG. 10, shown is a sample screen shot of an embodiment, among others, of a “Router Configuration” screen representation 1000. This screen representation 1000 is typically accessed by selecting the “Display” button representation 935 of FIG. 9. The “Router Configuration” screen representation 1000 allows the user to view the cards installed in a router. Like FIGS. 7 and 8, the cards are presented in a graphical representation 1005. Similarly, the graphical representation 1005 is configured substantially similar to the physical layout of the Cisco 3662 router shown in this example, among others. The Cisco 3662 router is a common router available from Cisco Systems, Inc., of San Jose, Calif. The Cisco router has six slots, numbered from right to left, bottom to top, respectively.
The “Router Configuration” screen representation 1000 can also include, among others: “Router Info” field representations 1010, “Slot Information” field representations 1015, 1020, and button representations “Print” 1025, “Patch Panel” 1030, and “Save” 1035. The “Router Info” field representations 1010 typically include various information about the router whose configuration is displayed. The “Slot Information” field representations 1015, 1020 typically include information regarding each of the slots on the router. The “Print” button representation 1025 allows the user to print the router configuration. The “Patch Panel” button representation allows the user to view the patch panels installed in the current central office. The “Save” button representation 1035 allows the user to save any changes made to the router configuration.
Referring now to FIG. 11, shown is a screen shot of an embodiment, among others, of a “Hub Info” screen representation 1100. The “Hub Info” screen representation shows a schematic of the hubs connected to the router highlighted on the “COWAN” screen representation 900. The “Hub Info” screen representation 1100 also includes the IP addresses of each of the connections between the router and the hubs. Furthermore, the “Hub Info” screen representation 1100 includes information about the location of each of the hubs and the router. This information includes the CLLI code of the CO, the type of hub/router, the software being used on the hub/router, the floor/bay/RR/unit number, and the estimated service date. The user can view more details of any of the hub(s) or router by moving the mouse cursor representation over the desired unit and pressing the right mouse button to produce a menu representation. Typically the menu representation will include a variety of options for the user. In particular, the menu representation includes, among others, a view assignments selection. The view assignments selection can be used to view the assignments of a hub/router/patch panel as shown with respect to FIGS. 13-15.
Referring now to FIG. 12, shown is a screen shot of an embodiment, among others, of a “Network Element Telemetry Assignment” screen representation 1200. Typically the user could locate network elements at a CO using the “Mode,” “Component,” and “Entity” field representations 1210, 1215, 1220, respectively, and the “Equipment” pane representation 1205 to select the CO CLLI code.
The “Network Element Telemetry Assignment” screen representation 1200 also includes a plurality of button representations 1225, including, among others: a “New Site” button representation, an “NE Info” button representation, a “Patch Panel” button representation, an “OSI” button representation, a “5939” button representation, a “Telemetry” button representation, and an “Exit” button representation. The “New Site” button representation typically allows a user to request that a new CLLI code be added to the telecommunications assignment system database 182. The “NE Info” button representation typically allows the user to view information about the selected network element (NE). The “Patch Panel” button representation allows the user to view the patch panel to which the NE is connected. The “OSI” button representation allows the user to view the OSI card to which a network element is connected. The “5939” button representation allows the user to submit a 5939 form as described above. The “Telemetry” button representation allows the user to assign telemetry elements to the selected NE. The “Exit” button representation allows the user to exit the current screen representation.
The “Network Element Telemetry Assignment” screen representation can also typically include, among others: “Location” field representations 1230, “Job Information” field representations 1235, and SONET information field representations 1240. The “Location” field representations 1230 specify the location of the network element. The “Job Information” field representations 1235 typically include information about the TEO and estimated service date. The SONET information field representations 1240 include information about the SCID associated with the network element and the target identifier (TID) of the network element.
Referring now to FIG. 13, shown is a sample screen shot of an embodiment, among others, of a “Router Assignment” screen representation 1300. The “Router Assignment” screen representation 1300 can typically include, among others: “Network Element” field representations 1305, “Available Routers” field representation 1310, and a graphical representation 1315 of the router highlighted in the “Available Routers” field representation 1310. Again, one skilled in the art should recognize that the graphical representation 1315 of the router is substantially similar to the physical layout of a real Cisco 3662 router. Moreover, even the cards installed in the 3662 router are laid out to match the layout of the physical cards. The graphical representation 1315 further includes notation at each of the card ports as to whether the card is “Assigned,” “Available,” “Pending,” or “RMA” (Requires Manual Attention). A user can “mouse over” any of the ports to determine what network element is assigned to that port. Furthermore, the user can assign the network element listed in the “Network Element” field representations 1305 to a selected port of the router.
Referring now to FIG. 14, shown is a sample screen shot of an embodiment, among others, of a “Hub Assignment” screen representation 1400. The “Hub Assignment” screen representation 1400 can typically include, among others: “Network Element” field representations 1405, “Available Hubs” field representation 1410, and a graphical representation 1415 of the hub highlighted in the “Available Hubs” field representation 1410. Again, one skilled in the art should recognize that the graphical representation 1415 of the hub is substantially similar to the physical layout of a real 1924 hub. The 1924 hub is available from Cisco Systems, Inc. of San Jose, Calif. The graphical representation 1415 further includes notation at each of the ports of the hub as to whether the card is “Assigned,” “Available,” or “Pending.” A user can “mouse over” any of the ports to determine what network element is assigned to that port. Furthermore, the user can assign the network element listed in the “Network Element” field representations 1405 to a selected port of the hub.
Referring now to FIG. 15, shown is a sample screen shot of an embodiment, among others, of a “Patch Panel Assignment” screen representation 1500. The “Patch Panel Assignment” screen representation 1500 can typically include, among others: “Network Element” field representations 1505, “Available Patch Panels” field representation 1510, and a graphical representation 1515 of the patch panel highlighted in the “Available Patch Panels” field representation 1510. Again, one skilled in the art should recognize that the graphical representation 1515 of the patch panel is substantially similar to the physical layout of a real patch panel. The graphical representation 1515 further includes notation at each of the ports of the patch panel as to whether the card is “Assigned,” “Available,” or “Pending.” A user can “mouse over” any of the ports to determine what network element is assigned to that port. Furthermore, the user can assign the network element listed in the “Network Element” field representations 1505 to a selected port of the patch panel.
It should be recognized by one skilled in the art that graphical models for any assignments screens can be similarly added to the telecommunications assignment system 100. In particular, the graphical representations can help technicians troubleshoot a problem more easily than a text representation. Further, a network planner can more accurately design systems and plan for future needs using the graphical representations of the network elements.
Process and function descriptions and blocks in flow charts can be understood as representing, in some embodiments, modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the preferred embodiment of the present disclosure, in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present disclosure. In addition, such functional elements can be implemented as logic embodied in hardware, software, firmware, or a combination thereof, among others. In some embodiments involving software implementations, such software comprises an ordered listing of executable instructions for implementing logical functions and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a computer-readable medium can be any means that can contain, store, communicate, propagate, or transport the software for use by or in connection with the instruction execution system, apparatus, or device.
It should also be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) of the disclosure without departing substantially from the principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of the present disclosure and protected by the following claims.
I have an API that fills a class when it receives data. I want to loop over this class, tb_dados_api and tb_carrinho, take the data, and save it to the database. How could I do that? Thanks.
Here is where I receive the data:
[HttpPost]
[Route("unidade/carrinho/ConsultaUnidadeAtendimento")]
public HttpResponseMessage ConsultaUnidadeAtendimento(TB_DADOS_API consultaAtendimento)
{
try
{
string numeroCarrinho = consultaAtendimento.NumeroCarrinho.ToString();
string cep = consultaAtendimento.Cep;
bool retiraLocal = consultaAtendimento.RetiraNoLocal;
var tTabela = new ConsultaUnidadeEstoque();
var listar = tTabela.SelecionaUnidadeAtendimento(cep);
return Request.CreateResponse(HttpStatusCode.OK, new { dados = listar.ToArray() });
}
catch (Exception ex)
{
return Request.CreateResponse(HttpStatusCode.BadRequest, ex.Message);
}
}
This fills the classes:
public class TB_DADOS_API
{
[JsonProperty("numeroCarrinho")]
public long NumeroCarrinho { get; set; }
[JsonProperty("itens")]
public List<TB_CARRINHO> Itens { get; set; }
[JsonProperty("cep")]
public string Cep { get; set; }
[JsonProperty("retiraNoLocal")]
public bool RetiraNoLocal { get; set; }
}
public class TB_CARRINHO
{
[JsonProperty("codigo")]
public string Codigo { get; set; }
[JsonProperty("qtd")]
public int Qtd { get; set; }
}
• Are you using Entity Framework or another ORM? – Eduardo Sampaio 8/12/17 at 1:39
• I'm not using Entity Framework. I have the class already filled; I want to read the data and save it to the database – itasouza 8/12/17 at 1:43
• You can use foreach to access the objects and save everything to a table; I put an example in the answer – Eduardo Sampaio 8/12/17 at 1:46
• Thank you very much for the help. My idea was to do this foreach inside the ConsultaUnidadeEstoque method rather than here, because there I have a routine that selects where the items will be saved – itasouza 8/12/17 at 1:50
In your case you can use foreach:
[HttpPost]
[Route("unidade/carrinho/ConsultaUnidadeAtendimento")]
public HttpResponseMessage ConsultaUnidadeAtendimento(TB_DADOS_API consultaAtendimento)
{
try
{
string numeroCarrinho = consultaAtendimento.NumeroCarrinho.ToString();
string cep = consultaAtendimento.Cep;
bool retiraLocal = consultaAtendimento.RetiraNoLocal;
var tTabela = new ConsultaUnidadeEstoque();
var listar = tTabela.SelecionaUnidadeAtendimento(cep);
foreach(var item in consultaAtendimento.Itens){
// call your repository here to save the item
// item is already an object, so you can read
// item.Codigo or item.Qtd
}
return Request.CreateResponse(HttpStatusCode.OK, new { dados = listar.ToArray() });
}
catch (Exception ex)
{
return Request.CreateResponse(HttpStatusCode.BadRequest, ex.Message);
}
}
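As a rough sketch of the "call your repository here to save the item" step — not from the original thread; the table name TB_CARRINHO_ITEM, the column names, and the "Default" connection string are assumptions — a plain ADO.NET repository method (using System.Data.SqlClient and System.Configuration, since Entity Framework is not being used) could look like this:
public void GravarItensCarrinho(TB_DADOS_API consultaAtendimento)
{
    // Hypothetical table, column and connection-string names; adapt them to your schema.
    string connectionString = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        foreach (var item in consultaAtendimento.Itens)
        {
            using (var command = new SqlCommand(
                "INSERT INTO TB_CARRINHO_ITEM (NUMERO_CARRINHO, CODIGO, QTD) VALUES (@numeroCarrinho, @codigo, @qtd)",
                connection))
            {
                // Parameterized values avoid SQL injection and type-conversion problems.
                command.Parameters.AddWithValue("@numeroCarrinho", consultaAtendimento.NumeroCarrinho);
                command.Parameters.AddWithValue("@codigo", item.Codigo);
                command.Parameters.AddWithValue("@qtd", item.Qtd);
                command.ExecuteNonQuery();
            }
        }
    }
}
Calling GravarItensCarrinho(consultaAtendimento) in place of the foreach shown above (or from inside ConsultaUnidadeEstoque, as the question's author prefers) would insert one row per cart item.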
• Eduardo, I managed to do it the way I wanted; I just needed to pass: var listar = tTabela.SelecionaUnidadeAtendimento(consultaAtendimento); – itasouza 8/12/17 at 17:28
• But your idea also worked. – itasouza 8/12/17 at 17:29
LtpaToken Generator for Multi-server SSO Configurations
/**
* LtpaToken Generator V1.1
*
* This Java class generates a valid LtpaToken valid for any user name.
*
* To use it on SSJS:
* -------------------
* importPackage(com.developi.openntf);
* var ltpa:LtpaGenerator=new LtpaGenerator();
* ltpa.initByConfiguration(sessionAsSigner, "Developi:LtpaToken");
* token=ltpa.generateLtpaToken("CN=Serdar Basegmez/O=developi");
*
* To use the token (make sure replace '.developi.info' with your SSO domain):
* -------------------------------------------------------------------------
* response=facesContext.getExternalContext().getResponse();
* response.setHeader("Set-Cookie", "LtpaToken=" + token + "; domain=.developi.info; path=/");
* facesContext.getExternalContext().redirect(someUrl);
*
* 1. "Developi:LtpaToken" is the SSO configuration key. If you are using Internet site configuration, it will be
* "Organization:TokenName". Otherwise, it will be "TokenName" only. You may check "($WebSSOConfigs)"
* view in the names.nsf database.
* 2. sessionAsSigner should be given as parameter to the initByConfiguration method.
* 3. The signer of the database design should be listed as 'Owner' or 'Administrator' in the SSO configuration.
* 4. Current version only supports Domino keys. Tokens imported from Websphere will not generate valid tokens.
*
* Important Note:
* You will see "LMBCS" encoding below. This is because of that Domino encodes user names in LMBCS charset.
* As long as you use standard ANSI characters, it's OK. However if you use other languages (like Turkish) in
* user names, it will be encoded in default charset (ISO-8859-1). Normally, Domino JVM does not support LMBCS
* encoding. So you have to install a supporting library. I have found ICU (International Components for Unicode) library.
* However, it cannot be attached into NSF. So you have to install it into Domino JVM. To do this;
*
* - Go to ICU Project site (http://www.icu-project.org)
* - Download "icu4j-49_1.jar" and "icu4j-charset-49_1.jar" (or latest versions)
* - Put those files into "{Domino Program Folder}\jvm\lib\ext"
* - Restart your HTTP task
*
* This will install ICU library into your server. This library is licensed under X-License and can be used commercially.
* I didn't try but it can also be installed via OSGi plugin. Let me know if you do it :)
* Direct link for download: http://apps.icu-project.org/icu-jsp/downloadPage.jsp?ver=49.1&base=j&svn=release-49-1
*
*/
package com.developi.openntf;
import java.io.UnsupportedEncodingException;
import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.GregorianCalendar;
import javax.xml.bind.DatatypeConverter;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;
import lotus.domino.Session;
import lotus.domino.View;
/*
* @author Serdar Basegmez, Developi (http://lotusnotus.com/en)
*/
public class LtpaGenerator {
public final String NAMESDB="names.nsf";
public final String SSOVIEW="($WebSSOConfigs)";
public final String SSO_DOMINO_SECRETFIELD="LTPA_DominoSecret";
public final String SSO_DOMINO_DURATIONFIELD="LTPA_TokenExpiration";
private boolean ready=false;
private int duration=300;
private String ltpaSecret="";
public LtpaGenerator() {
}
public LtpaGenerator(String ltpaSecret) {
setLtpaSecret(ltpaSecret);
}
public LtpaGenerator(String ltpaSecret, int duration) {
setLtpaSecret(ltpaSecret);
setDuration(duration);
}
public void initByConfiguration(Session session, String configName) throws Exception {
Database dbNames=null;
View ssoView=null;
Document ssoDoc=null;
try {
String currentServer=session.getCurrentDatabase().getServer();
dbNames=session.getDatabase(currentServer, NAMESDB, false);
ssoView=dbNames.getView(SSOVIEW);
ssoDoc=ssoView.getDocumentByKey(configName, true);
if(ssoDoc==null) {
throw new IllegalArgumentException("Unable to find SSO configuration with the given configName.");
}
setLtpaSecret(ssoDoc.getItemValueString(SSO_DOMINO_SECRETFIELD));
setDuration(ssoDoc.getItemValueInteger(SSO_DOMINO_DURATIONFIELD));
} catch (NotesException ex) {
throw new Exception("Notes Error: "+ex);
} finally {
try {
if(dbNames!=null) dbNames.recycle();
if(ssoView!=null) ssoView.recycle();
if(ssoDoc!=null) ssoDoc.recycle();
} catch(NotesException exc) {
//ignore
}
}
}
public String generateLtpaToken(String userName) {
if(!isReady()) {
throw new IllegalStateException("LtpaGenerator is not ready.");
}
MessageDigest sha1 = null;
GregorianCalendar creationDate=new GregorianCalendar();
GregorianCalendar expiringDate=new GregorianCalendar();
byte[] userNameArray=userName.getBytes();
expiringDate.add(GregorianCalendar.MINUTE, duration);
try {
sha1 = MessageDigest.getInstance( "SHA-1" );
} catch (NoSuchAlgorithmException e) {
e.printStackTrace(System.err);
}
byte[] secretDecoded=DatatypeConverter.parseBase64Binary(ltpaSecret);
// Look at important notes above...
try {
if(Charset.isSupported("LMBCS")) {
userNameArray=userName.getBytes("LMBCS");
}
} catch (UnsupportedEncodingException e) {
// Not supposed to fall here.
}
byte[] tokenBase=concatBytes(("\000\001\002\003"+getHexRep(creationDate)+getHexRep(expiringDate)).getBytes(), userNameArray);
byte[] digest=sha1.digest(concatBytes(tokenBase, secretDecoded));
return DatatypeConverter.printBase64Binary(concatBytes(tokenBase, digest));
}
public static byte[] concatBytes(byte[] arr1, byte[] arr2) {
byte[] result=Arrays.copyOf(arr1, arr1.length+arr2.length);
System.arraycopy(arr2, 0, result, arr1.length, arr2.length);
return result;
}
public static String getHexRep(GregorianCalendar date) {
int timeVal=(int)(date.getTimeInMillis()/1000);
String hex=Integer.toHexString(timeVal).toUpperCase();
if(hex.length()>=8) {
return hex;
} else {
return String.format("%0"+(8-hex.length())+"d", 0)+hex;
}
}
public void setDuration(int duration) {
this.duration = duration;
}
public void setLtpaSecret(String ltpaSecret) {
this.ltpaSecret = ltpaSecret;
this.ready=true;
}
public boolean isReady() {
return ready;
}
}
This Java class generates a valid LtpaToken valid for any user name.
When using, you should sign the database with a user that is listed as 'Owner' or 'Administrator' in the SSO configuration. This version only supports Domino keys. Tokens imported from Websphere will not generate valid tokens. Since WAS token generation is not based on public API, it doesn't seem possible to do it without native libraries.
This class can be used for any customized login operations. You may specify any user with any name while creating the token.
UPDATE on 9th July, 2012:
- LMBCS encoding support has been added (see comments for details)
Java
Serdar Basegmez
May 26, 2012 5:27 PM
All code submitted to OpenNTF XSnippets, whether submitted as a "Snippet" or in the body of a Comment, is provided under the Apache License Version 2.0. See Terms of Use for full details.
4 comment(s)
an Sharma
(at 09:27 on 23.01.2013)
Simple to use and works great.
Any progress with LtpaToken2? We have Web SSO with imported WebSphere key and LtpaToken does not work on that server.
Any suggestions as how to proceed.
Thimo Jansen
(at 05:05 on 09.08.2012)
Wonderful piece of code, works straight out of the box. Thanks!
Serdar Basegmez
(at 06:51 on 09.07.2012)
Thanks Enrique...
The problem with WAS tokens is that IBM is using a series of non-public encryption/signing algorithms for the token generation. So the only way to generate a valid token seems using native C API calls at the moment.
I'll try some other techniques later...
Enrique M Munoz
(at 04:03 on 03.07.2012)
Hello Mr Basegmez.
Very good work! We are very interested on it. In fact we are looking forward the way you can solve the last part when Websphere token can be added. I offer to you for any help or test.
We are working in a IBMi system with Domino and Websphere but beyond this we are using X.509 Certificates.
Thank you and congrats for your work
I want to count how many cells have a date.
I am trying to find a way to count how many cells in my column have a date in them.
I use the formula =COUNT([Target Launch Date]6:[Target Launch Date]24), but it always returns the error #DATE EXPECTED.
I tried the same formula on other columns with text and it works perfectly fine.
I also tried =COUNTIF([Target Launch Date]6:[Target Launch Date]24, <>"") but it gives the same result.
Can someone help me?
Answers
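A possible approach, offered as a sketch rather than a confirmed solution from this thread: Smartsheet's COUNTIF accepts a function criterion with @cell, so a formula such as =COUNTIF([Target Launch Date]6:[Target Launch Date]24, ISDATE(@cell)) counts only the cells that actually contain a date. The column name and row range are taken from the question above; whether this clears the #DATE EXPECTED error depends on the column's type.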
File: /frameworks/kcontacts/src/sound.cpp
/*
    This file is part of the KContacts framework.
    SPDX-FileCopyrightText: 2002 Tobias Koenig <[email protected]>

    SPDX-License-Identifier: LGPL-2.0-or-later
*/

#include "sound.h"

#include <QDataStream>
#include <QSharedData>

using namespace KContacts;

class Q_DECL_HIDDEN Sound::Private : public QSharedData
{
public:
    Private()
        : mIntern(false)
    {
    }

    Private(const Private &other)
        : QSharedData(other)
    {
        mUrl = other.mUrl;
        mData = other.mData;
        mIntern = other.mIntern;
    }

    QString mUrl;
    QByteArray mData;

    bool mIntern;
};

Sound::Sound()
    : d(new Private)
{
}

Sound::Sound(const QString &url)
    : d(new Private)
{
    d->mUrl = url;
}

Sound::Sound(const QByteArray &data)
    : d(new Private)
{
    d->mIntern = true;
    d->mData = data;
}

Sound::Sound(const Sound &other)
    : d(other.d)
{
}

Sound::~Sound()
{
}

Sound &Sound::operator=(const Sound &other)
{
    if (this != &other) {
        d = other.d;
    }

    return *this;
}

bool Sound::operator==(const Sound &other) const
{
    if (d->mIntern != other.d->mIntern) {
        return false;
    }

    if (d->mIntern) {
        if (d->mData != other.d->mData) {
            return false;
        }
    } else {
        if (d->mUrl != other.d->mUrl) {
            return false;
        }
    }

    return true;
}

bool Sound::operator!=(const Sound &other) const
{
    return !(other == *this);
}

void Sound::setUrl(const QString &url)
{
    d->mIntern = false;
    d->mUrl = url;
}

void Sound::setData(const QByteArray &data)
{
    d->mIntern = true;
    d->mData = data;
}

bool Sound::isIntern() const
{
    return d->mIntern;
}

bool Sound::isEmpty() const
{
    return (d->mIntern && d->mData.isEmpty()) || (!d->mIntern && d->mUrl.isEmpty());
}

QString Sound::url() const
{
    return d->mUrl;
}

QByteArray Sound::data() const
{
    return d->mData;
}

QString Sound::toString() const
{
    QString str = QLatin1String("Sound {\n");
    str += QStringLiteral(" IsIntern: %1\n").arg(d->mIntern ? QStringLiteral("true") : QStringLiteral("false"));
    if (d->mIntern) {
        str += QStringLiteral(" Data: %1\n").arg(QString::fromLatin1(d->mData.toBase64()));
    } else {
        str += QStringLiteral(" Url: %1\n").arg(d->mUrl);
    }
    str += QLatin1String("}\n");

    return str;
}

QDataStream &KContacts::operator<<(QDataStream &s, const Sound &sound)
{
    return s << sound.d->mIntern << sound.d->mUrl << sound.d->mData;
}

QDataStream &KContacts::operator>>(QDataStream &s, Sound &sound)
{
    s >> sound.d->mIntern >> sound.d->mUrl >> sound.d->mData;

    return s;
}