content
stringlengths 228
999k
| pred_label
stringclasses 1
value | pred_score
float64 0.5
1
|
---|---|---|
CircuitBreakingException Data too large
Hi,
I have an ES stack running on AWS spot instances, so shard reallocation happens quite frequently.
Occasionally I receive the following error message and then the shard remains unassigned without further allocation attempts:
nested: RemoteTransportException[[ip-172-30-2-197.ec2.internal][172.30.2.197:9300][internal:index/shard/recovery/filesInfo]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [32428797638/30.2gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32428793232/30.2gb], new bytes reserved: [4406/4.3kb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=4406/4.3kb, accounting=708708/692kb]]; ","allocation_status":"no_attempt"}}
1. How can I avoid these errors?
2. Do these errors take into account the number of allocation retries? Because I have set the max_retries to 20
Thanks!
Cluster info:
Version 7.7.1
9 data nodes
1 large index of 100 million docs (196 GB) - 21 shards (7 primary with 2 replicas)
The other indices are very small
JVM settings:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms31g
-Xmx31g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75
## DNS cache policy
# cache ttl in seconds for positive DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.ttl; set to -1 to cache forever
-Des.networkaddress.cache.ttl=60
# cache ttl in seconds for negative DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.negative ttl; set to -1 to cache
# forever
-Des.networkaddress.cache.negative.ttl=10
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# explicitly set the stack size
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
## JDK 8 GC logging
#8:-XX:+PrintGCDetails
#8:-XX:+PrintGCDateStamps
#8:-XX:+PrintTenuringDistribution
#8:-XX:+PrintGCApplicationStoppedTime
#8:-Xloggc:/var/log/elasticsearch/gc.log
#8:-XX:+UseGCLogFileRotation
#8:-XX:NumberOfGCLogFiles=32
#8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
#9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
#9-:-Djava.locale.providers=COMPAT
# temporary workaround for C2 bug with JDK 10 on hardware with AVX-512
10-:-XX:UseAVX=2
I've been experiencing the same issue off and on. My solution is to increase how much ram java can use. But that's not exactly ideal.
Is there a way to set the upper limit on how large an individual shard can be?
I'm off to research that.
I doubt it. As the index grows, your only option to reduce the shard size will be re-sharding
Only way to limit is add more shards which mean reindexing - how big are the shards? Looks like 196GB / 7 = 28GB each which is reasonable, but given all that's going on in the cluster, must be too large to move around. Not clear which breaker this is tripping as usages listed are small - how many indexes/shards are in the cluster, as that can eat RAM if you have thousands.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.
|
__label__pos
| 0.959601 |
e4cb7db6d6800e47ec4eb90a1be144f0454cbdae
[reactos.git] / ntoskrnl / io / pnpmgr / pnpmgr.c
1 /*
2 * PROJECT: ReactOS Kernel
3 * COPYRIGHT: GPL - See COPYING in the top level directory
4 * FILE: ntoskrnl/io/pnpmgr/pnpmgr.c
5 * PURPOSE: Initializes the PnP manager
6 * PROGRAMMERS: Casper S. Hornstrup ([email protected])
7 * Copyright 2007 Hervé Poussineau ([email protected])
8 */
9
10 /* INCLUDES ******************************************************************/
11
12 #include <ntoskrnl.h>
13 #define NDEBUG
14 #include <debug.h>
15
16 /* GLOBALS *******************************************************************/
17
18 PDEVICE_NODE IopRootDeviceNode;
19 KSPIN_LOCK IopDeviceTreeLock;
20 ERESOURCE PpRegistryDeviceResource;
21 KGUARDED_MUTEX PpDeviceReferenceTableLock;
22 RTL_AVL_TABLE PpDeviceReferenceTable;
23
24 extern ERESOURCE IopDriverLoadResource;
25 extern ULONG ExpInitializationPhase;
26 extern BOOLEAN ExpInTextModeSetup;
27 extern BOOLEAN PnpSystemInit;
28
29 #define MAX_DEVICE_ID_LEN 200
30 #define MAX_SEPARATORS_INSTANCEID 0
31 #define MAX_SEPARATORS_DEVICEID 1
32
33 /* DATA **********************************************************************/
34
35 PDRIVER_OBJECT IopRootDriverObject;
36 PIO_BUS_TYPE_GUID_LIST PnpBusTypeGuidList = NULL;
37 LIST_ENTRY IopDeviceActionRequestList;
38 WORK_QUEUE_ITEM IopDeviceActionWorkItem;
39 BOOLEAN IopDeviceActionInProgress;
40 KSPIN_LOCK IopDeviceActionLock;
41
42 typedef struct _DEVICE_ACTION_DATA
43 {
44 LIST_ENTRY RequestListEntry;
45 PDEVICE_OBJECT DeviceObject;
46 DEVICE_RELATION_TYPE Type;
47 } DEVICE_ACTION_DATA, *PDEVICE_ACTION_DATA;
48
49 /* FUNCTIONS *****************************************************************/
50 NTSTATUS
51 NTAPI
52 IopCreateDeviceKeyPath(IN PCUNICODE_STRING RegistryPath,
53 IN ULONG CreateOptions,
54 OUT PHANDLE Handle);
55
56 VOID
57 IopCancelPrepareDeviceForRemoval(PDEVICE_OBJECT DeviceObject);
58
59 NTSTATUS
60 IopPrepareDeviceForRemoval(PDEVICE_OBJECT DeviceObject, BOOLEAN Force);
61
62 PDEVICE_OBJECT
63 IopGetDeviceObjectFromDeviceInstance(PUNICODE_STRING DeviceInstance);
64
65 PDEVICE_NODE
66 FASTCALL
67 IopGetDeviceNode(PDEVICE_OBJECT DeviceObject)
68 {
69 return ((PEXTENDED_DEVOBJ_EXTENSION)DeviceObject->DeviceObjectExtension)->DeviceNode;
70 }
71
72 VOID
73 IopFixupDeviceId(PWCHAR String)
74 {
75 SIZE_T Length = wcslen(String), i;
76
77 for (i = 0; i < Length; i++)
78 {
79 if (String[i] == L'\\')
80 String[i] = L'#';
81 }
82 }
83
84 VOID
85 NTAPI
86 IopInstallCriticalDevice(PDEVICE_NODE DeviceNode)
87 {
88 NTSTATUS Status;
89 HANDLE CriticalDeviceKey, InstanceKey;
90 OBJECT_ATTRIBUTES ObjectAttributes;
91 UNICODE_STRING CriticalDeviceKeyU = RTL_CONSTANT_STRING(L"\\Registry\\Machine\\System\\CurrentControlSet\\Control\\CriticalDeviceDatabase");
92 UNICODE_STRING CompatibleIdU = RTL_CONSTANT_STRING(L"CompatibleIDs");
93 UNICODE_STRING HardwareIdU = RTL_CONSTANT_STRING(L"HardwareID");
94 UNICODE_STRING ServiceU = RTL_CONSTANT_STRING(L"Service");
95 UNICODE_STRING ClassGuidU = RTL_CONSTANT_STRING(L"ClassGUID");
96 PKEY_VALUE_PARTIAL_INFORMATION PartialInfo;
97 ULONG HidLength = 0, CidLength = 0, BufferLength;
98 PWCHAR IdBuffer, OriginalIdBuffer;
99
100 /* Open the device instance key */
101 Status = IopCreateDeviceKeyPath(&DeviceNode->InstancePath, REG_OPTION_NON_VOLATILE, &InstanceKey);
102 if (Status != STATUS_SUCCESS)
103 return;
104
105 Status = ZwQueryValueKey(InstanceKey,
106 &HardwareIdU,
107 KeyValuePartialInformation,
108 NULL,
109 0,
110 &HidLength);
111 if (Status != STATUS_BUFFER_OVERFLOW && Status != STATUS_BUFFER_TOO_SMALL)
112 {
113 ZwClose(InstanceKey);
114 return;
115 }
116
117 Status = ZwQueryValueKey(InstanceKey,
118 &CompatibleIdU,
119 KeyValuePartialInformation,
120 NULL,
121 0,
122 &CidLength);
123 if (Status != STATUS_BUFFER_OVERFLOW && Status != STATUS_BUFFER_TOO_SMALL)
124 {
125 CidLength = 0;
126 }
127
128 BufferLength = HidLength + CidLength;
129 BufferLength -= (((CidLength != 0) ? 2 : 1) * FIELD_OFFSET(KEY_VALUE_PARTIAL_INFORMATION, Data));
130
131 /* Allocate a buffer to hold data from both */
132 OriginalIdBuffer = IdBuffer = ExAllocatePool(PagedPool, BufferLength);
133 if (!IdBuffer)
134 {
135 ZwClose(InstanceKey);
136 return;
137 }
138
139 /* Compute the buffer size */
140 if (HidLength > CidLength)
141 BufferLength = HidLength;
142 else
143 BufferLength = CidLength;
144
145 PartialInfo = ExAllocatePool(PagedPool, BufferLength);
146 if (!PartialInfo)
147 {
148 ZwClose(InstanceKey);
149 ExFreePool(OriginalIdBuffer);
150 return;
151 }
152
153 Status = ZwQueryValueKey(InstanceKey,
154 &HardwareIdU,
155 KeyValuePartialInformation,
156 PartialInfo,
157 HidLength,
158 &HidLength);
159 if (Status != STATUS_SUCCESS)
160 {
161 ExFreePool(PartialInfo);
162 ExFreePool(OriginalIdBuffer);
163 ZwClose(InstanceKey);
164 return;
165 }
166
167 /* Copy in HID info first (without 2nd terminating NULL if CID is present) */
168 HidLength = PartialInfo->DataLength - ((CidLength != 0) ? sizeof(WCHAR) : 0);
169 RtlCopyMemory(IdBuffer, PartialInfo->Data, HidLength);
170
171 if (CidLength != 0)
172 {
173 Status = ZwQueryValueKey(InstanceKey,
174 &CompatibleIdU,
175 KeyValuePartialInformation,
176 PartialInfo,
177 CidLength,
178 &CidLength);
179 if (Status != STATUS_SUCCESS)
180 {
181 ExFreePool(PartialInfo);
182 ExFreePool(OriginalIdBuffer);
183 ZwClose(InstanceKey);
184 return;
185 }
186
187 /* Copy CID next */
188 CidLength = PartialInfo->DataLength;
189 RtlCopyMemory(((PUCHAR)IdBuffer) + HidLength, PartialInfo->Data, CidLength);
190 }
191
192 /* Free our temp buffer */
193 ExFreePool(PartialInfo);
194
195 InitializeObjectAttributes(&ObjectAttributes,
196 &CriticalDeviceKeyU,
197 OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE,
198 NULL,
199 NULL);
200 Status = ZwOpenKey(&CriticalDeviceKey,
201 KEY_ENUMERATE_SUB_KEYS,
202 &ObjectAttributes);
203 if (!NT_SUCCESS(Status))
204 {
205 /* The critical device database doesn't exist because
206 * we're probably in 1st stage setup, but it's ok */
207 ExFreePool(OriginalIdBuffer);
208 ZwClose(InstanceKey);
209 return;
210 }
211
212 while (*IdBuffer)
213 {
214 USHORT StringLength = (USHORT)wcslen(IdBuffer) + 1, Index;
215
216 IopFixupDeviceId(IdBuffer);
217
218 /* Look through all subkeys for a match */
219 for (Index = 0; TRUE; Index++)
220 {
221 ULONG NeededLength;
222 PKEY_BASIC_INFORMATION BasicInfo;
223
224 Status = ZwEnumerateKey(CriticalDeviceKey,
225 Index,
226 KeyBasicInformation,
227 NULL,
228 0,
229 &NeededLength);
230 if (Status == STATUS_NO_MORE_ENTRIES)
231 break;
232 else if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
233 {
234 UNICODE_STRING ChildIdNameU, RegKeyNameU;
235
236 BasicInfo = ExAllocatePool(PagedPool, NeededLength);
237 if (!BasicInfo)
238 {
239 /* No memory */
240 ExFreePool(OriginalIdBuffer);
241 ZwClose(CriticalDeviceKey);
242 ZwClose(InstanceKey);
243 return;
244 }
245
246 Status = ZwEnumerateKey(CriticalDeviceKey,
247 Index,
248 KeyBasicInformation,
249 BasicInfo,
250 NeededLength,
251 &NeededLength);
252 if (Status != STATUS_SUCCESS)
253 {
254 /* This shouldn't happen */
255 ExFreePool(BasicInfo);
256 continue;
257 }
258
259 ChildIdNameU.Buffer = IdBuffer;
260 ChildIdNameU.MaximumLength = ChildIdNameU.Length = (StringLength - 1) * sizeof(WCHAR);
261 RegKeyNameU.Buffer = BasicInfo->Name;
262 RegKeyNameU.MaximumLength = RegKeyNameU.Length = (USHORT)BasicInfo->NameLength;
263
264 if (RtlEqualUnicodeString(&ChildIdNameU, &RegKeyNameU, TRUE))
265 {
266 HANDLE ChildKeyHandle;
267
268 InitializeObjectAttributes(&ObjectAttributes,
269 &ChildIdNameU,
270 OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE,
271 CriticalDeviceKey,
272 NULL);
273
274 Status = ZwOpenKey(&ChildKeyHandle,
275 KEY_QUERY_VALUE,
276 &ObjectAttributes);
277 if (Status != STATUS_SUCCESS)
278 {
279 ExFreePool(BasicInfo);
280 continue;
281 }
282
283 /* Check if there's already a driver installed */
284 Status = ZwQueryValueKey(InstanceKey,
285 &ClassGuidU,
286 KeyValuePartialInformation,
287 NULL,
288 0,
289 &NeededLength);
290 if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
291 {
292 ExFreePool(BasicInfo);
293 continue;
294 }
295
296 Status = ZwQueryValueKey(ChildKeyHandle,
297 &ClassGuidU,
298 KeyValuePartialInformation,
299 NULL,
300 0,
301 &NeededLength);
302 if (Status != STATUS_BUFFER_OVERFLOW && Status != STATUS_BUFFER_TOO_SMALL)
303 {
304 ExFreePool(BasicInfo);
305 continue;
306 }
307
308 PartialInfo = ExAllocatePool(PagedPool, NeededLength);
309 if (!PartialInfo)
310 {
311 ExFreePool(OriginalIdBuffer);
312 ExFreePool(BasicInfo);
313 ZwClose(InstanceKey);
314 ZwClose(ChildKeyHandle);
315 ZwClose(CriticalDeviceKey);
316 return;
317 }
318
319 /* Read ClassGUID entry in the CDDB */
320 Status = ZwQueryValueKey(ChildKeyHandle,
321 &ClassGuidU,
322 KeyValuePartialInformation,
323 PartialInfo,
324 NeededLength,
325 &NeededLength);
326 if (Status != STATUS_SUCCESS)
327 {
328 ExFreePool(BasicInfo);
329 continue;
330 }
331
332 /* Write it to the ENUM key */
333 Status = ZwSetValueKey(InstanceKey,
334 &ClassGuidU,
335 0,
336 REG_SZ,
337 PartialInfo->Data,
338 PartialInfo->DataLength);
339 if (Status != STATUS_SUCCESS)
340 {
341 ExFreePool(BasicInfo);
342 ExFreePool(PartialInfo);
343 ZwClose(ChildKeyHandle);
344 continue;
345 }
346
347 Status = ZwQueryValueKey(ChildKeyHandle,
348 &ServiceU,
349 KeyValuePartialInformation,
350 NULL,
351 0,
352 &NeededLength);
353 if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
354 {
355 ExFreePool(PartialInfo);
356 PartialInfo = ExAllocatePool(PagedPool, NeededLength);
357 if (!PartialInfo)
358 {
359 ExFreePool(OriginalIdBuffer);
360 ExFreePool(BasicInfo);
361 ZwClose(InstanceKey);
362 ZwClose(ChildKeyHandle);
363 ZwClose(CriticalDeviceKey);
364 return;
365 }
366
367 /* Read the service entry from the CDDB */
368 Status = ZwQueryValueKey(ChildKeyHandle,
369 &ServiceU,
370 KeyValuePartialInformation,
371 PartialInfo,
372 NeededLength,
373 &NeededLength);
374 if (Status != STATUS_SUCCESS)
375 {
376 ExFreePool(BasicInfo);
377 ExFreePool(PartialInfo);
378 ZwClose(ChildKeyHandle);
379 continue;
380 }
381
382 /* Write it to the ENUM key */
383 Status = ZwSetValueKey(InstanceKey,
384 &ServiceU,
385 0,
386 REG_SZ,
387 PartialInfo->Data,
388 PartialInfo->DataLength);
389 if (Status != STATUS_SUCCESS)
390 {
391 ExFreePool(BasicInfo);
392 ExFreePool(PartialInfo);
393 ZwClose(ChildKeyHandle);
394 continue;
395 }
396
397 DPRINT("Installed service '%S' for critical device '%wZ'\n", PartialInfo->Data, &ChildIdNameU);
398 }
399 else
400 {
401 DPRINT1("Installed NULL service for critical device '%wZ'\n", &ChildIdNameU);
402 }
403
404 ExFreePool(OriginalIdBuffer);
405 ExFreePool(PartialInfo);
406 ExFreePool(BasicInfo);
407 ZwClose(InstanceKey);
408 ZwClose(ChildKeyHandle);
409 ZwClose(CriticalDeviceKey);
410
411 /* That's it */
412 return;
413 }
414
415 ExFreePool(BasicInfo);
416 }
417 else
418 {
419 /* Umm, not sure what happened here */
420 continue;
421 }
422 }
423
424 /* Advance to the next ID */
425 IdBuffer += StringLength;
426 }
427
428 ExFreePool(OriginalIdBuffer);
429 ZwClose(InstanceKey);
430 ZwClose(CriticalDeviceKey);
431 }
432
433 NTSTATUS
434 FASTCALL
435 IopInitializeDevice(PDEVICE_NODE DeviceNode,
436 PDRIVER_OBJECT DriverObject)
437 {
438 PDEVICE_OBJECT Fdo;
439 NTSTATUS Status;
440
441 if (!DriverObject)
442 {
443 /* Special case for bus driven devices */
444 DeviceNode->Flags |= DNF_ADDED;
445 return STATUS_SUCCESS;
446 }
447
448 if (!DriverObject->DriverExtension->AddDevice)
449 {
450 DeviceNode->Flags |= DNF_LEGACY_DRIVER;
451 }
452
453 if (DeviceNode->Flags & DNF_LEGACY_DRIVER)
454 {
455 DeviceNode->Flags |= (DNF_ADDED | DNF_STARTED);
456 return STATUS_SUCCESS;
457 }
458
459 /* This is a Plug and Play driver */
460 DPRINT("Plug and Play driver found\n");
461 ASSERT(DeviceNode->PhysicalDeviceObject);
462
463 DPRINT("Calling %wZ->AddDevice(%wZ)\n",
464 &DriverObject->DriverName,
465 &DeviceNode->InstancePath);
466 Status = DriverObject->DriverExtension->AddDevice(
467 DriverObject, DeviceNode->PhysicalDeviceObject);
468 if (!NT_SUCCESS(Status))
469 {
470 DPRINT1("%wZ->AddDevice(%wZ) failed with status 0x%x\n",
471 &DriverObject->DriverName,
472 &DeviceNode->InstancePath,
473 Status);
474 IopDeviceNodeSetFlag(DeviceNode, DNF_DISABLED);
475 DeviceNode->Problem = CM_PROB_FAILED_ADD;
476 return Status;
477 }
478
479 Fdo = IoGetAttachedDeviceReference(DeviceNode->PhysicalDeviceObject);
480
481 /* Check if we have a ACPI device (needed for power management) */
482 if (Fdo->DeviceType == FILE_DEVICE_ACPI)
483 {
484 static BOOLEAN SystemPowerDeviceNodeCreated = FALSE;
485
486 /* There can be only one system power device */
487 if (!SystemPowerDeviceNodeCreated)
488 {
489 PopSystemPowerDeviceNode = DeviceNode;
490 ObReferenceObject(PopSystemPowerDeviceNode->PhysicalDeviceObject);
491 SystemPowerDeviceNodeCreated = TRUE;
492 }
493 }
494
495 ObDereferenceObject(Fdo);
496
497 IopDeviceNodeSetFlag(DeviceNode, DNF_ADDED);
498
499 return STATUS_SUCCESS;
500 }
501
502 static
503 NTSTATUS
504 NTAPI
505 IopSendEject(IN PDEVICE_OBJECT DeviceObject)
506 {
507 IO_STACK_LOCATION Stack;
508 PVOID Dummy;
509
510 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
511 Stack.MajorFunction = IRP_MJ_PNP;
512 Stack.MinorFunction = IRP_MN_EJECT;
513
514 return IopSynchronousCall(DeviceObject, &Stack, &Dummy);
515 }
516
517 static
518 VOID
519 NTAPI
520 IopSendSurpriseRemoval(IN PDEVICE_OBJECT DeviceObject)
521 {
522 IO_STACK_LOCATION Stack;
523 PVOID Dummy;
524
525 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
526 Stack.MajorFunction = IRP_MJ_PNP;
527 Stack.MinorFunction = IRP_MN_SURPRISE_REMOVAL;
528
529 /* Drivers should never fail a IRP_MN_SURPRISE_REMOVAL request */
530 IopSynchronousCall(DeviceObject, &Stack, &Dummy);
531 }
532
533 static
534 NTSTATUS
535 NTAPI
536 IopQueryRemoveDevice(IN PDEVICE_OBJECT DeviceObject)
537 {
538 PDEVICE_NODE DeviceNode = IopGetDeviceNode(DeviceObject);
539 IO_STACK_LOCATION Stack;
540 PVOID Dummy;
541 NTSTATUS Status;
542
543 ASSERT(DeviceNode);
544
545 IopQueueTargetDeviceEvent(&GUID_DEVICE_REMOVE_PENDING,
546 &DeviceNode->InstancePath);
547
548 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
549 Stack.MajorFunction = IRP_MJ_PNP;
550 Stack.MinorFunction = IRP_MN_QUERY_REMOVE_DEVICE;
551
552 Status = IopSynchronousCall(DeviceObject, &Stack, &Dummy);
553
554 IopNotifyPlugPlayNotification(DeviceObject,
555 EventCategoryTargetDeviceChange,
556 &GUID_TARGET_DEVICE_QUERY_REMOVE,
557 NULL,
558 NULL);
559
560 if (!NT_SUCCESS(Status))
561 {
562 DPRINT1("Removal vetoed by %wZ\n", &DeviceNode->InstancePath);
563 IopQueueTargetDeviceEvent(&GUID_DEVICE_REMOVAL_VETOED,
564 &DeviceNode->InstancePath);
565 }
566
567 return Status;
568 }
569
570 static
571 NTSTATUS
572 NTAPI
573 IopQueryStopDevice(IN PDEVICE_OBJECT DeviceObject)
574 {
575 IO_STACK_LOCATION Stack;
576 PVOID Dummy;
577
578 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
579 Stack.MajorFunction = IRP_MJ_PNP;
580 Stack.MinorFunction = IRP_MN_QUERY_STOP_DEVICE;
581
582 return IopSynchronousCall(DeviceObject, &Stack, &Dummy);
583 }
584
585 static
586 VOID
587 NTAPI
588 IopSendRemoveDevice(IN PDEVICE_OBJECT DeviceObject)
589 {
590 IO_STACK_LOCATION Stack;
591 PVOID Dummy;
592 PDEVICE_NODE DeviceNode = IopGetDeviceNode(DeviceObject);
593
594 /* Drop all our state for this device in case it isn't really going away */
595 DeviceNode->Flags &= DNF_ENUMERATED | DNF_PROCESSED;
596
597 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
598 Stack.MajorFunction = IRP_MJ_PNP;
599 Stack.MinorFunction = IRP_MN_REMOVE_DEVICE;
600
601 /* Drivers should never fail a IRP_MN_REMOVE_DEVICE request */
602 IopSynchronousCall(DeviceObject, &Stack, &Dummy);
603
604 IopNotifyPlugPlayNotification(DeviceObject,
605 EventCategoryTargetDeviceChange,
606 &GUID_TARGET_DEVICE_REMOVE_COMPLETE,
607 NULL,
608 NULL);
609 ObDereferenceObject(DeviceObject);
610 }
611
612 static
613 VOID
614 NTAPI
615 IopCancelRemoveDevice(IN PDEVICE_OBJECT DeviceObject)
616 {
617 IO_STACK_LOCATION Stack;
618 PVOID Dummy;
619
620 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
621 Stack.MajorFunction = IRP_MJ_PNP;
622 Stack.MinorFunction = IRP_MN_CANCEL_REMOVE_DEVICE;
623
624 /* Drivers should never fail a IRP_MN_CANCEL_REMOVE_DEVICE request */
625 IopSynchronousCall(DeviceObject, &Stack, &Dummy);
626
627 IopNotifyPlugPlayNotification(DeviceObject,
628 EventCategoryTargetDeviceChange,
629 &GUID_TARGET_DEVICE_REMOVE_CANCELLED,
630 NULL,
631 NULL);
632 }
633
634 static
635 VOID
636 NTAPI
637 IopSendStopDevice(IN PDEVICE_OBJECT DeviceObject)
638 {
639 IO_STACK_LOCATION Stack;
640 PVOID Dummy;
641
642 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
643 Stack.MajorFunction = IRP_MJ_PNP;
644 Stack.MinorFunction = IRP_MN_STOP_DEVICE;
645
646 /* Drivers should never fail a IRP_MN_STOP_DEVICE request */
647 IopSynchronousCall(DeviceObject, &Stack, &Dummy);
648 }
649
650 VOID
651 NTAPI
652 IopStartDevice2(IN PDEVICE_OBJECT DeviceObject)
653 {
654 IO_STACK_LOCATION Stack;
655 PDEVICE_NODE DeviceNode;
656 NTSTATUS Status;
657 PVOID Dummy;
658 DEVICE_CAPABILITIES DeviceCapabilities;
659
660 /* Get the device node */
661 DeviceNode = IopGetDeviceNode(DeviceObject);
662
663 ASSERT(!(DeviceNode->Flags & DNF_DISABLED));
664
665 /* Build the I/O stack location */
666 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
667 Stack.MajorFunction = IRP_MJ_PNP;
668 Stack.MinorFunction = IRP_MN_START_DEVICE;
669
670 Stack.Parameters.StartDevice.AllocatedResources =
671 DeviceNode->ResourceList;
672 Stack.Parameters.StartDevice.AllocatedResourcesTranslated =
673 DeviceNode->ResourceListTranslated;
674
675 /* Do the call */
676 Status = IopSynchronousCall(DeviceObject, &Stack, &Dummy);
677 if (!NT_SUCCESS(Status))
678 {
679 /* Send an IRP_MN_REMOVE_DEVICE request */
680 IopRemoveDevice(DeviceNode);
681
682 /* Set the appropriate flag */
683 DeviceNode->Flags |= DNF_START_FAILED;
684 DeviceNode->Problem = CM_PROB_FAILED_START;
685
686 DPRINT1("Warning: PnP Start failed (%wZ) [Status: 0x%x]\n", &DeviceNode->InstancePath, Status);
687 return;
688 }
689
690 DPRINT("Sending IRP_MN_QUERY_CAPABILITIES to device stack (after start)\n");
691
692 Status = IopQueryDeviceCapabilities(DeviceNode, &DeviceCapabilities);
693 if (!NT_SUCCESS(Status))
694 {
695 DPRINT("IopInitiatePnpIrp() failed (Status 0x%08lx)\n", Status);
696 }
697
698 /* Invalidate device state so IRP_MN_QUERY_PNP_DEVICE_STATE is sent */
699 IoInvalidateDeviceState(DeviceObject);
700
701 /* Otherwise, mark us as started */
702 DeviceNode->Flags |= DNF_STARTED;
703 DeviceNode->Flags &= ~DNF_STOPPED;
704
705 /* We now need enumeration */
706 DeviceNode->Flags |= DNF_NEED_ENUMERATION_ONLY;
707 }
708
709 NTSTATUS
710 NTAPI
711 IopStartAndEnumerateDevice(IN PDEVICE_NODE DeviceNode)
712 {
713 PDEVICE_OBJECT DeviceObject;
714 NTSTATUS Status;
715 PAGED_CODE();
716
717 /* Sanity check */
718 ASSERT((DeviceNode->Flags & DNF_ADDED));
719 ASSERT((DeviceNode->Flags & (DNF_RESOURCE_ASSIGNED |
720 DNF_RESOURCE_REPORTED |
721 DNF_NO_RESOURCE_REQUIRED)));
722
723 /* Get the device object */
724 DeviceObject = DeviceNode->PhysicalDeviceObject;
725
726 /* Check if we're not started yet */
727 if (!(DeviceNode->Flags & DNF_STARTED))
728 {
729 /* Start us */
730 IopStartDevice2(DeviceObject);
731 }
732
733 /* Do we need to query IDs? This happens in the case of manual reporting */
734 #if 0
735 if (DeviceNode->Flags & DNF_NEED_QUERY_IDS)
736 {
737 DPRINT1("Warning: Device node has DNF_NEED_QUERY_IDS\n");
738 /* And that case shouldn't happen yet */
739 ASSERT(FALSE);
740 }
741 #endif
742
743 /* Make sure we're started, and check if we need enumeration */
744 if ((DeviceNode->Flags & DNF_STARTED) &&
745 (DeviceNode->Flags & DNF_NEED_ENUMERATION_ONLY))
746 {
747 /* Enumerate us */
748 IoSynchronousInvalidateDeviceRelations(DeviceObject, BusRelations);
749 Status = STATUS_SUCCESS;
750 }
751 else
752 {
753 /* Nothing to do */
754 Status = STATUS_SUCCESS;
755 }
756
757 /* Return */
758 return Status;
759 }
760
761 NTSTATUS
762 IopStopDevice(
763 PDEVICE_NODE DeviceNode)
764 {
765 NTSTATUS Status;
766
767 DPRINT("Stopping device: %wZ\n", &DeviceNode->InstancePath);
768
769 Status = IopQueryStopDevice(DeviceNode->PhysicalDeviceObject);
770 if (NT_SUCCESS(Status))
771 {
772 IopSendStopDevice(DeviceNode->PhysicalDeviceObject);
773
774 DeviceNode->Flags &= ~(DNF_STARTED | DNF_START_REQUEST_PENDING);
775 DeviceNode->Flags |= DNF_STOPPED;
776
777 return STATUS_SUCCESS;
778 }
779
780 return Status;
781 }
782
783 NTSTATUS
784 IopStartDevice(
785 PDEVICE_NODE DeviceNode)
786 {
787 NTSTATUS Status;
788 HANDLE InstanceHandle = NULL, ControlHandle = NULL;
789 UNICODE_STRING KeyName, ValueString;
790 OBJECT_ATTRIBUTES ObjectAttributes;
791
792 if (DeviceNode->Flags & DNF_DISABLED)
793 return STATUS_SUCCESS;
794
795 Status = IopAssignDeviceResources(DeviceNode);
796 if (!NT_SUCCESS(Status))
797 goto ByeBye;
798
799 /* New PnP ABI */
800 IopStartAndEnumerateDevice(DeviceNode);
801
802 /* FIX: Should be done in new device instance code */
803 Status = IopCreateDeviceKeyPath(&DeviceNode->InstancePath, REG_OPTION_NON_VOLATILE, &InstanceHandle);
804 if (!NT_SUCCESS(Status))
805 goto ByeBye;
806
807 /* FIX: Should be done in IoXxxPrepareDriverLoading */
808 // {
809 RtlInitUnicodeString(&KeyName, L"Control");
810 InitializeObjectAttributes(&ObjectAttributes,
811 &KeyName,
812 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
813 InstanceHandle,
814 NULL);
815 Status = ZwCreateKey(&ControlHandle, KEY_SET_VALUE, &ObjectAttributes, 0, NULL, REG_OPTION_VOLATILE, NULL);
816 if (!NT_SUCCESS(Status))
817 goto ByeBye;
818
819 RtlInitUnicodeString(&KeyName, L"ActiveService");
820 ValueString = DeviceNode->ServiceName;
821 if (!ValueString.Buffer)
822 RtlInitUnicodeString(&ValueString, L"");
823 Status = ZwSetValueKey(ControlHandle, &KeyName, 0, REG_SZ, ValueString.Buffer, ValueString.Length + sizeof(UNICODE_NULL));
824 // }
825
826 ByeBye:
827 if (ControlHandle != NULL)
828 ZwClose(ControlHandle);
829
830 if (InstanceHandle != NULL)
831 ZwClose(InstanceHandle);
832
833 return Status;
834 }
835
836 NTSTATUS
837 NTAPI
838 IopQueryDeviceCapabilities(PDEVICE_NODE DeviceNode,
839 PDEVICE_CAPABILITIES DeviceCaps)
840 {
841 IO_STATUS_BLOCK StatusBlock;
842 IO_STACK_LOCATION Stack;
843 NTSTATUS Status;
844 HANDLE InstanceKey;
845 UNICODE_STRING ValueName;
846
847 /* Set up the Header */
848 RtlZeroMemory(DeviceCaps, sizeof(DEVICE_CAPABILITIES));
849 DeviceCaps->Size = sizeof(DEVICE_CAPABILITIES);
850 DeviceCaps->Version = 1;
851 DeviceCaps->Address = -1;
852 DeviceCaps->UINumber = -1;
853
854 /* Set up the Stack */
855 RtlZeroMemory(&Stack, sizeof(IO_STACK_LOCATION));
856 Stack.Parameters.DeviceCapabilities.Capabilities = DeviceCaps;
857
858 /* Send the IRP */
859 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
860 &StatusBlock,
861 IRP_MN_QUERY_CAPABILITIES,
862 &Stack);
863 if (!NT_SUCCESS(Status))
864 {
865 if (Status != STATUS_NOT_SUPPORTED)
866 {
867 DPRINT1("IRP_MN_QUERY_CAPABILITIES failed with status 0x%lx\n", Status);
868 }
869 return Status;
870 }
871
872 DeviceNode->CapabilityFlags = *(PULONG)((ULONG_PTR)&DeviceCaps->Version + sizeof(DeviceCaps->Version));
873
874 if (DeviceCaps->NoDisplayInUI)
875 DeviceNode->UserFlags |= DNUF_DONT_SHOW_IN_UI;
876 else
877 DeviceNode->UserFlags &= ~DNUF_DONT_SHOW_IN_UI;
878
879 Status = IopCreateDeviceKeyPath(&DeviceNode->InstancePath, REG_OPTION_NON_VOLATILE, &InstanceKey);
880 if (NT_SUCCESS(Status))
881 {
882 /* Set 'Capabilities' value */
883 RtlInitUnicodeString(&ValueName, L"Capabilities");
884 Status = ZwSetValueKey(InstanceKey,
885 &ValueName,
886 0,
887 REG_DWORD,
888 &DeviceNode->CapabilityFlags,
889 sizeof(ULONG));
890
891 /* Set 'UINumber' value */
892 if (DeviceCaps->UINumber != MAXULONG)
893 {
894 RtlInitUnicodeString(&ValueName, L"UINumber");
895 Status = ZwSetValueKey(InstanceKey,
896 &ValueName,
897 0,
898 REG_DWORD,
899 &DeviceCaps->UINumber,
900 sizeof(ULONG));
901 }
902
903 ZwClose(InstanceKey);
904 }
905
906 return Status;
907 }
908
909 static
910 VOID
911 NTAPI
912 IopDeviceActionWorker(
913 _In_ PVOID Context)
914 {
915 PLIST_ENTRY ListEntry;
916 PDEVICE_ACTION_DATA Data;
917 KIRQL OldIrql;
918
919 KeAcquireSpinLock(&IopDeviceActionLock, &OldIrql);
920 while (!IsListEmpty(&IopDeviceActionRequestList))
921 {
922 ListEntry = RemoveHeadList(&IopDeviceActionRequestList);
923 KeReleaseSpinLock(&IopDeviceActionLock, OldIrql);
924 Data = CONTAINING_RECORD(ListEntry,
925 DEVICE_ACTION_DATA,
926 RequestListEntry);
927
928 IoSynchronousInvalidateDeviceRelations(Data->DeviceObject,
929 Data->Type);
930
931 ObDereferenceObject(Data->DeviceObject);
932 ExFreePoolWithTag(Data, TAG_IO);
933 KeAcquireSpinLock(&IopDeviceActionLock, &OldIrql);
934 }
935 IopDeviceActionInProgress = FALSE;
936 KeReleaseSpinLock(&IopDeviceActionLock, OldIrql);
937 }
938
939 NTSTATUS
940 IopGetSystemPowerDeviceObject(PDEVICE_OBJECT *DeviceObject)
941 {
942 KIRQL OldIrql;
943
944 if (PopSystemPowerDeviceNode)
945 {
946 KeAcquireSpinLock(&IopDeviceTreeLock, &OldIrql);
947 *DeviceObject = PopSystemPowerDeviceNode->PhysicalDeviceObject;
948 KeReleaseSpinLock(&IopDeviceTreeLock, OldIrql);
949
950 return STATUS_SUCCESS;
951 }
952
953 return STATUS_UNSUCCESSFUL;
954 }
955
956 USHORT
957 NTAPI
958 IopGetBusTypeGuidIndex(LPGUID BusTypeGuid)
959 {
960 USHORT i = 0, FoundIndex = 0xFFFF;
961 ULONG NewSize;
962 PVOID NewList;
963
964 /* Acquire the lock */
965 ExAcquireFastMutex(&PnpBusTypeGuidList->Lock);
966
967 /* Loop all entries */
968 while (i < PnpBusTypeGuidList->GuidCount)
969 {
970 /* Try to find a match */
971 if (RtlCompareMemory(BusTypeGuid,
972 &PnpBusTypeGuidList->Guids[i],
973 sizeof(GUID)) == sizeof(GUID))
974 {
975 /* Found it */
976 FoundIndex = i;
977 goto Quickie;
978 }
979 i++;
980 }
981
982 /* Check if we have to grow the list */
983 if (PnpBusTypeGuidList->GuidCount)
984 {
985 /* Calculate the new size */
986 NewSize = sizeof(IO_BUS_TYPE_GUID_LIST) +
987 (sizeof(GUID) * PnpBusTypeGuidList->GuidCount);
988
989 /* Allocate the new copy */
990 NewList = ExAllocatePool(PagedPool, NewSize);
991
992 if (!NewList) {
993 /* Fail */
994 ExFreePool(PnpBusTypeGuidList);
995 goto Quickie;
996 }
997
998 /* Now copy them, decrease the size too */
999 NewSize -= sizeof(GUID);
1000 RtlCopyMemory(NewList, PnpBusTypeGuidList, NewSize);
1001
1002 /* Free the old list */
1003 ExFreePool(PnpBusTypeGuidList);
1004
1005 /* Use the new buffer */
1006 PnpBusTypeGuidList = NewList;
1007 }
1008
1009 /* Copy the new GUID */
1010 RtlCopyMemory(&PnpBusTypeGuidList->Guids[PnpBusTypeGuidList->GuidCount],
1011 BusTypeGuid,
1012 sizeof(GUID));
1013
1014 /* The new entry is the index */
1015 FoundIndex = (USHORT)PnpBusTypeGuidList->GuidCount;
1016 PnpBusTypeGuidList->GuidCount++;
1017
1018 Quickie:
1019 ExReleaseFastMutex(&PnpBusTypeGuidList->Lock);
1020 return FoundIndex;
1021 }
1022
1023 /*
1024 * DESCRIPTION
1025 * Creates a device node
1026 *
1027 * ARGUMENTS
1028 * ParentNode = Pointer to parent device node
1029 * PhysicalDeviceObject = Pointer to PDO for device object. Pass NULL
1030 * to have the root device node create one
1031 * (eg. for legacy drivers)
1032 * DeviceNode = Pointer to storage for created device node
1033 *
1034 * RETURN VALUE
1035 * Status
1036 */
1037 NTSTATUS
1038 IopCreateDeviceNode(PDEVICE_NODE ParentNode,
1039 PDEVICE_OBJECT PhysicalDeviceObject,
1040 PUNICODE_STRING ServiceName,
1041 PDEVICE_NODE *DeviceNode)
1042 {
1043 PDEVICE_NODE Node;
1044 NTSTATUS Status;
1045 KIRQL OldIrql;
1046 UNICODE_STRING FullServiceName;
1047 UNICODE_STRING LegacyPrefix = RTL_CONSTANT_STRING(L"LEGACY_");
1048 UNICODE_STRING UnknownDeviceName = RTL_CONSTANT_STRING(L"UNKNOWN");
1049 UNICODE_STRING KeyName, ClassName;
1050 PUNICODE_STRING ServiceName1;
1051 ULONG LegacyValue;
1052 UNICODE_STRING ClassGUID;
1053 HANDLE InstanceHandle;
1054
1055 DPRINT("ParentNode 0x%p PhysicalDeviceObject 0x%p ServiceName %wZ\n",
1056 ParentNode, PhysicalDeviceObject, ServiceName);
1057
1058 Node = ExAllocatePoolWithTag(NonPagedPool, sizeof(DEVICE_NODE), TAG_IO_DEVNODE);
1059 if (!Node)
1060 {
1061 return STATUS_INSUFFICIENT_RESOURCES;
1062 }
1063
1064 RtlZeroMemory(Node, sizeof(DEVICE_NODE));
1065
1066 if (!ServiceName)
1067 ServiceName1 = &UnknownDeviceName;
1068 else
1069 ServiceName1 = ServiceName;
1070
1071 if (!PhysicalDeviceObject)
1072 {
1073 FullServiceName.MaximumLength = LegacyPrefix.Length + ServiceName1->Length + sizeof(UNICODE_NULL);
1074 FullServiceName.Length = 0;
1075 FullServiceName.Buffer = ExAllocatePool(PagedPool, FullServiceName.MaximumLength);
1076 if (!FullServiceName.Buffer)
1077 {
1078 ExFreePoolWithTag(Node, TAG_IO_DEVNODE);
1079 return STATUS_INSUFFICIENT_RESOURCES;
1080 }
1081
1082 RtlAppendUnicodeStringToString(&FullServiceName, &LegacyPrefix);
1083 RtlAppendUnicodeStringToString(&FullServiceName, ServiceName1);
1084 RtlUpcaseUnicodeString(&FullServiceName, &FullServiceName, FALSE);
1085
1086 Status = PnpRootCreateDevice(&FullServiceName, NULL, &PhysicalDeviceObject, &Node->InstancePath);
1087 if (!NT_SUCCESS(Status))
1088 {
1089 DPRINT1("PnpRootCreateDevice() failed with status 0x%08X\n", Status);
1090 ExFreePool(FullServiceName.Buffer);
1091 ExFreePoolWithTag(Node, TAG_IO_DEVNODE);
1092 return Status;
1093 }
1094
1095 /* Create the device key for legacy drivers */
1096 Status = IopCreateDeviceKeyPath(&Node->InstancePath, REG_OPTION_VOLATILE, &InstanceHandle);
1097 if (!NT_SUCCESS(Status))
1098 {
1099 ExFreePool(FullServiceName.Buffer);
1100 ExFreePoolWithTag(Node, TAG_IO_DEVNODE);
1101 return Status;
1102 }
1103
1104 Node->ServiceName.MaximumLength = ServiceName1->Length + sizeof(UNICODE_NULL);
1105 Node->ServiceName.Length = 0;
1106 Node->ServiceName.Buffer = ExAllocatePool(PagedPool, Node->ServiceName.MaximumLength);
1107 if (!Node->ServiceName.Buffer)
1108 {
1109 ZwClose(InstanceHandle);
1110 ExFreePool(FullServiceName.Buffer);
1111 ExFreePoolWithTag(Node, TAG_IO_DEVNODE);
1112 return Status;
1113 }
1114
1115 RtlCopyUnicodeString(&Node->ServiceName, ServiceName1);
1116
1117 if (ServiceName)
1118 {
1119 RtlInitUnicodeString(&KeyName, L"Service");
1120 Status = ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_SZ, ServiceName->Buffer, ServiceName->Length + sizeof(UNICODE_NULL));
1121 }
1122
1123 if (NT_SUCCESS(Status))
1124 {
1125 RtlInitUnicodeString(&KeyName, L"Legacy");
1126 LegacyValue = 1;
1127 Status = ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_DWORD, &LegacyValue, sizeof(LegacyValue));
1128
1129 RtlInitUnicodeString(&KeyName, L"ConfigFlags");
1130 LegacyValue = 0;
1131 ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_DWORD, &LegacyValue, sizeof(LegacyValue));
1132
1133 if (NT_SUCCESS(Status))
1134 {
1135 RtlInitUnicodeString(&KeyName, L"Class");
1136 RtlInitUnicodeString(&ClassName, L"LegacyDriver");
1137 Status = ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_SZ, ClassName.Buffer, ClassName.Length + sizeof(UNICODE_NULL));
1138 if (NT_SUCCESS(Status))
1139 {
1140 RtlInitUnicodeString(&KeyName, L"ClassGUID");
1141 RtlInitUnicodeString(&ClassGUID, L"{8ECC055D-047F-11D1-A537-0000F8753ED1}");
1142 Status = ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_SZ, ClassGUID.Buffer, ClassGUID.Length + sizeof(UNICODE_NULL));
1143 if (NT_SUCCESS(Status))
1144 {
1145 // FIXME: Retrieve the real "description" by looking at the "DisplayName" string
1146 // of the corresponding CurrentControlSet\Services\xxx entry for this driver.
1147 RtlInitUnicodeString(&KeyName, L"DeviceDesc");
1148 Status = ZwSetValueKey(InstanceHandle, &KeyName, 0, REG_SZ, ServiceName1->Buffer, ServiceName1->Length + sizeof(UNICODE_NULL));
1149 }
1150 }
1151 }
1152 }
1153
1154 ZwClose(InstanceHandle);
1155 ExFreePool(FullServiceName.Buffer);
1156
1157 if (!NT_SUCCESS(Status))
1158 {
1159 ExFreePool(Node->ServiceName.Buffer);
1160 ExFreePoolWithTag(Node, TAG_IO_DEVNODE);
1161 return Status;
1162 }
1163
1164 IopDeviceNodeSetFlag(Node, DNF_LEGACY_DRIVER);
1165 IopDeviceNodeSetFlag(Node, DNF_PROCESSED);
1166 IopDeviceNodeSetFlag(Node, DNF_ADDED);
1167 IopDeviceNodeSetFlag(Node, DNF_STARTED);
1168 }
1169
1170 Node->PhysicalDeviceObject = PhysicalDeviceObject;
1171
1172 ((PEXTENDED_DEVOBJ_EXTENSION)PhysicalDeviceObject->DeviceObjectExtension)->DeviceNode = Node;
1173
1174 if (ParentNode)
1175 {
1176 KeAcquireSpinLock(&IopDeviceTreeLock, &OldIrql);
1177 Node->Parent = ParentNode;
1178 Node->Sibling = NULL;
1179 if (ParentNode->LastChild == NULL)
1180 {
1181 ParentNode->Child = Node;
1182 ParentNode->LastChild = Node;
1183 }
1184 else
1185 {
1186 ParentNode->LastChild->Sibling = Node;
1187 ParentNode->LastChild = Node;
1188 }
1189 KeReleaseSpinLock(&IopDeviceTreeLock, OldIrql);
1190 Node->Level = ParentNode->Level + 1;
1191 }
1192
1193 PhysicalDeviceObject->Flags &= ~DO_DEVICE_INITIALIZING;
1194
1195 *DeviceNode = Node;
1196
1197 return STATUS_SUCCESS;
1198 }
1199
1200 NTSTATUS
1201 IopFreeDeviceNode(PDEVICE_NODE DeviceNode)
1202 {
1203 KIRQL OldIrql;
1204 PDEVICE_NODE PrevSibling = NULL;
1205
1206 /* All children must be deleted before a parent is deleted */
1207 ASSERT(!DeviceNode->Child);
1208 ASSERT(DeviceNode->PhysicalDeviceObject);
1209
1210 KeAcquireSpinLock(&IopDeviceTreeLock, &OldIrql);
1211
1212 /* Get previous sibling */
1213 if (DeviceNode->Parent && DeviceNode->Parent->Child != DeviceNode)
1214 {
1215 PrevSibling = DeviceNode->Parent->Child;
1216 while (PrevSibling->Sibling != DeviceNode)
1217 PrevSibling = PrevSibling->Sibling;
1218 }
1219
1220 /* Unlink from parent if it exists */
1221 if (DeviceNode->Parent)
1222 {
1223 if (DeviceNode->Parent->LastChild == DeviceNode)
1224 {
1225 DeviceNode->Parent->LastChild = PrevSibling;
1226 if (PrevSibling)
1227 PrevSibling->Sibling = NULL;
1228 }
1229 if (DeviceNode->Parent->Child == DeviceNode)
1230 DeviceNode->Parent->Child = DeviceNode->Sibling;
1231 }
1232
1233 /* Unlink from sibling list */
1234 if (PrevSibling)
1235 PrevSibling->Sibling = DeviceNode->Sibling;
1236
1237 KeReleaseSpinLock(&IopDeviceTreeLock, OldIrql);
1238
1239 RtlFreeUnicodeString(&DeviceNode->InstancePath);
1240
1241 RtlFreeUnicodeString(&DeviceNode->ServiceName);
1242
1243 if (DeviceNode->ResourceList)
1244 {
1245 ExFreePool(DeviceNode->ResourceList);
1246 }
1247
1248 if (DeviceNode->ResourceListTranslated)
1249 {
1250 ExFreePool(DeviceNode->ResourceListTranslated);
1251 }
1252
1253 if (DeviceNode->ResourceRequirements)
1254 {
1255 ExFreePool(DeviceNode->ResourceRequirements);
1256 }
1257
1258 if (DeviceNode->BootResources)
1259 {
1260 ExFreePool(DeviceNode->BootResources);
1261 }
1262
1263 ((PEXTENDED_DEVOBJ_EXTENSION)DeviceNode->PhysicalDeviceObject->DeviceObjectExtension)->DeviceNode = NULL;
1264 ExFreePoolWithTag(DeviceNode, TAG_IO_DEVNODE);
1265
1266 return STATUS_SUCCESS;
1267 }
1268
1269 NTSTATUS
1270 NTAPI
1271 IopSynchronousCall(IN PDEVICE_OBJECT DeviceObject,
1272 IN PIO_STACK_LOCATION IoStackLocation,
1273 OUT PVOID *Information)
1274 {
1275 PIRP Irp;
1276 PIO_STACK_LOCATION IrpStack;
1277 IO_STATUS_BLOCK IoStatusBlock;
1278 KEVENT Event;
1279 NTSTATUS Status;
1280 PDEVICE_OBJECT TopDeviceObject;
1281 PAGED_CODE();
1282
1283 /* Call the top of the device stack */
1284 TopDeviceObject = IoGetAttachedDeviceReference(DeviceObject);
1285
1286 /* Allocate an IRP */
1287 Irp = IoAllocateIrp(TopDeviceObject->StackSize, FALSE);
1288 if (!Irp) return STATUS_INSUFFICIENT_RESOURCES;
1289
1290 /* Initialize to failure */
1291 Irp->IoStatus.Status = IoStatusBlock.Status = STATUS_NOT_SUPPORTED;
1292 Irp->IoStatus.Information = IoStatusBlock.Information = 0;
1293
1294 /* Special case for IRP_MN_FILTER_RESOURCE_REQUIREMENTS */
1295 if (IoStackLocation->MinorFunction == IRP_MN_FILTER_RESOURCE_REQUIREMENTS)
1296 {
1297 /* Copy the resource requirements list into the IOSB */
1298 Irp->IoStatus.Information =
1299 IoStatusBlock.Information = (ULONG_PTR)IoStackLocation->Parameters.FilterResourceRequirements.IoResourceRequirementList;
1300 }
1301
1302 /* Initialize the event */
1303 KeInitializeEvent(&Event, SynchronizationEvent, FALSE);
1304
1305 /* Set them up */
1306 Irp->UserIosb = &IoStatusBlock;
1307 Irp->UserEvent = &Event;
1308
1309 /* Queue the IRP */
1310 Irp->Tail.Overlay.Thread = PsGetCurrentThread();
1311 IoQueueThreadIrp(Irp);
1312
1313 /* Copy-in the stack */
1314 IrpStack = IoGetNextIrpStackLocation(Irp);
1315 *IrpStack = *IoStackLocation;
1316
1317 /* Call the driver */
1318 Status = IoCallDriver(TopDeviceObject, Irp);
1319 if (Status == STATUS_PENDING)
1320 {
1321 /* Wait for it */
1322 KeWaitForSingleObject(&Event,
1323 Executive,
1324 KernelMode,
1325 FALSE,
1326 NULL);
1327 Status = IoStatusBlock.Status;
1328 }
1329
1330 /* Remove the reference */
1331 ObDereferenceObject(TopDeviceObject);
1332
1333 /* Return the information */
1334 *Information = (PVOID)IoStatusBlock.Information;
1335 return Status;
1336 }
1337
1338 NTSTATUS
1339 NTAPI
1340 IopInitiatePnpIrp(IN PDEVICE_OBJECT DeviceObject,
1341 IN OUT PIO_STATUS_BLOCK IoStatusBlock,
1342 IN UCHAR MinorFunction,
1343 IN PIO_STACK_LOCATION Stack OPTIONAL)
1344 {
1345 IO_STACK_LOCATION IoStackLocation;
1346
1347 /* Fill out the stack information */
1348 RtlZeroMemory(&IoStackLocation, sizeof(IO_STACK_LOCATION));
1349 IoStackLocation.MajorFunction = IRP_MJ_PNP;
1350 IoStackLocation.MinorFunction = MinorFunction;
1351 if (Stack)
1352 {
1353 /* Copy the rest */
1354 RtlCopyMemory(&IoStackLocation.Parameters,
1355 &Stack->Parameters,
1356 sizeof(Stack->Parameters));
1357 }
1358
1359 /* Do the PnP call */
1360 IoStatusBlock->Status = IopSynchronousCall(DeviceObject,
1361 &IoStackLocation,
1362 (PVOID)&IoStatusBlock->Information);
1363 return IoStatusBlock->Status;
1364 }
1365
1366 NTSTATUS
1367 IopTraverseDeviceTreeNode(PDEVICETREE_TRAVERSE_CONTEXT Context)
1368 {
1369 PDEVICE_NODE ParentDeviceNode;
1370 PDEVICE_NODE ChildDeviceNode;
1371 NTSTATUS Status;
1372
1373 /* Copy context data so we don't overwrite it in subsequent calls to this function */
1374 ParentDeviceNode = Context->DeviceNode;
1375
1376 /* Call the action routine */
1377 Status = (Context->Action)(ParentDeviceNode, Context->Context);
1378 if (!NT_SUCCESS(Status))
1379 {
1380 return Status;
1381 }
1382
1383 /* Traversal of all children nodes */
1384 for (ChildDeviceNode = ParentDeviceNode->Child;
1385 ChildDeviceNode != NULL;
1386 ChildDeviceNode = ChildDeviceNode->Sibling)
1387 {
1388 /* Pass the current device node to the action routine */
1389 Context->DeviceNode = ChildDeviceNode;
1390
1391 Status = IopTraverseDeviceTreeNode(Context);
1392 if (!NT_SUCCESS(Status))
1393 {
1394 return Status;
1395 }
1396 }
1397
1398 return Status;
1399 }
1400
1401
1402 NTSTATUS
1403 IopTraverseDeviceTree(PDEVICETREE_TRAVERSE_CONTEXT Context)
1404 {
1405 NTSTATUS Status;
1406
1407 DPRINT("Context 0x%p\n", Context);
1408
1409 DPRINT("IopTraverseDeviceTree(DeviceNode 0x%p FirstDeviceNode 0x%p Action %p Context 0x%p)\n",
1410 Context->DeviceNode, Context->FirstDeviceNode, Context->Action, Context->Context);
1411
1412 /* Start from the specified device node */
1413 Context->DeviceNode = Context->FirstDeviceNode;
1414
1415 /* Recursively traverse the device tree */
1416 Status = IopTraverseDeviceTreeNode(Context);
1417 if (Status == STATUS_UNSUCCESSFUL)
1418 {
1419 /* The action routine just wanted to terminate the traversal with status
1420 code STATUS_SUCCESS */
1421 Status = STATUS_SUCCESS;
1422 }
1423
1424 return Status;
1425 }
1426
1427
1428 /*
1429 * IopCreateDeviceKeyPath
1430 *
1431 * Creates a registry key
1432 *
1433 * Parameters
1434 * RegistryPath
1435 * Name of the key to be created.
1436 * Handle
1437 * Handle to the newly created key
1438 *
1439 * Remarks
1440 * This method can create nested trees, so parent of RegistryPath can
1441 * be not existant, and will be created if needed.
1442 */
1443 NTSTATUS
1444 NTAPI
1445 IopCreateDeviceKeyPath(IN PCUNICODE_STRING RegistryPath,
1446 IN ULONG CreateOptions,
1447 OUT PHANDLE Handle)
1448 {
1449 UNICODE_STRING EnumU = RTL_CONSTANT_STRING(ENUM_ROOT);
1450 HANDLE hParent = NULL, hKey;
1451 OBJECT_ATTRIBUTES ObjectAttributes;
1452 UNICODE_STRING KeyName;
1453 PCWSTR Current, Last;
1454 USHORT Length;
1455 NTSTATUS Status;
1456
1457 /* Assume failure */
1458 *Handle = NULL;
1459
1460 /* Create a volatile device tree in 1st stage so we have a clean slate
1461 * for enumeration using the correct HAL (chosen in 1st stage setup) */
1462 if (ExpInTextModeSetup) CreateOptions |= REG_OPTION_VOLATILE;
1463
1464 /* Open root key for device instances */
1465 Status = IopOpenRegistryKeyEx(&hParent, NULL, &EnumU, KEY_CREATE_SUB_KEY);
1466 if (!NT_SUCCESS(Status))
1467 {
1468 DPRINT1("ZwOpenKey('%wZ') failed with status 0x%08lx\n", &EnumU, Status);
1469 return Status;
1470 }
1471
1472 Current = KeyName.Buffer = RegistryPath->Buffer;
1473 Last = &RegistryPath->Buffer[RegistryPath->Length / sizeof(WCHAR)];
1474
1475 /* Go up to the end of the string */
1476 while (Current <= Last)
1477 {
1478 if (Current != Last && *Current != L'\\')
1479 {
1480 /* Not the end of the string and not a separator */
1481 Current++;
1482 continue;
1483 }
1484
1485 /* Prepare relative key name */
1486 Length = (USHORT)((ULONG_PTR)Current - (ULONG_PTR)KeyName.Buffer);
1487 KeyName.MaximumLength = KeyName.Length = Length;
1488 DPRINT("Create '%wZ'\n", &KeyName);
1489
1490 /* Open key */
1491 InitializeObjectAttributes(&ObjectAttributes,
1492 &KeyName,
1493 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
1494 hParent,
1495 NULL);
1496 Status = ZwCreateKey(&hKey,
1497 Current == Last ? KEY_ALL_ACCESS : KEY_CREATE_SUB_KEY,
1498 &ObjectAttributes,
1499 0,
1500 NULL,
1501 CreateOptions,
1502 NULL);
1503
1504 /* Close parent key handle, we don't need it anymore */
1505 if (hParent)
1506 ZwClose(hParent);
1507
1508 /* Key opening/creating failed? */
1509 if (!NT_SUCCESS(Status))
1510 {
1511 DPRINT1("ZwCreateKey('%wZ') failed with status 0x%08lx\n", &KeyName, Status);
1512 return Status;
1513 }
1514
1515 /* Check if it is the end of the string */
1516 if (Current == Last)
1517 {
1518 /* Yes, return success */
1519 *Handle = hKey;
1520 return STATUS_SUCCESS;
1521 }
1522
1523 /* Start with this new parent key */
1524 hParent = hKey;
1525 Current++;
1526 KeyName.Buffer = (PWSTR)Current;
1527 }
1528
1529 return STATUS_UNSUCCESSFUL;
1530 }
1531
1532 NTSTATUS
1533 IopSetDeviceInstanceData(HANDLE InstanceKey,
1534 PDEVICE_NODE DeviceNode)
1535 {
1536 OBJECT_ATTRIBUTES ObjectAttributes;
1537 UNICODE_STRING KeyName;
1538 HANDLE LogConfKey;
1539 ULONG ResCount;
1540 ULONG ResultLength;
1541 NTSTATUS Status;
1542 HANDLE ControlHandle;
1543
1544 DPRINT("IopSetDeviceInstanceData() called\n");
1545
1546 /* Create the 'LogConf' key */
1547 RtlInitUnicodeString(&KeyName, L"LogConf");
1548 InitializeObjectAttributes(&ObjectAttributes,
1549 &KeyName,
1550 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
1551 InstanceKey,
1552 NULL);
1553 Status = ZwCreateKey(&LogConfKey,
1554 KEY_ALL_ACCESS,
1555 &ObjectAttributes,
1556 0,
1557 NULL,
1558 REG_OPTION_VOLATILE,
1559 NULL);
1560 if (NT_SUCCESS(Status))
1561 {
1562 /* Set 'BootConfig' value */
1563 if (DeviceNode->BootResources != NULL)
1564 {
1565 ResCount = DeviceNode->BootResources->Count;
1566 if (ResCount != 0)
1567 {
1568 RtlInitUnicodeString(&KeyName, L"BootConfig");
1569 Status = ZwSetValueKey(LogConfKey,
1570 &KeyName,
1571 0,
1572 REG_RESOURCE_LIST,
1573 DeviceNode->BootResources,
1574 PnpDetermineResourceListSize(DeviceNode->BootResources));
1575 }
1576 }
1577
1578 /* Set 'BasicConfigVector' value */
1579 if (DeviceNode->ResourceRequirements != NULL &&
1580 DeviceNode->ResourceRequirements->ListSize != 0)
1581 {
1582 RtlInitUnicodeString(&KeyName, L"BasicConfigVector");
1583 Status = ZwSetValueKey(LogConfKey,
1584 &KeyName,
1585 0,
1586 REG_RESOURCE_REQUIREMENTS_LIST,
1587 DeviceNode->ResourceRequirements,
1588 DeviceNode->ResourceRequirements->ListSize);
1589 }
1590
1591 ZwClose(LogConfKey);
1592 }
1593
1594 /* Set the 'ConfigFlags' value */
1595 RtlInitUnicodeString(&KeyName, L"ConfigFlags");
1596 Status = ZwQueryValueKey(InstanceKey,
1597 &KeyName,
1598 KeyValueBasicInformation,
1599 NULL,
1600 0,
1601 &ResultLength);
1602 if (Status == STATUS_OBJECT_NAME_NOT_FOUND)
1603 {
1604 /* Write the default value */
1605 ULONG DefaultConfigFlags = 0;
1606 Status = ZwSetValueKey(InstanceKey,
1607 &KeyName,
1608 0,
1609 REG_DWORD,
1610 &DefaultConfigFlags,
1611 sizeof(DefaultConfigFlags));
1612 }
1613
1614 /* Create the 'Control' key */
1615 RtlInitUnicodeString(&KeyName, L"Control");
1616 InitializeObjectAttributes(&ObjectAttributes,
1617 &KeyName,
1618 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
1619 InstanceKey,
1620 NULL);
1621 Status = ZwCreateKey(&ControlHandle, 0, &ObjectAttributes, 0, NULL, REG_OPTION_VOLATILE, NULL);
1622
1623 if (NT_SUCCESS(Status))
1624 ZwClose(ControlHandle);
1625
1626 DPRINT("IopSetDeviceInstanceData() done\n");
1627
1628 return Status;
1629 }
1630
1631 /*
1632 * IopGetParentIdPrefix
1633 *
1634 * Retrieve (or create) a string which identifies a device.
1635 *
1636 * Parameters
1637 * DeviceNode
1638 * Pointer to device node.
1639 * ParentIdPrefix
1640 * Pointer to the string where is returned the parent node identifier
1641 *
1642 * Remarks
1643 * If the return code is STATUS_SUCCESS, the ParentIdPrefix string is
1644 * valid and its Buffer field is NULL-terminated. The caller needs to
1645 * to free the string with RtlFreeUnicodeString when it is no longer
1646 * needed.
1647 */
1648
1649 NTSTATUS
1650 IopGetParentIdPrefix(PDEVICE_NODE DeviceNode,
1651 PUNICODE_STRING ParentIdPrefix)
1652 {
1653 const UNICODE_STRING EnumKeyPath = RTL_CONSTANT_STRING(L"\\Registry\\Machine\\System\\CurrentControlSet\\Enum\\");
1654 ULONG KeyNameBufferLength;
1655 PKEY_VALUE_PARTIAL_INFORMATION ParentIdPrefixInformation = NULL;
1656 UNICODE_STRING KeyName = {0, 0, NULL};
1657 UNICODE_STRING KeyValue;
1658 UNICODE_STRING ValueName;
1659 HANDLE hKey = NULL;
1660 ULONG crc32;
1661 NTSTATUS Status;
1662
1663 /* HACK: As long as some devices have a NULL device
1664 * instance path, the following test is required :(
1665 */
1666 if (DeviceNode->Parent->InstancePath.Length == 0)
1667 {
1668 DPRINT1("Parent of %wZ has NULL Instance path, please report!\n",
1669 &DeviceNode->InstancePath);
1670 return STATUS_UNSUCCESSFUL;
1671 }
1672
1673 /* 1. Try to retrieve ParentIdPrefix from registry */
1674 KeyNameBufferLength = FIELD_OFFSET(KEY_VALUE_PARTIAL_INFORMATION, Data[0]) + MAX_PATH * sizeof(WCHAR);
1675 ParentIdPrefixInformation = ExAllocatePoolWithTag(PagedPool,
1676 KeyNameBufferLength + sizeof(UNICODE_NULL),
1677 TAG_IO);
1678 if (!ParentIdPrefixInformation)
1679 {
1680 return STATUS_INSUFFICIENT_RESOURCES;
1681 }
1682
1683 KeyName.Length = 0;
1684 KeyName.MaximumLength = EnumKeyPath.Length +
1685 DeviceNode->Parent->InstancePath.Length +
1686 sizeof(UNICODE_NULL);
1687 KeyName.Buffer = ExAllocatePoolWithTag(PagedPool,
1688 KeyName.MaximumLength,
1689 TAG_IO);
1690 if (!KeyName.Buffer)
1691 {
1692 Status = STATUS_INSUFFICIENT_RESOURCES;
1693 goto cleanup;
1694 }
1695
1696 RtlCopyUnicodeString(&KeyName, &EnumKeyPath);
1697 RtlAppendUnicodeStringToString(&KeyName, &DeviceNode->Parent->InstancePath);
1698
1699 Status = IopOpenRegistryKeyEx(&hKey, NULL, &KeyName, KEY_QUERY_VALUE | KEY_SET_VALUE);
1700 if (!NT_SUCCESS(Status))
1701 {
1702 goto cleanup;
1703 }
1704 RtlInitUnicodeString(&ValueName, L"ParentIdPrefix");
1705 Status = ZwQueryValueKey(hKey,
1706 &ValueName,
1707 KeyValuePartialInformation,
1708 ParentIdPrefixInformation,
1709 KeyNameBufferLength,
1710 &KeyNameBufferLength);
1711 if (NT_SUCCESS(Status))
1712 {
1713 if (ParentIdPrefixInformation->Type != REG_SZ)
1714 {
1715 Status = STATUS_UNSUCCESSFUL;
1716 }
1717 else
1718 {
1719 KeyValue.MaximumLength = (USHORT)ParentIdPrefixInformation->DataLength;
1720 KeyValue.Length = KeyValue.MaximumLength - sizeof(UNICODE_NULL);
1721 KeyValue.Buffer = (PWSTR)ParentIdPrefixInformation->Data;
1722 ASSERT(KeyValue.Buffer[KeyValue.Length / sizeof(WCHAR)] == UNICODE_NULL);
1723 }
1724 goto cleanup;
1725 }
1726 if (Status != STATUS_OBJECT_NAME_NOT_FOUND)
1727 {
1728 /* FIXME how do we get here and why is ParentIdPrefixInformation valid? */
1729 KeyValue.MaximumLength = (USHORT)ParentIdPrefixInformation->DataLength;
1730 KeyValue.Length = KeyValue.MaximumLength - sizeof(UNICODE_NULL);
1731 KeyValue.Buffer = (PWSTR)ParentIdPrefixInformation->Data;
1732 ASSERT(KeyValue.Buffer[KeyValue.Length / sizeof(WCHAR)] == UNICODE_NULL);
1733 goto cleanup;
1734 }
1735
1736 /* 2. Create the ParentIdPrefix value */
1737 crc32 = RtlComputeCrc32(0,
1738 (PUCHAR)DeviceNode->Parent->InstancePath.Buffer,
1739 DeviceNode->Parent->InstancePath.Length);
1740
1741 RtlStringCbPrintfW((PWSTR)ParentIdPrefixInformation,
1742 KeyNameBufferLength,
1743 L"%lx&%lx",
1744 DeviceNode->Parent->Level,
1745 crc32);
1746 RtlInitUnicodeString(&KeyValue, (PWSTR)ParentIdPrefixInformation);
1747
1748 /* 3. Try to write the ParentIdPrefix to registry */
1749 Status = ZwSetValueKey(hKey,
1750 &ValueName,
1751 0,
1752 REG_SZ,
1753 KeyValue.Buffer,
1754 ((ULONG)wcslen(KeyValue.Buffer) + 1) * sizeof(WCHAR));
1755
1756 cleanup:
1757 if (NT_SUCCESS(Status))
1758 {
1759 /* Duplicate the string to return it */
1760 Status = RtlDuplicateUnicodeString(RTL_DUPLICATE_UNICODE_STRING_NULL_TERMINATE,
1761 &KeyValue,
1762 ParentIdPrefix);
1763 }
1764 ExFreePoolWithTag(ParentIdPrefixInformation, TAG_IO);
1765 RtlFreeUnicodeString(&KeyName);
1766 if (hKey != NULL)
1767 {
1768 ZwClose(hKey);
1769 }
1770 return Status;
1771 }
1772
1773 static
1774 BOOLEAN
1775 IopValidateID(
1776 _In_ PWCHAR Id,
1777 _In_ BUS_QUERY_ID_TYPE QueryType)
1778 {
1779 PWCHAR PtrChar;
1780 PWCHAR StringEnd;
1781 WCHAR Char;
1782 ULONG SeparatorsCount = 0;
1783 PWCHAR PtrPrevChar = NULL;
1784 ULONG MaxSeparators;
1785 BOOLEAN IsMultiSz;
1786
1787 PAGED_CODE();
1788
1789 switch (QueryType)
1790 {
1791 case BusQueryDeviceID:
1792 MaxSeparators = MAX_SEPARATORS_DEVICEID;
1793 IsMultiSz = FALSE;
1794 break;
1795 case BusQueryInstanceID:
1796 MaxSeparators = MAX_SEPARATORS_INSTANCEID;
1797 IsMultiSz = FALSE;
1798 break;
1799
1800 case BusQueryHardwareIDs:
1801 case BusQueryCompatibleIDs:
1802 MaxSeparators = MAX_SEPARATORS_DEVICEID;
1803 IsMultiSz = TRUE;
1804 break;
1805
1806 default:
1807 DPRINT1("IopValidateID: Not handled QueryType - %x\n", QueryType);
1808 return FALSE;
1809 }
1810
1811 StringEnd = Id + MAX_DEVICE_ID_LEN;
1812
1813 for (PtrChar = Id; PtrChar < StringEnd; PtrChar++)
1814 {
1815 Char = *PtrChar;
1816
1817 if (Char == UNICODE_NULL)
1818 {
1819 if (!IsMultiSz || (PtrPrevChar && PtrChar == PtrPrevChar + 1))
1820 {
1821 if (MaxSeparators == SeparatorsCount || IsMultiSz)
1822 {
1823 return TRUE;
1824 }
1825
1826 DPRINT1("IopValidateID: SeparatorsCount - %lu, MaxSeparators - %lu\n",
1827 SeparatorsCount, MaxSeparators);
1828 goto ErrorExit;
1829 }
1830
1831 StringEnd = PtrChar + MAX_DEVICE_ID_LEN + 1;
1832 PtrPrevChar = PtrChar;
1833 SeparatorsCount = 0;
1834 }
1835 else if (Char < ' ' || Char > 0x7F || Char == ',')
1836 {
1837 DPRINT1("IopValidateID: Invalid character - %04X\n", Char);
1838 goto ErrorExit;
1839 }
1840 else if (Char == ' ')
1841 {
1842 *PtrChar = '_';
1843 }
1844 else if (Char == '\\')
1845 {
1846 SeparatorsCount++;
1847
1848 if (SeparatorsCount > MaxSeparators)
1849 {
1850 DPRINT1("IopValidateID: SeparatorsCount - %lu, MaxSeparators - %lu\n",
1851 SeparatorsCount, MaxSeparators);
1852 goto ErrorExit;
1853 }
1854 }
1855 }
1856
1857 DPRINT1("IopValidateID: Not terminated ID\n");
1858
1859 ErrorExit:
1860 // FIXME logging
1861 return FALSE;
1862 }
1863
1864 NTSTATUS
1865 IopQueryHardwareIds(PDEVICE_NODE DeviceNode,
1866 HANDLE InstanceKey)
1867 {
1868 IO_STACK_LOCATION Stack;
1869 IO_STATUS_BLOCK IoStatusBlock;
1870 PWSTR Ptr;
1871 UNICODE_STRING ValueName;
1872 NTSTATUS Status;
1873 ULONG Length, TotalLength;
1874 BOOLEAN IsValidID;
1875
1876 DPRINT("Sending IRP_MN_QUERY_ID.BusQueryHardwareIDs to device stack\n");
1877
1878 RtlZeroMemory(&Stack, sizeof(Stack));
1879 Stack.Parameters.QueryId.IdType = BusQueryHardwareIDs;
1880 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
1881 &IoStatusBlock,
1882 IRP_MN_QUERY_ID,
1883 &Stack);
1884 if (NT_SUCCESS(Status))
1885 {
1886 IsValidID = IopValidateID((PWCHAR)IoStatusBlock.Information, BusQueryHardwareIDs);
1887
1888 if (!IsValidID)
1889 {
1890 DPRINT1("Invalid HardwareIDs. DeviceNode - %p\n", DeviceNode);
1891 }
1892
1893 TotalLength = 0;
1894
1895 Ptr = (PWSTR)IoStatusBlock.Information;
1896 DPRINT("Hardware IDs:\n");
1897 while (*Ptr)
1898 {
1899 DPRINT(" %S\n", Ptr);
1900 Length = (ULONG)wcslen(Ptr) + 1;
1901
1902 Ptr += Length;
1903 TotalLength += Length;
1904 }
1905 DPRINT("TotalLength: %hu\n", TotalLength);
1906 DPRINT("\n");
1907
1908 RtlInitUnicodeString(&ValueName, L"HardwareID");
1909 Status = ZwSetValueKey(InstanceKey,
1910 &ValueName,
1911 0,
1912 REG_MULTI_SZ,
1913 (PVOID)IoStatusBlock.Information,
1914 (TotalLength + 1) * sizeof(WCHAR));
1915 if (!NT_SUCCESS(Status))
1916 {
1917 DPRINT1("ZwSetValueKey() failed (Status %lx)\n", Status);
1918 }
1919 }
1920 else
1921 {
1922 DPRINT("IopInitiatePnpIrp() failed (Status %x)\n", Status);
1923 }
1924
1925 return Status;
1926 }
1927
1928 NTSTATUS
1929 IopQueryCompatibleIds(PDEVICE_NODE DeviceNode,
1930 HANDLE InstanceKey)
1931 {
1932 IO_STACK_LOCATION Stack;
1933 IO_STATUS_BLOCK IoStatusBlock;
1934 PWSTR Ptr;
1935 UNICODE_STRING ValueName;
1936 NTSTATUS Status;
1937 ULONG Length, TotalLength;
1938 BOOLEAN IsValidID;
1939
1940 DPRINT("Sending IRP_MN_QUERY_ID.BusQueryCompatibleIDs to device stack\n");
1941
1942 RtlZeroMemory(&Stack, sizeof(Stack));
1943 Stack.Parameters.QueryId.IdType = BusQueryCompatibleIDs;
1944 Status = IopInitiatePnpIrp(
1945 DeviceNode->PhysicalDeviceObject,
1946 &IoStatusBlock,
1947 IRP_MN_QUERY_ID,
1948 &Stack);
1949 if (NT_SUCCESS(Status) && IoStatusBlock.Information)
1950 {
1951 IsValidID = IopValidateID((PWCHAR)IoStatusBlock.Information, BusQueryCompatibleIDs);
1952
1953 if (!IsValidID)
1954 {
1955 DPRINT1("Invalid CompatibleIDs. DeviceNode - %p\n", DeviceNode);
1956 }
1957
1958 TotalLength = 0;
1959
1960 Ptr = (PWSTR)IoStatusBlock.Information;
1961 DPRINT("Compatible IDs:\n");
1962 while (*Ptr)
1963 {
1964 DPRINT(" %S\n", Ptr);
1965 Length = (ULONG)wcslen(Ptr) + 1;
1966
1967 Ptr += Length;
1968 TotalLength += Length;
1969 }
1970 DPRINT("TotalLength: %hu\n", TotalLength);
1971 DPRINT("\n");
1972
1973 RtlInitUnicodeString(&ValueName, L"CompatibleIDs");
1974 Status = ZwSetValueKey(InstanceKey,
1975 &ValueName,
1976 0,
1977 REG_MULTI_SZ,
1978 (PVOID)IoStatusBlock.Information,
1979 (TotalLength + 1) * sizeof(WCHAR));
1980 if (!NT_SUCCESS(Status))
1981 {
1982 DPRINT1("ZwSetValueKey() failed (Status %lx) or no Compatible ID returned\n", Status);
1983 }
1984 }
1985 else
1986 {
1987 DPRINT("IopInitiatePnpIrp() failed (Status %x)\n", Status);
1988 }
1989
1990 return Status;
1991 }
1992
1993 NTSTATUS
1994 IopCreateDeviceInstancePath(
1995 _In_ PDEVICE_NODE DeviceNode,
1996 _Out_ PUNICODE_STRING InstancePath)
1997 {
1998 IO_STATUS_BLOCK IoStatusBlock;
1999 UNICODE_STRING DeviceId;
2000 UNICODE_STRING InstanceId;
2001 IO_STACK_LOCATION Stack;
2002 NTSTATUS Status;
2003 UNICODE_STRING ParentIdPrefix = { 0, 0, NULL };
2004 DEVICE_CAPABILITIES DeviceCapabilities;
2005 BOOLEAN IsValidID;
2006
2007 DPRINT("Sending IRP_MN_QUERY_ID.BusQueryDeviceID to device stack\n");
2008
2009 Stack.Parameters.QueryId.IdType = BusQueryDeviceID;
2010 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2011 &IoStatusBlock,
2012 IRP_MN_QUERY_ID,
2013 &Stack);
2014 if (!NT_SUCCESS(Status))
2015 {
2016 DPRINT1("IopInitiatePnpIrp(BusQueryDeviceID) failed (Status %x)\n", Status);
2017 return Status;
2018 }
2019
2020 IsValidID = IopValidateID((PWCHAR)IoStatusBlock.Information, BusQueryDeviceID);
2021
2022 if (!IsValidID)
2023 {
2024 DPRINT1("Invalid DeviceID. DeviceNode - %p\n", DeviceNode);
2025 }
2026
2027 /* Save the device id string */
2028 RtlInitUnicodeString(&DeviceId, (PWSTR)IoStatusBlock.Information);
2029
2030 DPRINT("Sending IRP_MN_QUERY_CAPABILITIES to device stack (after enumeration)\n");
2031
2032 Status = IopQueryDeviceCapabilities(DeviceNode, &DeviceCapabilities);
2033 if (!NT_SUCCESS(Status))
2034 {
2035 DPRINT1("IopQueryDeviceCapabilities() failed (Status 0x%08lx)\n", Status);
2036 RtlFreeUnicodeString(&DeviceId);
2037 return Status;
2038 }
2039
2040 /* This bit is only check after enumeration */
2041 if (DeviceCapabilities.HardwareDisabled)
2042 {
2043 /* FIXME: Cleanup device */
2044 DeviceNode->Flags |= DNF_DISABLED;
2045 RtlFreeUnicodeString(&DeviceId);
2046 return STATUS_PLUGPLAY_NO_DEVICE;
2047 }
2048 else
2049 {
2050 DeviceNode->Flags &= ~DNF_DISABLED;
2051 }
2052
2053 if (!DeviceCapabilities.UniqueID)
2054 {
2055 /* Device has not a unique ID. We need to prepend parent bus unique identifier */
2056 DPRINT("Instance ID is not unique\n");
2057 Status = IopGetParentIdPrefix(DeviceNode, &ParentIdPrefix);
2058 if (!NT_SUCCESS(Status))
2059 {
2060 DPRINT1("IopGetParentIdPrefix() failed (Status 0x%08lx)\n", Status);
2061 RtlFreeUnicodeString(&DeviceId);
2062 return Status;
2063 }
2064 }
2065
2066 DPRINT("Sending IRP_MN_QUERY_ID.BusQueryInstanceID to device stack\n");
2067
2068 Stack.Parameters.QueryId.IdType = BusQueryInstanceID;
2069 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2070 &IoStatusBlock,
2071 IRP_MN_QUERY_ID,
2072 &Stack);
2073 if (!NT_SUCCESS(Status))
2074 {
2075 DPRINT("IopInitiatePnpIrp(BusQueryInstanceID) failed (Status %lx)\n", Status);
2076 ASSERT(IoStatusBlock.Information == 0);
2077 }
2078
2079 if (IoStatusBlock.Information)
2080 {
2081 IsValidID = IopValidateID((PWCHAR)IoStatusBlock.Information, BusQueryInstanceID);
2082
2083 if (!IsValidID)
2084 {
2085 DPRINT1("Invalid InstanceID. DeviceNode - %p\n", DeviceNode);
2086 }
2087 }
2088
2089 RtlInitUnicodeString(&InstanceId,
2090 (PWSTR)IoStatusBlock.Information);
2091
2092 InstancePath->Length = 0;
2093 InstancePath->MaximumLength = DeviceId.Length + sizeof(WCHAR) +
2094 ParentIdPrefix.Length +
2095 InstanceId.Length +
2096 sizeof(UNICODE_NULL);
2097 if (ParentIdPrefix.Length && InstanceId.Length)
2098 {
2099 InstancePath->MaximumLength += sizeof(WCHAR);
2100 }
2101
2102 InstancePath->Buffer = ExAllocatePoolWithTag(PagedPool,
2103 InstancePath->MaximumLength,
2104 TAG_IO);
2105 if (!InstancePath->Buffer)
2106 {
2107 RtlFreeUnicodeString(&InstanceId);
2108 RtlFreeUnicodeString(&ParentIdPrefix);
2109 RtlFreeUnicodeString(&DeviceId);
2110 return STATUS_INSUFFICIENT_RESOURCES;
2111 }
2112
2113 /* Start with the device id */
2114 RtlCopyUnicodeString(InstancePath, &DeviceId);
2115 RtlAppendUnicodeToString(InstancePath, L"\\");
2116
2117 /* Add information from parent bus device to InstancePath */
2118 RtlAppendUnicodeStringToString(InstancePath, &ParentIdPrefix);
2119 if (ParentIdPrefix.Length && InstanceId.Length)
2120 {
2121 RtlAppendUnicodeToString(InstancePath, L"&");
2122 }
2123
2124 /* Finally, add the id returned by the driver stack */
2125 RtlAppendUnicodeStringToString(InstancePath, &InstanceId);
2126
2127 /*
2128 * FIXME: Check for valid characters, if there is invalid characters
2129 * then bugcheck
2130 */
2131
2132 RtlFreeUnicodeString(&InstanceId);
2133 RtlFreeUnicodeString(&DeviceId);
2134 RtlFreeUnicodeString(&ParentIdPrefix);
2135
2136 return STATUS_SUCCESS;
2137 }
2138
2139 /*
2140 * IopActionInterrogateDeviceStack
2141 *
2142 * Retrieve information for all (direct) child nodes of a parent node.
2143 *
2144 * Parameters
2145 * DeviceNode
2146 * Pointer to device node.
2147 * Context
2148 * Pointer to parent node to retrieve child node information for.
2149 *
2150 * Remarks
2151 * Any errors that occur are logged instead so that all child services have a chance
2152 * of being interrogated.
2153 */
2154
2155 NTSTATUS
2156 IopActionInterrogateDeviceStack(PDEVICE_NODE DeviceNode,
2157 PVOID Context)
2158 {
2159 IO_STATUS_BLOCK IoStatusBlock;
2160 PWSTR DeviceDescription;
2161 PWSTR LocationInformation;
2162 PDEVICE_NODE ParentDeviceNode;
2163 IO_STACK_LOCATION Stack;
2164 NTSTATUS Status;
2165 ULONG RequiredLength;
2166 LCID LocaleId;
2167 HANDLE InstanceKey = NULL;
2168 UNICODE_STRING ValueName;
2169 UNICODE_STRING InstancePathU;
2170 PDEVICE_OBJECT OldDeviceObject;
2171
2172 DPRINT("IopActionInterrogateDeviceStack(%p, %p)\n", DeviceNode, Context);
2173 DPRINT("PDO 0x%p\n", DeviceNode->PhysicalDeviceObject);
2174
2175 ParentDeviceNode = (PDEVICE_NODE)Context;
2176
2177 /*
2178 * We are called for the parent too, but we don't need to do special
2179 * handling for this node
2180 */
2181 if (DeviceNode == ParentDeviceNode)
2182 {
2183 DPRINT("Success\n");
2184 return STATUS_SUCCESS;
2185 }
2186
2187 /*
2188 * Make sure this device node is a direct child of the parent device node
2189 * that is given as an argument
2190 */
2191 if (DeviceNode->Parent != ParentDeviceNode)
2192 {
2193 DPRINT("Skipping 2+ level child\n");
2194 return STATUS_SUCCESS;
2195 }
2196
2197 /* Skip processing if it was already completed before */
2198 if (DeviceNode->Flags & DNF_PROCESSED)
2199 {
2200 /* Nothing to do */
2201 return STATUS_SUCCESS;
2202 }
2203
2204 /* Get Locale ID */
2205 Status = ZwQueryDefaultLocale(FALSE, &LocaleId);
2206 if (!NT_SUCCESS(Status))
2207 {
2208 DPRINT1("ZwQueryDefaultLocale() failed with status 0x%lx\n", Status);
2209 return Status;
2210 }
2211
2212 /*
2213 * FIXME: For critical errors, cleanup and disable device, but always
2214 * return STATUS_SUCCESS.
2215 */
2216
2217 Status = IopCreateDeviceInstancePath(DeviceNode, &InstancePathU);
2218 if (!NT_SUCCESS(Status))
2219 {
2220 if (Status != STATUS_PLUGPLAY_NO_DEVICE)
2221 {
2222 DPRINT1("IopCreateDeviceInstancePath() failed with status 0x%lx\n", Status);
2223 }
2224
2225 /* We have to return success otherwise we abort the traverse operation */
2226 return STATUS_SUCCESS;
2227 }
2228
2229 /* Verify that this is not a duplicate */
2230 OldDeviceObject = IopGetDeviceObjectFromDeviceInstance(&InstancePathU);
2231 if (OldDeviceObject != NULL)
2232 {
2233 PDEVICE_NODE OldDeviceNode = IopGetDeviceNode(OldDeviceObject);
2234
2235 DPRINT1("Duplicate device instance '%wZ'\n", &InstancePathU);
2236 DPRINT1("Current instance parent: '%wZ'\n", &DeviceNode->Parent->InstancePath);
2237 DPRINT1("Old instance parent: '%wZ'\n", &OldDeviceNode->Parent->InstancePath);
2238
2239 KeBugCheckEx(PNP_DETECTED_FATAL_ERROR,
2240 0x01,
2241 (ULONG_PTR)DeviceNode->PhysicalDeviceObject,
2242 (ULONG_PTR)OldDeviceObject,
2243 0);
2244 }
2245
2246 DeviceNode->InstancePath = InstancePathU;
2247
2248 DPRINT("InstancePath is %S\n", DeviceNode->InstancePath.Buffer);
2249
2250 /*
2251 * Create registry key for the instance id, if it doesn't exist yet
2252 */
2253 Status = IopCreateDeviceKeyPath(&DeviceNode->InstancePath, REG_OPTION_NON_VOLATILE, &InstanceKey);
2254 if (!NT_SUCCESS(Status))
2255 {
2256 DPRINT1("Failed to create the instance key! (Status %lx)\n", Status);
2257
2258 /* We have to return success otherwise we abort the traverse operation */
2259 return STATUS_SUCCESS;
2260 }
2261
2262 IopQueryHardwareIds(DeviceNode, InstanceKey);
2263
2264 IopQueryCompatibleIds(DeviceNode, InstanceKey);
2265
2266 DPRINT("Sending IRP_MN_QUERY_DEVICE_TEXT.DeviceTextDescription to device stack\n");
2267
2268 Stack.Parameters.QueryDeviceText.DeviceTextType = DeviceTextDescription;
2269 Stack.Parameters.QueryDeviceText.LocaleId = LocaleId;
2270 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2271 &IoStatusBlock,
2272 IRP_MN_QUERY_DEVICE_TEXT,
2273 &Stack);
2274 DeviceDescription = NT_SUCCESS(Status) ? (PWSTR)IoStatusBlock.Information
2275 : NULL;
2276 /* This key is mandatory, so even if the Irp fails, we still write it */
2277 RtlInitUnicodeString(&ValueName, L"DeviceDesc");
2278 if (ZwQueryValueKey(InstanceKey, &ValueName, KeyValueBasicInformation, NULL, 0, &RequiredLength) == STATUS_OBJECT_NAME_NOT_FOUND)
2279 {
2280 if (DeviceDescription &&
2281 *DeviceDescription != UNICODE_NULL)
2282 {
2283 /* This key is overriden when a driver is installed. Don't write the
2284 * new description if another one already exists */
2285 Status = ZwSetValueKey(InstanceKey,
2286 &ValueName,
2287 0,
2288 REG_SZ,
2289 DeviceDescription,
2290 ((ULONG)wcslen(DeviceDescription) + 1) * sizeof(WCHAR));
2291 }
2292 else
2293 {
2294 UNICODE_STRING DeviceDesc = RTL_CONSTANT_STRING(L"Unknown device");
2295 DPRINT("Driver didn't return DeviceDesc (Status 0x%08lx), so place unknown device there\n", Status);
2296
2297 Status = ZwSetValueKey(InstanceKey,
2298 &ValueName,
2299 0,
2300 REG_SZ,
2301 DeviceDesc.Buffer,
2302 DeviceDesc.MaximumLength);
2303 if (!NT_SUCCESS(Status))
2304 {
2305 DPRINT1("ZwSetValueKey() failed (Status 0x%lx)\n", Status);
2306 }
2307
2308 }
2309 }
2310
2311 if (DeviceDescription)
2312 {
2313 ExFreePoolWithTag(DeviceDescription, 0);
2314 }
2315
2316 DPRINT("Sending IRP_MN_QUERY_DEVICE_TEXT.DeviceTextLocation to device stack\n");
2317
2318 Stack.Parameters.QueryDeviceText.DeviceTextType = DeviceTextLocationInformation;
2319 Stack.Parameters.QueryDeviceText.LocaleId = LocaleId;
2320 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2321 &IoStatusBlock,
2322 IRP_MN_QUERY_DEVICE_TEXT,
2323 &Stack);
2324 if (NT_SUCCESS(Status) && IoStatusBlock.Information)
2325 {
2326 LocationInformation = (PWSTR)IoStatusBlock.Information;
2327 DPRINT("LocationInformation: %S\n", LocationInformation);
2328 RtlInitUnicodeString(&ValueName, L"LocationInformation");
2329 Status = ZwSetValueKey(InstanceKey,
2330 &ValueName,
2331 0,
2332 REG_SZ,
2333 LocationInformation,
2334 ((ULONG)wcslen(LocationInformation) + 1) * sizeof(WCHAR));
2335 if (!NT_SUCCESS(Status))
2336 {
2337 DPRINT1("ZwSetValueKey() failed (Status %lx)\n", Status);
2338 }
2339
2340 ExFreePoolWithTag(LocationInformation, 0);
2341 }
2342 else
2343 {
2344 DPRINT("IopInitiatePnpIrp() failed (Status %x) or IoStatusBlock.Information=NULL\n", Status);
2345 }
2346
2347 DPRINT("Sending IRP_MN_QUERY_BUS_INFORMATION to device stack\n");
2348
2349 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2350 &IoStatusBlock,
2351 IRP_MN_QUERY_BUS_INFORMATION,
2352 NULL);
2353 if (NT_SUCCESS(Status) && IoStatusBlock.Information)
2354 {
2355 PPNP_BUS_INFORMATION BusInformation = (PPNP_BUS_INFORMATION)IoStatusBlock.Information;
2356
2357 DeviceNode->ChildBusNumber = BusInformation->BusNumber;
2358 DeviceNode->ChildInterfaceType = BusInformation->LegacyBusType;
2359 DeviceNode->ChildBusTypeIndex = IopGetBusTypeGuidIndex(&BusInformation->BusTypeGuid);
2360 ExFreePoolWithTag(BusInformation, 0);
2361 }
2362 else
2363 {
2364 DPRINT("IopInitiatePnpIrp() failed (Status %x) or IoStatusBlock.Information=NULL\n", Status);
2365
2366 DeviceNode->ChildBusNumber = 0xFFFFFFF0;
2367 DeviceNode->ChildInterfaceType = InterfaceTypeUndefined;
2368 DeviceNode->ChildBusTypeIndex = -1;
2369 }
2370
2371 DPRINT("Sending IRP_MN_QUERY_RESOURCES to device stack\n");
2372
2373 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2374 &IoStatusBlock,
2375 IRP_MN_QUERY_RESOURCES,
2376 NULL);
2377 if (NT_SUCCESS(Status) && IoStatusBlock.Information)
2378 {
2379 DeviceNode->BootResources = (PCM_RESOURCE_LIST)IoStatusBlock.Information;
2380 IopDeviceNodeSetFlag(DeviceNode, DNF_HAS_BOOT_CONFIG);
2381 }
2382 else
2383 {
2384 DPRINT("IopInitiatePnpIrp() failed (Status %x) or IoStatusBlock.Information=NULL\n", Status);
2385 DeviceNode->BootResources = NULL;
2386 }
2387
2388 DPRINT("Sending IRP_MN_QUERY_RESOURCE_REQUIREMENTS to device stack\n");
2389
2390 Status = IopInitiatePnpIrp(DeviceNode->PhysicalDeviceObject,
2391 &IoStatusBlock,
2392 IRP_MN_QUERY_RESOURCE_REQUIREMENTS,
2393 NULL);
2394 if (NT_SUCCESS(Status))
2395 {
2396 DeviceNode->ResourceRequirements = (PIO_RESOURCE_REQUIREMENTS_LIST)IoStatusBlock.Information;
2397 }
2398 else
2399 {
2400 DPRINT("IopInitiatePnpIrp() failed (Status %08lx)\n", Status);
2401 DeviceNode->ResourceRequirements = NULL;
2402 }
2403
2404 if (InstanceKey != NULL)
2405 {
2406 IopSetDeviceInstanceData(InstanceKey, DeviceNode);
2407 }
2408
2409 ZwClose(InstanceKey);
2410
2411 IopDeviceNodeSetFlag(DeviceNode, DNF_PROCESSED);
2412
2413 if (!IopDeviceNodeHasFlag(DeviceNode, DNF_LEGACY_DRIVER))
2414 {
2415 /* Report the device to the user-mode pnp manager */
2416 IopQueueTargetDeviceEvent(&GUID_DEVICE_ENUMERATED,
2417 &DeviceNode->InstancePath);
2418 }
2419
2420 return STATUS_SUCCESS;
2421 }
2422
2423 static
2424 VOID
2425 IopHandleDeviceRemoval(
2426 IN PDEVICE_NODE DeviceNode,
2427 IN PDEVICE_RELATIONS DeviceRelations)
2428 {
2429 PDEVICE_NODE Child = DeviceNode->Child, NextChild;
2430 ULONG i;
2431 BOOLEAN Found;
2432
2433 if (DeviceNode == IopRootDeviceNode)
2434 return;
2435
2436 while (Child != NULL)
2437 {
2438 NextChild = Child->Sibling;
2439 Found = FALSE;
2440
2441 for (i = 0; DeviceRelations && i < DeviceRelations->Count; i++)
2442 {
2443 if (IopGetDeviceNode(DeviceRelations->Objects[i]) == Child)
2444 {
2445 Found = TRUE;
2446 break;
2447 }
2448 }
2449
2450 if (!Found && !(Child->Flags & DNF_WILL_BE_REMOVED))
2451 {
2452 /* Send removal IRPs to all of its children */
2453 IopPrepareDeviceForRemoval(Child->PhysicalDeviceObject, TRUE);
2454
2455 /* Send the surprise removal IRP */
2456 IopSendSurpriseRemoval(Child->PhysicalDeviceObject);
2457
2458 /* Tell the user-mode PnP manager that a device was removed */
2459 IopQueueTargetDeviceEvent(&GUID_DEVICE_SURPRISE_REMOVAL,
2460 &Child->InstancePath);
2461
2462 /* Send the remove device IRP */
2463 IopSendRemoveDevice(Child->PhysicalDeviceObject);
2464 }
2465
2466 Child = NextChild;
2467 }
2468 }
2469
2470 NTSTATUS
2471 IopEnumerateDevice(
2472 IN PDEVICE_OBJECT DeviceObject)
2473 {
2474 PDEVICE_NODE DeviceNode = IopGetDeviceNode(DeviceObject);
2475 DEVICETREE_TRAVERSE_CONTEXT Context;
2476 PDEVICE_RELATIONS DeviceRelations;
2477 PDEVICE_OBJECT ChildDeviceObject;
2478 IO_STATUS_BLOCK IoStatusBlock;
2479 PDEVICE_NODE ChildDeviceNode;
2480 IO_STACK_LOCATION Stack;
2481 NTSTATUS Status;
2482 ULONG i;
2483
2484 DPRINT("DeviceObject 0x%p\n", DeviceObject);
2485
2486 if (DeviceNode->Flags & DNF_NEED_ENUMERATION_ONLY)
2487 {
2488 DeviceNode->Flags &= ~DNF_NEED_ENUMERATION_ONLY;
2489
2490 DPRINT("Sending GUID_DEVICE_ARRIVAL\n");
2491 IopQueueTargetDeviceEvent(&GUID_DEVICE_ARRIVAL,
2492 &DeviceNode->InstancePath);
2493 }
2494
2495 DPRINT("Sending IRP_MN_QUERY_DEVICE_RELATIONS to device stack\n");
2496
2497 Stack.Parameters.QueryDeviceRelations.Type = BusRelations;
2498
2499 Status = IopInitiatePnpIrp(
2500 DeviceObject,
2501 &IoStatusBlock,
2502 IRP_MN_QUERY_DEVICE_RELATIONS,
2503 &Stack);
2504 if (!NT_SUCCESS(Status) || Status == STATUS_PENDING)
2505 {
2506 DPRINT("IopInitiatePnpIrp() failed with status 0x%08lx\n", Status);
2507 return Status;
2508 }
2509
2510 DeviceRelations = (PDEVICE_RELATIONS)IoStatusBlock.Information;
2511
2512 /*
2513 * Send removal IRPs for devices that have disappeared
2514 * NOTE: This code handles the case where no relations are specified
2515 */
2516 IopHandleDeviceRemoval(DeviceNode, DeviceRelations);
2517
2518 /* Now we bail if nothing was returned */
2519 if (!DeviceRelations)
2520 {
2521 /* We're all done */
2522 DPRINT("No PDOs\n");
2523 return STATUS_SUCCESS;
2524 }
2525
2526 DPRINT("Got %u PDOs\n", DeviceRelations->Count);
2527
2528 /*
2529 * Create device nodes for all discovered devices
2530 */
2531 for (i = 0; i < DeviceRelations->Count; i++)
2532 {
2533 ChildDeviceObject = DeviceRelations->Objects[i];
2534 ASSERT((ChildDeviceObject->Flags & DO_DEVICE_INITIALIZING) == 0);
2535
2536 ChildDeviceNode = IopGetDeviceNode(ChildDeviceObject);
2537 if (!ChildDeviceNode)
2538 {
2539 /* One doesn't exist, create it */
2540 Status = IopCreateDeviceNode(
2541 DeviceNode,
2542 ChildDeviceObject,
2543 NULL,
2544 &ChildDeviceNode);
2545 if (NT_SUCCESS(Status))
2546 {
2547 /* Mark the node as enumerated */
2548 ChildDeviceNode->Flags |= DNF_ENUMERATED;
2549
2550 /* Mark the DO as bus enumerated */
2551 ChildDeviceObject->Flags |= DO_BUS_ENUMERATED_DEVICE;
2552 }
2553 else
2554 {
2555 /* Ignore this DO */
2556 DPRINT1("IopCreateDeviceNode() failed with status 0x%08x. Skipping PDO %u\n", Status, i);
2557 ObDereferenceObject(ChildDeviceObject);
2558 }
2559 }
2560 else
2561 {
2562 /* Mark it as enumerated */
2563 ChildDeviceNode->Flags |= DNF_ENUMERATED;
2564 ObDereferenceObject(ChildDeviceObject);
2565 }
2566 }
2567 ExFreePool(DeviceRelations);
2568
2569 /*
2570 * Retrieve information about all discovered children from the bus driver
2571 */
2572 IopInitDeviceTreeTraverseContext(
2573 &Context,
2574 DeviceNode,
2575 IopActionInterrogateDeviceStack,
2576 DeviceNode);
2577
2578 Status = IopTraverseDeviceTree(&Context);
2579 if (!NT_SUCCESS(Status))
2580 {
2581 DPRINT("IopTraverseDeviceTree() failed with status 0x%08lx\n", Status);
2582 return Status;
2583 }
2584
2585 /*
2586 * Retrieve configuration from the registry for discovered children
2587 */
2588 IopInitDeviceTreeTraverseContext(
2589 &Context,
2590 DeviceNode,
2591 IopActionConfigureChildServices,
2592 DeviceNode);
2593
2594 Status = IopTraverseDeviceTree(&Context);
2595 if (!NT_SUCCESS(Status))
2596 {
2597 DPRINT("IopTraverseDeviceTree() failed with status 0x%08lx\n", Status);
2598 return Status;
2599 }
2600
2601 /*
2602 * Initialize services for discovered children.
2603 */
2604 Status = IopInitializePnpServices(DeviceNode);
2605 if (!NT_SUCCESS(Status))
2606 {
2607 DPRINT("IopInitializePnpServices() failed with status 0x%08lx\n", Status);
2608 return Status;
2609 }
2610
2611 DPRINT("IopEnumerateDevice() finished\n");
2612 return STATUS_SUCCESS;
2613 }
2614
2615
2616 /*
2617 * IopActionConfigureChildServices
2618 *
2619 * Retrieve configuration for all (direct) child nodes of a parent node.
2620 *
2621 * Parameters
2622 * DeviceNode
2623 * Pointer to device node.
2624 * Context
2625 * Pointer to parent node to retrieve child node configuration for.
2626 *
2627 * Remarks
2628 * Any errors that occur are logged instead so that all child services have a chance of beeing
2629 * configured.
2630 */
2631
2632 NTSTATUS
2633 IopActionConfigureChildServices(PDEVICE_NODE DeviceNode,
2634 PVOID Context)
2635 {
2636 RTL_QUERY_REGISTRY_TABLE QueryTable[3];
2637 PDEVICE_NODE ParentDeviceNode;
2638 PUNICODE_STRING Service;
2639 UNICODE_STRING ClassGUID;
2640 NTSTATUS Status;
2641 DEVICE_CAPABILITIES DeviceCaps;
2642
2643 DPRINT("IopActionConfigureChildServices(%p, %p)\n", DeviceNode, Context);
2644
2645 ParentDeviceNode = (PDEVICE_NODE)Context;
2646
2647 /*
2648 * We are called for the parent too, but we don't need to do special
2649 * handling for this node
2650 */
2651 if (DeviceNode == ParentDeviceNode)
2652 {
2653 DPRINT("Success\n");
2654 return STATUS_SUCCESS;
2655 }
2656
2657 /*
2658 * Make sure this device node is a direct child of the parent device node
2659 * that is given as an argument
2660 */
2661
2662 if (DeviceNode->Parent != ParentDeviceNode)
2663 {
2664 DPRINT("Skipping 2+ level child\n");
2665 return STATUS_SUCCESS;
2666 }
2667
2668 if (!(DeviceNode->Flags & DNF_PROCESSED))
2669 {
2670 DPRINT1("Child not ready to be configured\n");
2671 return STATUS_SUCCESS;
2672 }
2673
2674 if (!(DeviceNode->Flags & (DNF_DISABLED | DNF_STARTED | DNF_ADDED)))
2675 {
2676 WCHAR RegKeyBuffer[MAX_PATH];
2677 UNICODE_STRING RegKey;
2678
2679 /* Install the service for this if it's in the CDDB */
2680 IopInstallCriticalDevice(DeviceNode);
2681
2682 RegKey.Length = 0;
2683 RegKey.MaximumLength = sizeof(RegKeyBuffer);
2684 RegKey.Buffer = RegKeyBuffer;
2685
2686 /*
2687 * Retrieve configuration from Enum key
2688 */
2689
2690 Service = &DeviceNode->ServiceName;
2691
2692 RtlZeroMemory(QueryTable, sizeof(QueryTable));
2693 RtlInitUnicodeString(Service, NULL);
2694 RtlInitUnicodeString(&ClassGUID, NULL);
2695
2696 QueryTable[0].Name = L"Service";
2697 QueryTable[0].Flags = RTL_QUERY_REGISTRY_DIRECT;
2698 QueryTable[0].EntryContext = Service;
2699
2700 QueryTable[1].Name = L"ClassGUID";
2701 QueryTable[1].Flags = RTL_QUERY_REGISTRY_DIRECT;
2702 QueryTable[1].EntryContext = &ClassGUID;
2703 QueryTable[1].DefaultType = REG_SZ;
2704 QueryTable[1].DefaultData = L"";
2705 QueryTable[1].DefaultLength = 0;
2706
2707 RtlAppendUnicodeToString(&RegKey, L"\\Registry\\Machine\\System\\CurrentControlSet\\Enum\\");
2708 RtlAppendUnicodeStringToString(&RegKey, &DeviceNode->InstancePath);
2709
2710 Status = RtlQueryRegistryValues(RTL_REGISTRY_ABSOLUTE,
2711 RegKey.Buffer, QueryTable, NULL, NULL);
2712
2713 if (!NT_SUCCESS(Status))
2714 {
2715 /* FIXME: Log the error */
2716 DPRINT("Could not retrieve configuration for device %wZ (Status 0x%08x)\n",
2717 &DeviceNode->InstancePath, Status);
2718 IopDeviceNodeSetFlag(DeviceNode, DNF_DISABLED);
2719 return STATUS_SUCCESS;
2720 }
2721
2722 if (Service->Buffer == NULL)
2723 {
2724 if (NT_SUCCESS(IopQueryDeviceCapabilities(DeviceNode, &DeviceCaps)) &&
2725 DeviceCaps.RawDeviceOK)
2726 {
2727 DPRINT("%wZ is using parent bus driver (%wZ)\n", &DeviceNode->InstancePath, &ParentDeviceNode->ServiceName);
2728 RtlInitEmptyUnicodeString(&DeviceNode->ServiceName, NULL, 0);
2729 }
2730 else if (ClassGUID.Length != 0)
2731 {
2732 /* Device has a ClassGUID value, but no Service value.
2733 * Suppose it is using the NULL driver, so state the
2734 * device is started */
2735 DPRINT("%wZ is using NULL driver\n", &DeviceNode->InstancePath);
2736 IopDeviceNodeSetFlag(DeviceNode, DNF_STARTED);
2737 }
2738 else
2739 {
2740 DeviceNode->Problem = CM_PROB_FAILED_INSTALL;
2741 IopDeviceNodeSetFlag(DeviceNode, DNF_DISABLED);
2742 }
2743 return STATUS_SUCCESS;
2744 }
2745
2746 DPRINT("Got Service %S\n", Service->Buffer);
2747 }
2748
2749 return STATUS_SUCCESS;
2750 }
2751
2752 /*
2753 * IopActionInitChildServices
2754 *
2755 * Initialize the service for all (direct) child nodes of a parent node
2756 *
2757 * Parameters
2758 * DeviceNode
2759 * Pointer to device node.
2760 * Context
2761 * Pointer to parent node to initialize child node services for.
2762 *
2763 * Remarks
2764 * If the driver image for a service is not loaded and initialized
2765 * it is done here too. Any errors that occur are logged instead so
2766 * that all child services have a chance of being initialized.
2767 */
2768
2769 NTSTATUS
2770 IopActionInitChildServices(PDEVICE_NODE DeviceNode,
2771 PVOID Context)
2772 {
2773 PDEVICE_NODE ParentDeviceNode;
2774 NTSTATUS Status;
2775 BOOLEAN BootDrivers = !PnpSystemInit;
2776
2777 DPRINT("IopActionInitChildServices(%p, %p)\n", DeviceNode, Context);
2778
2779 ParentDeviceNode = Context;
2780
2781 /*
2782 * We are called for the parent too, but we don't need to do special
2783 * handling for this node
2784 */
2785 if (DeviceNode == ParentDeviceNode)
2786 {
2787 DPRINT("Success\n");
2788 return STATUS_SUCCESS;
2789 }
2790
2791 /*
2792 * We don't want to check for a direct child because
2793 * this function is called during boot to reinitialize
2794 * devices with drivers that couldn't load yet due to
2795 * stage 0 limitations (ie can't load from disk yet).
2796 */
2797
2798 if (!(DeviceNode->Flags & DNF_PROCESSED))
2799 {
2800 DPRINT1("Child not ready to be added\n");
2801 return STATUS_SUCCESS;
2802 }
2803
2804 if (IopDeviceNodeHasFlag(DeviceNode, DNF_STARTED) ||
2805 IopDeviceNodeHasFlag(DeviceNode, DNF_ADDED) ||
2806 IopDeviceNodeHasFlag(DeviceNode, DNF_DISABLED))
2807 return STATUS_SUCCESS;
2808
2809 if (DeviceNode->ServiceName.Buffer == NULL)
2810 {
2811 /* We don't need to worry about loading the driver because we're
2812 * being driven in raw mode so our parent must be loaded to get here */
2813 Status = IopInitializeDevice(DeviceNode, NULL);
2814 if (NT_SUCCESS(Status))
2815 {
2816 Status = IopStartDevice(DeviceNode);
2817 if (!NT_SUCCESS(Status))
2818 {
2819 DPRINT1("IopStartDevice(%wZ) failed with status 0x%08x\n",
2820 &DeviceNode->InstancePath, Status);
2821 }
2822 }
2823 }
2824 else
2825 {
2826 PLDR_DATA_TABLE_ENTRY ModuleObject;
2827 PDRIVER_OBJECT DriverObject;
2828
2829 KeEnterCriticalRegion();
2830 ExAcquireResourceExclusiveLite(&IopDriverLoadResource, TRUE);
2831 /* Get existing DriverObject pointer (in case the driver has
2832 already been loaded and initialized) */
2833 Status = IopGetDriverObject(
2834 &DriverObject,
2835 &DeviceNode->ServiceName,
2836 FALSE);
2837
2838 if (!NT_SUCCESS(Status))
2839 {
2840 /* Driver is not initialized, try to load it */
2841 Status = IopLoadServiceModule(&DeviceNode->ServiceName, &ModuleObject);
2842
2843 if (NT_SUCCESS(Status) || Status == STATUS_IMAGE_ALREADY_LOADED)
2844 {
2845 /* Initialize the driver */
2846 Status = IopInitializeDriverModule(DeviceNode, ModuleObject,
2847 &DeviceNode->ServiceName, FALSE, &DriverObject);
2848 if (!NT_SUCCESS(Status)) DeviceNode->Problem = CM_PROB_FAILED_DRIVER_ENTRY;
2849 }
2850 else if (Status == STATUS_DRIVER_UNABLE_TO_LOAD)
2851 {
2852 DPRINT1("Service '%wZ' is disabled\n", &DeviceNode->ServiceName);
2853 DeviceNode->Problem = CM_PROB_DISABLED_SERVICE;
2854 }
2855 else
2856 {
2857 DPRINT("IopLoadServiceModule(%wZ) failed with status 0x%08x\n",
2858 &DeviceNode->ServiceName, Status);
2859 if (!BootDrivers) DeviceNode->Problem = CM_PROB_DRIVER_FAILED_LOAD;
2860 }
2861 }
2862 ExReleaseResourceLite(&IopDriverLoadResource);
2863 KeLeaveCriticalRegion();
2864
2865 /* Driver is loaded and initialized at this point */
2866 if (NT_SUCCESS(Status))
2867 {
2868 /* Initialize the device, including all filters */
2869 Status = PipCallDriverAddDevice(DeviceNode, FALSE, DriverObject);
2870
2871 /* Remove the extra reference */
2872 ObDereferenceObject(DriverObject);
2873 }
2874 else
2875 {
2876 /*
2877 * Don't disable when trying to load only boot drivers
2878 */
2879 if (!BootDrivers)
2880 {
2881 IopDeviceNodeSetFlag(DeviceNode, DNF_DISABLED);
2882 }
2883 }
2884 }
2885
2886 return STATUS_SUCCESS;
2887 }
2888
2889 /*
2890 * IopInitializePnpServices
2891 *
2892 * Initialize services for discovered children
2893 *
2894 * Parameters
2895 * DeviceNode
2896 * Top device node to start initializing services.
2897 *
2898 * Return Value
2899 * Status
2900 */
2901 NTSTATUS
2902 IopInitializePnpServices(IN PDEVICE_NODE DeviceNode)
2903 {
2904 DEVICETREE_TRAVERSE_CONTEXT Context;
2905
2906 DPRINT("IopInitializePnpServices(%p)\n", DeviceNode);
2907
2908 IopInitDeviceTreeTraverseContext(
2909 &Context,
2910 DeviceNode,
2911 IopActionInitChildServices,
2912 DeviceNode);
2913
2914 return IopTraverseDeviceTree(&Context);
2915 }
2916
2917 static NTSTATUS INIT_FUNCTION
2918 IopEnumerateDetectedDevices(
2919 IN HANDLE hBaseKey,
2920 IN PUNICODE_STRING RelativePath OPTIONAL,
2921 IN HANDLE hRootKey,
2922 IN BOOLEAN EnumerateSubKeys,
2923 IN PCM_FULL_RESOURCE_DESCRIPTOR ParentBootResources,
2924 IN ULONG ParentBootResourcesLength)
2925 {
2926 UNICODE_STRING IdentifierU = RTL_CONSTANT_STRING(L"Identifier");
2927 UNICODE_STRING HardwareIDU = RTL_CONSTANT_STRING(L"HardwareID");
2928 UNICODE_STRING ConfigurationDataU = RTL_CONSTANT_STRING(L"Configuration Data");
2929 UNICODE_STRING BootConfigU = RTL_CONSTANT_STRING(L"BootConfig");
2930 UNICODE_STRING LogConfU = RTL_CONSTANT_STRING(L"LogConf");
2931 OBJECT_ATTRIBUTES ObjectAttributes;
2932 HANDLE hDevicesKey = NULL;
2933 HANDLE hDeviceKey = NULL;
2934 HANDLE hLevel1Key, hLevel2Key = NULL, hLogConf;
2935 UNICODE_STRING Level2NameU;
2936 WCHAR Level2Name[5];
2937 ULONG IndexDevice = 0;
2938 ULONG IndexSubKey;
2939 PKEY_BASIC_INFORMATION pDeviceInformation = NULL;
2940 ULONG DeviceInfoLength = sizeof(KEY_BASIC_INFORMATION) + 50 * sizeof(WCHAR);
2941 PKEY_VALUE_PARTIAL_INFORMATION pValueInformation = NULL;
2942 ULONG ValueInfoLength = sizeof(KEY_VALUE_PARTIAL_INFORMATION) + 50 * sizeof(WCHAR);
2943 UNICODE_STRING DeviceName, ValueName;
2944 ULONG RequiredSize;
2945 PCM_FULL_RESOURCE_DESCRIPTOR BootResources = NULL;
2946 ULONG BootResourcesLength;
2947 NTSTATUS Status;
2948
2949 const UNICODE_STRING IdentifierSerial = RTL_CONSTANT_STRING(L"SerialController");
2950 UNICODE_STRING HardwareIdSerial = RTL_CONSTANT_STRING(L"*PNP0501\0");
2951 static ULONG DeviceIndexSerial = 0;
2952 const UNICODE_STRING IdentifierKeyboard = RTL_CONSTANT_STRING(L"KeyboardController");
2953 UNICODE_STRING HardwareIdKeyboard = RTL_CONSTANT_STRING(L"*PNP0303\0");
2954 static ULONG DeviceIndexKeyboard = 0;
2955 const UNICODE_STRING IdentifierMouse = RTL_CONSTANT_STRING(L"PointerController");
2956 UNICODE_STRING HardwareIdMouse = RTL_CONSTANT_STRING(L"*PNP0F13\0");
2957 static ULONG DeviceIndexMouse = 0;
2958 const UNICODE_STRING IdentifierParallel = RTL_CONSTANT_STRING(L"ParallelController");
2959 UNICODE_STRING HardwareIdParallel = RTL_CONSTANT_STRING(L"*PNP0400\0");
2960 static ULONG DeviceIndexParallel = 0;
2961 const UNICODE_STRING IdentifierFloppy = RTL_CONSTANT_STRING(L"FloppyDiskPeripheral");
2962 UNICODE_STRING HardwareIdFloppy = RTL_CONSTANT_STRING(L"*PNP0700\0");
2963 static ULONG DeviceIndexFloppy = 0;
2964 UNICODE_STRING HardwareIdKey;
2965 PUNICODE_STRING pHardwareId;
2966 ULONG DeviceIndex = 0;
2967 PUCHAR CmResourceList;
2968 ULONG ListCount;
2969
2970 if (RelativePath)
2971 {
2972 Status = IopOpenRegistryKeyEx(&hDevicesKey, hBaseKey, RelativePath, KEY_ENUMERATE_SUB_KEYS);
2973 if (!NT_SUCCESS(Status))
2974 {
2975 DPRINT("ZwOpenKey() failed with status 0x%08lx\n", Status);
2976 goto cleanup;
2977 }
2978 }
2979 else
2980 hDevicesKey = hBaseKey;
2981
2982 pDeviceInformation = ExAllocatePool(PagedPool, DeviceInfoLength);
2983 if (!pDeviceInformation)
2984 {
2985 DPRINT("ExAllocatePool() failed\n");
2986 Status = STATUS_NO_MEMORY;
2987 goto cleanup;
2988 }
2989
2990 pValueInformation = ExAllocatePool(PagedPool, ValueInfoLength);
2991 if (!pValueInformation)
2992 {
2993 DPRINT("ExAllocatePool() failed\n");
2994 Status = STATUS_NO_MEMORY;
2995 goto cleanup;
2996 }
2997
2998 while (TRUE)
2999 {
3000 Status = ZwEnumerateKey(hDevicesKey, IndexDevice, KeyBasicInformation, pDeviceInformation, DeviceInfoLength, &RequiredSize);
3001 if (Status == STATUS_NO_MORE_ENTRIES)
3002 break;
3003 else if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
3004 {
3005 ExFreePool(pDeviceInformation);
3006 DeviceInfoLength = RequiredSize;
3007 pDeviceInformation = ExAllocatePool(PagedPool, DeviceInfoLength);
3008 if (!pDeviceInformation)
3009 {
3010 DPRINT("ExAllocatePool() failed\n");
3011 Status = STATUS_NO_MEMORY;
3012 goto cleanup;
3013 }
3014 Status = ZwEnumerateKey(hDevicesKey, IndexDevice, KeyBasicInformation, pDeviceInformation, DeviceInfoLength, &RequiredSize);
3015 }
3016 if (!NT_SUCCESS(Status))
3017 {
3018 DPRINT("ZwEnumerateKey() failed with status 0x%08lx\n", Status);
3019 goto cleanup;
3020 }
3021 IndexDevice++;
3022
3023 /* Open device key */
3024 DeviceName.Length = DeviceName.MaximumLength = (USHORT)pDeviceInformation->NameLength;
3025 DeviceName.Buffer = pDeviceInformation->Name;
3026
3027 Status = IopOpenRegistryKeyEx(&hDeviceKey, hDevicesKey, &DeviceName,
3028 KEY_QUERY_VALUE + (EnumerateSubKeys ? KEY_ENUMERATE_SUB_KEYS : 0));
3029 if (!NT_SUCCESS(Status))
3030 {
3031 DPRINT("ZwOpenKey() failed with status 0x%08lx\n", Status);
3032 goto cleanup;
3033 }
3034
3035 /* Read boot resources, and add then to parent ones */
3036 Status = ZwQueryValueKey(hDeviceKey, &ConfigurationDataU, KeyValuePartialInformation, pValueInformation, ValueInfoLength, &RequiredSize);
3037 if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
3038 {
3039 ExFreePool(pValueInformation);
3040 ValueInfoLength = RequiredSize;
3041 pValueInformation = ExAllocatePool(PagedPool, ValueInfoLength);
3042 if (!pValueInformation)
3043 {
3044 DPRINT("ExAllocatePool() failed\n");
3045 ZwDeleteKey(hLevel2Key);
3046 Status = STATUS_NO_MEMORY;
3047 goto cleanup;
3048 }
3049 Status = ZwQueryValueKey(hDeviceKey, &ConfigurationDataU, KeyValuePartialInformation, pValueInformation, ValueInfoLength, &RequiredSize);
3050 }
3051 if (Status == STATUS_OBJECT_NAME_NOT_FOUND)
3052 {
3053 BootResources = ParentBootResources;
3054 BootResourcesLength = ParentBootResourcesLength;
3055 }
3056 else if (!NT_SUCCESS(Status))
3057 {
3058 DPRINT("ZwQueryValueKey() failed with status 0x%08lx\n", Status);
3059 goto nextdevice;
3060 }
3061 else if (pValueInformation->Type != REG_FULL_RESOURCE_DESCRIPTOR)
3062 {
3063 DPRINT("Wrong registry type: got 0x%lx, expected 0x%lx\n", pValueInformation->Type, REG_FULL_RESOURCE_DESCRIPTOR);
3064 goto nextdevice;
3065 }
3066 else
3067 {
3068 static const ULONG Header = FIELD_OFFSET(CM_FULL_RESOURCE_DESCRIPTOR, PartialResourceList.PartialDescriptors);
3069
3070 /* Concatenate current resources and parent ones */
3071 if (ParentBootResourcesLength == 0)
3072 BootResourcesLength = pValueInformation->DataLength;
3073 else
3074 BootResourcesLength = ParentBootResourcesLength
3075 + pValueInformation->DataLength
3076 - Header;
3077 BootResources = ExAllocatePool(PagedPool, BootResourcesLength);
3078 if (!BootResources)
3079 {
3080 DPRINT("ExAllocatePool() failed\n");
3081 goto nextdevice;
3082 }
3083 if (ParentBootResourcesLength < sizeof(CM_FULL_RESOURCE_DESCRIPTOR))
3084 {
3085 RtlCopyMemory(BootResources, pValueInformation->Data, pValueInformation->DataLength);
3086 }
3087 else if (ParentBootResources->PartialResourceList.PartialDescriptors[ParentBootResources->PartialResourceList.Count - 1].Type == CmResourceTypeDeviceSpecific)
3088 {
3089 RtlCopyMemory(BootResources, pValueInformation->Data, pValueInformation->DataLength);
3090 RtlCopyMemory(
3091 (PVOID)((ULONG_PTR)BootResources + pValueInformation->DataLength),
3092 (PVOID)((ULONG_PTR)ParentBootResources + Header),
3093 ParentBootResourcesLength - Header);
3094 BootResources->PartialResourceList.Count += ParentBootResources->PartialResourceList.Count;
3095 }
3096 else
3097 {
3098 RtlCopyMemory(BootResources, pValueInformation->Data, Header);
3099 RtlCopyMemory(
3100 (PVOID)((ULONG_PTR)BootResources + Header),
3101 (PVOID)((ULONG_PTR)ParentBootResources + Header),
3102 ParentBootResourcesLength - Header);
3103 RtlCopyMemory(
3104 (PVOID)((ULONG_PTR)BootResources + ParentBootResourcesLength),
3105 pValueInformation->Data + Header,
3106 pValueInformation->DataLength - Header);
3107 BootResources->PartialResourceList.Count += ParentBootResources->PartialResourceList.Count;
3108 }
3109 }
3110
3111 if (EnumerateSubKeys)
3112 {
3113 IndexSubKey = 0;
3114 while (TRUE)
3115 {
3116 Status = ZwEnumerateKey(hDeviceKey, IndexSubKey, KeyBasicInformation, pDeviceInformation, DeviceInfoLength, &RequiredSize);
3117 if (Status == STATUS_NO_MORE_ENTRIES)
3118 break;
3119 else if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
3120 {
3121 ExFreePool(pDeviceInformation);
3122 DeviceInfoLength = RequiredSize;
3123 pDeviceInformation = ExAllocatePool(PagedPool, DeviceInfoLength);
3124 if (!pDeviceInformation)
3125 {
3126 DPRINT("ExAllocatePool() failed\n");
3127 Status = STATUS_NO_MEMORY;
3128 goto cleanup;
3129 }
3130 Status = ZwEnumerateKey(hDeviceKey, IndexSubKey, KeyBasicInformation, pDeviceInformation, DeviceInfoLength, &RequiredSize);
3131 }
3132 if (!NT_SUCCESS(Status))
3133 {
3134 DPRINT("ZwEnumerateKey() failed with status 0x%08lx\n", Status);
3135 goto cleanup;
3136 }
3137 IndexSubKey++;
3138 DeviceName.Length = DeviceName.MaximumLength = (USHORT)pDeviceInformation->NameLength;
3139 DeviceName.Buffer = pDeviceInformation->Name;
3140
3141 Status = IopEnumerateDetectedDevices(
3142 hDeviceKey,
3143 &DeviceName,
3144 hRootKey,
3145 TRUE,
3146 BootResources,
3147 BootResourcesLength);
3148 if (!NT_SUCCESS(Status))
3149 goto cleanup;
3150 }
3151 }
3152
3153 /* Read identifier */
3154 Status = ZwQueryValueKey(hDeviceKey, &IdentifierU, KeyValuePartialInformation, pValueInformation, ValueInfoLength, &RequiredSize);
3155 if (Status == STATUS_BUFFER_OVERFLOW || Status == STATUS_BUFFER_TOO_SMALL)
3156 {
3157 ExFreePool(pValueInformation);
3158 ValueInfoLength = RequiredSize;
3159 pValueInformation = ExAllocatePool(PagedPool, ValueInfoLength);
3160 if (!pValueInformation)
3161 {
3162 DPRINT("ExAllocatePool() failed\n");
3163 Status = STATUS_NO_MEMORY;
3164 goto cleanup;
3165 }
3166 Status = ZwQueryValueKey(hDeviceKey, &IdentifierU, KeyValuePartialInformation, pValueInformation, ValueInfoLength, &RequiredSize);
3167 }
3168 if (!NT_SUCCESS(Status))
3169 {
3170 if (Status != STATUS_OBJECT_NAME_NOT_FOUND)
3171 {
3172 DPRINT("ZwQueryValueKey() failed with status 0x%08lx\n", Status);
3173 goto nextdevice;
3174 }
3175 ValueName.Length = ValueName.MaximumLength = 0;
3176 }
3177 else if (pValueInformation->Type != REG_SZ)
3178 {
3179 DPRINT("Wrong registry type: got 0x%lx, expected 0x%lx\n", pValueInformation->Type, REG_SZ);
3180 goto nextdevice;
3181 }
3182 else
3183 {
3184 /* Assign hardware id to this device */
3185 ValueName.Length = ValueName.MaximumLength = (USHORT)pValueInformation->DataLength;
3186 ValueName.Buffer = (PWCHAR)pValueInformation->Data;
3187 if (ValueName.Length >= sizeof(WCHAR) && ValueName.Buffer[ValueName.Length / sizeof(WCHAR) - 1] == UNICODE_NULL)
3188 ValueName.Length -= sizeof(WCHAR);
3189 }
3190
3191 if (RelativePath && RtlCompareUnicodeString(RelativePath, &IdentifierSerial, FALSE) == 0)
3192 {
3193 pHardwareId = &HardwareIdSerial;
3194 DeviceIndex = DeviceIndexSerial++;
3195 }
3196 else if (RelativePath && RtlCompareUnicodeString(RelativePath, &IdentifierKeyboard, FALSE) == 0)
3197 {
3198 pHardwareId = &HardwareIdKeyboard;
3199 DeviceIndex = DeviceIndexKeyboard++;
3200 }
3201 else if (RelativePath && RtlCompareUnicodeString(RelativePath, &IdentifierMouse, FALSE) == 0)
3202 {
3203 pHardwareId = &HardwareIdMouse;
3204 DeviceIndex = DeviceIndexMouse++;
3205 }
3206 else if (RelativePath && RtlCompareUnicodeString(RelativePath, &IdentifierParallel, FALSE) == 0)
3207 {
3208 pHardwareId = &HardwareIdParallel;
3209 DeviceIndex = DeviceIndexParallel++;
3210 }
3211 else if (RelativePath && RtlCompareUnicodeString(RelativePath, &IdentifierFloppy, FALSE) == 0)
3212 {
3213 pHardwareId = &HardwareIdFloppy;
3214 DeviceIndex = DeviceIndexFloppy++;
3215 }
3216 else
3217 {
3218 /* Unknown key path */
3219 DPRINT("Unknown key path '%wZ'\n", RelativePath);
3220 goto nextdevice;
3221 }
3222
3223 /* Prepare hardware id key (hardware id value without final \0) */
3224 HardwareIdKey = *pHardwareId;
3225 HardwareIdKey.Length -= sizeof(UNICODE_NULL);
3226
3227 /* Add the detected device to Root key */
3228 InitializeObjectAttributes(&ObjectAttributes, &HardwareIdKey, OBJ_KERNEL_HANDLE, hRootKey, NULL);
3229 Status = ZwCreateKey(
3230 &hLevel1Key,
3231 KEY_CREATE_SUB_KEY,
3232 &ObjectAttributes,
3233 0,
3234 NULL,
3235 ExpInTextModeSetup ? REG_OPTION_VOLATILE : 0,
3236 NULL);
3237 if (!NT_SUCCESS(Status))
3238 {
3239 DPRINT("ZwCreateKey() failed with status 0x%08lx\n", Status);
3240 goto nextdevice;
3241 }
3242 swprintf(Level2Name, L"%04lu", DeviceIndex);
3243 RtlInitUnicodeString(&Level2NameU, Level2Name);
3244 InitializeObjectAttributes(&ObjectAttributes, &Level2NameU, OBJ_KERNEL_HANDLE, hLevel1Key, NULL);
3245 Status = ZwCreateKey(
3246 &hLevel2Key,
3247 KEY_SET_VALUE | KEY_CREATE_SUB_KEY,
3248 &ObjectAttributes,
3249 0,
3250 NULL,
3251 ExpInTextModeSetup ? REG_OPTION_VOLATILE : 0,
3252 NULL);
3253 ZwClose(hLevel1Key);
3254 if (!NT_SUCCESS(Status))
3255 {
3256 DPRINT("ZwCreateKey() failed with status 0x%08lx\n", Status);
3257 goto nextdevice;
3258 }
3259 DPRINT("Found %wZ #%lu (%wZ)\n", &ValueName, DeviceIndex, &HardwareIdKey);
3260 Status = ZwSetValueKey(hLevel2Key, &HardwareIDU, 0, REG_MULTI_SZ, pHardwareId->Buffer, pHardwareId->MaximumLength);
3261 if (!NT_SUCCESS(Status))
3262 {
3263 DPRINT("ZwSetValueKey() failed with status 0x%08lx\n", Status);
3264 ZwDeleteKey(hLevel2Key);
3265 goto nextdevice;
3266 }
3267 /* Create 'LogConf' subkey */
3268 InitializeObjectAttributes(&ObjectAttributes, &LogConfU, OBJ_KERNEL_HANDLE, hLevel2Key, NULL);
3269 Status = ZwCreateKey(
3270 &hLogConf,
3271 KEY_SET_VALUE,
3272 &ObjectAttributes,
3273 0,
3274 NULL,
3275 REG_OPTION_VOLATILE,
3276 NULL);
3277 if (!NT_SUCCESS(Status))
3278 {
3279 DPRINT("ZwCreateKey() failed with status 0x%08lx\n", Status);
3280 ZwDeleteKey(hLevel2Key);
3281 goto nextdevice;
3282 }
3283 if (BootResourcesLength >= sizeof(CM_FULL_RESOURCE_DESCRIPTOR))
3284 {
3285 CmResourceList = ExAllocatePool(PagedPool, BootResourcesLength + sizeof(ULONG));
3286 if (!CmResourceList)
3287 {
3288 ZwClose(hLogConf);
3289 ZwDeleteKey(hLevel2Key);
3290 goto nextdevice;
3291 }
3292
3293 /* Add the list count (1st member of CM_RESOURCE_LIST) */
3294 ListCount = 1;
3295 RtlCopyMemory(CmResourceList,
3296 &ListCount,
3297 sizeof(ULONG));
3298
3299 /* Now add the actual list (2nd member of CM_RESOURCE_LIST) */
3300 RtlCopyMemory(CmResourceList + sizeof(ULONG),
3301 BootResources,
3302 BootResourcesLength);
3303
3304 /* Save boot resources to 'LogConf\BootConfig' */
3305 Status = ZwSetValueKey(hLogConf, &BootConfigU, 0, REG_RESOURCE_LIST, CmResourceList, BootResourcesLength + sizeof(ULONG));
3306 if (!NT_SUCCESS(Status))
3307 {
3308 DPRINT("ZwSetValueKey() failed with status 0x%08lx\n", Status);
3309 ZwClose(hLogConf);
3310 ZwDeleteKey(hLevel2Key);
3311 goto nextdevice;
3312 }
3313 }
3314 ZwClose(hLogConf);
3315
3316 nextdevice:
3317 if (BootResources && BootResources != ParentBootResources)
3318 {
3319 ExFreePool(BootResources);
3320 BootResources = NULL;
3321 }
3322 if (hLevel2Key)
3323 {
3324 ZwClose(hLevel2Key);
3325 hLevel2Key = NULL;
3326 }
3327 if (hDeviceKey)
3328 {
3329 ZwClose(hDeviceKey);
3330 hDeviceKey = NULL;
3331 }
3332 }
3333
3334 Status = STATUS_SUCCESS;
3335
3336 cleanup:
3337 if (hDevicesKey && hDevicesKey != hBaseKey)
3338 ZwClose(hDevicesKey);
3339 if (hDeviceKey)
3340 ZwClose(hDeviceKey);
3341 if (pDeviceInformation)
3342 ExFreePool(pDeviceInformation);
3343 if (pValueInformation)
3344 ExFreePool(pValueInformation);
3345 return Status;
3346 }
3347
3348 static BOOLEAN INIT_FUNCTION
3349 IopIsFirmwareMapperDisabled(VOID)
3350 {
3351 UNICODE_STRING KeyPathU = RTL_CONSTANT_STRING(L"\\Registry\\Machine\\SYSTEM\\CURRENTCONTROLSET\\Control\\Pnp");
3352 UNICODE_STRING KeyNameU = RTL_CONSTANT_STRING(L"DisableFirmwareMapper");
3353 OBJECT_ATTRIBUTES ObjectAttributes;
3354 HANDLE hPnpKey;
3355 PKEY_VALUE_PARTIAL_INFORMATION KeyInformation;
3356 ULONG DesiredLength, Length;
3357 ULONG KeyValue = 0;
3358 NTSTATUS Status;
3359
3360 InitializeObjectAttributes(&ObjectAttributes, &KeyPathU, OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE, NULL, NULL);
3361 Status = ZwOpenKey(&hPnpKey, KEY_QUERY_VALUE, &ObjectAttributes);
3362 if (NT_SUCCESS(Status))
3363 {
3364 Status = ZwQueryValueKey(hPnpKey,
3365 &KeyNameU,
3366 KeyValuePartialInformation,
3367 NULL,
3368 0,
3369 &DesiredLength);
3370 if ((Status == STATUS_BUFFER_TOO_SMALL) ||
3371 (Status == STATUS_BUFFER_OVERFLOW))
3372 {
3373 Length = DesiredLength;
3374 KeyInformation = ExAllocatePool(PagedPool, Length);
3375 if (KeyInformation)
3376 {
3377 Status = ZwQueryValueKey(hPnpKey,
3378 &KeyNameU,
3379 KeyValuePartialInformation,
3380 KeyInformation,
3381 Length,
3382 &DesiredLength);
3383 if (NT_SUCCESS(Status) && KeyInformation->DataLength == sizeof(ULONG))
3384 {
3385 KeyValue = (ULONG)(*KeyInformation->Data);
3386 }
3387 else
3388 {
3389 DPRINT1("ZwQueryValueKey(%wZ%wZ) failed\n", &KeyPathU, &KeyNameU);
3390 }
3391
3392 ExFreePool(KeyInformation);
3393 }
3394 else
3395 {
3396 DPRINT1("Failed to allocate memory for registry query\n");
3397 }
3398 }
3399 else
3400 {
3401 DPRINT1("ZwQueryValueKey(%wZ%wZ) failed with status 0x%08lx\n", &KeyPathU, &KeyNameU, Status);
3402 }
3403
3404 ZwClose(hPnpKey);
3405 }
3406 else
3407 {
3408 DPRINT1("ZwOpenKey(%wZ) failed with status 0x%08lx\n", &KeyPathU, Status);
3409 }
3410
3411 DPRINT("Firmware mapper is %s\n", KeyValue != 0 ? "disabled" : "enabled");
3412
3413 return (KeyValue != 0) ? TRUE : FALSE;
3414 }
3415
3416 NTSTATUS
3417 NTAPI
3418 INIT_FUNCTION
3419 IopUpdateRootKey(VOID)
3420 {
3421 UNICODE_STRING EnumU = RTL_CONSTANT_STRING(L"\\Registry\\Machine\\SYSTEM\\CurrentControlSet\\Enum");
3422 UNICODE_STRING RootPathU = RTL_CONSTANT_STRING(L"Root");
3423 UNICODE_STRING MultiKeyPathU = RTL_CONSTANT_STRING(L"\\Registry\\Machine\\HARDWARE\\DESCRIPTION\\System\\MultifunctionAdapter");
3424 OBJECT_ATTRIBUTES ObjectAttributes;
3425 HANDLE hEnum, hRoot;
3426 NTSTATUS Status;
3427
3428 InitializeObjectAttributes(&ObjectAttributes, &EnumU, OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE, NULL, NULL);
3429 Status = ZwCreateKey(&hEnum, KEY_CREATE_SUB_KEY, &ObjectAttributes, 0, NULL, 0, NULL);
3430 if (!NT_SUCCESS(Status))
3431 {
3432 DPRINT1("ZwCreateKey() failed with status 0x%08lx\n", Status);
3433 return Status;
3434 }
3435
3436 InitializeObjectAttributes(&ObjectAttributes, &RootPathU, OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE, hEnum, NULL);
3437 Status = ZwCreateKey(&hRoot, KEY_CREATE_SUB_KEY, &ObjectAttributes, 0, NULL, 0, NULL);
3438 ZwClose(hEnum);
3439 if (!NT_SUCCESS(Status))
3440 {
3441 DPRINT1("ZwOpenKey() failed with status 0x%08lx\n", Status);
3442 return Status;
3443 }
3444
3445 if (!IopIsFirmwareMapperDisabled())
3446 {
3447 Status = IopOpenRegistryKeyEx(&hEnum, NULL, &MultiKeyPathU, KEY_ENUMERATE_SUB_KEYS);
3448 if (!NT_SUCCESS(Status))
3449 {
3450 /* Nothing to do, don't return with an error status */
3451 DPRINT("ZwOpenKey() failed with status 0x%08lx\n", Status);
3452 ZwClose(hRoot);
3453 return STATUS_SUCCESS;
3454 }
3455 Status = IopEnumerateDetectedDevices(
3456 hEnum,
3457 NULL,
3458 hRoot,
3459 TRUE,
3460 NULL,
3461 0);
3462 ZwClose(hEnum);
3463 }
3464 else
3465 {
3466 /* Enumeration is disabled */
3467 Status = STATUS_SUCCESS;
3468 }
3469
3470 ZwClose(hRoot);
3471
3472 return Status;
3473 }
3474
3475 NTSTATUS
3476 NTAPI
3477 IopOpenRegistryKeyEx(PHANDLE KeyHandle,
3478 HANDLE ParentKey,
3479 PUNICODE_STRING Name,
3480 ACCESS_MASK DesiredAccess)
3481 {
3482 OBJECT_ATTRIBUTES ObjectAttributes;
3483 NTSTATUS Status;
3484
3485 PAGED_CODE();
3486
3487 *KeyHandle = NULL;
3488
3489 InitializeObjectAttributes(&ObjectAttributes,
3490 Name,
3491 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
3492 ParentKey,
3493 NULL);
3494
3495 Status = ZwOpenKey(KeyHandle, DesiredAccess, &ObjectAttributes);
3496
3497 return Status;
3498 }
3499
3500 NTSTATUS
3501 NTAPI
3502 IopCreateRegistryKeyEx(OUT PHANDLE Handle,
3503 IN HANDLE RootHandle OPTIONAL,
3504 IN PUNICODE_STRING KeyName,
3505 IN ACCESS_MASK DesiredAccess,
3506 IN ULONG CreateOptions,
3507 OUT PULONG Disposition OPTIONAL)
3508 {
3509 OBJECT_ATTRIBUTES ObjectAttributes;
3510 ULONG KeyDisposition, RootHandleIndex = 0, i = 1, NestedCloseLevel = 0;
3511 USHORT Length;
3512 HANDLE HandleArray[2];
3513 BOOLEAN Recursing = TRUE;
3514 PWCHAR pp, p, p1;
3515 UNICODE_STRING KeyString;
3516 NTSTATUS Status = STATUS_SUCCESS;
3517 PAGED_CODE();
3518
3519 /* P1 is start, pp is end */
3520 p1 = KeyName->Buffer;
3521 pp = (PVOID)((ULONG_PTR)p1 + KeyName->Length);
3522
3523 /* Create the target key */
3524 InitializeObjectAttributes(&ObjectAttributes,
3525 KeyName,
3526 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
3527 RootHandle,
3528 NULL);
3529 Status = ZwCreateKey(&HandleArray[i],
3530 DesiredAccess,
3531 &ObjectAttributes,
3532 0,
3533 NULL,
3534 CreateOptions,
3535 &KeyDisposition);
3536
3537 /* Now we check if this failed */
3538 if ((Status == STATUS_OBJECT_NAME_NOT_FOUND) && (RootHandle))
3539 {
3540 /* Target key failed, so we'll need to create its parent. Setup array */
3541 HandleArray[0] = NULL;
3542 HandleArray[1] = RootHandle;
3543
3544 /* Keep recursing for each missing parent */
3545 while (Recursing)
3546 {
3547 /* And if we're deep enough, close the last handle */
3548 if (NestedCloseLevel > 1) ZwClose(HandleArray[RootHandleIndex]);
3549
3550 /* We're setup to ping-pong between the two handle array entries */
3551 RootHandleIndex = i;
3552 i = (i + 1) & 1;
3553
3554 /* Clear the one we're attempting to open now */
3555 HandleArray[i] = NULL;
3556
3557 /* Process the parent key name */
3558 for (p = p1; ((p < pp) && (*p != OBJ_NAME_PATH_SEPARATOR)); p++);
3559 Length = (USHORT)(p - p1) * sizeof(WCHAR);
3560
3561 /* Is there a parent name? */
3562 if (Length)
3563 {
3564 /* Build the unicode string for it */
3565 KeyString.Buffer = p1;
3566 KeyString.Length = KeyString.MaximumLength = Length;
3567
3568 /* Now try opening the parent */
3569 InitializeObjectAttributes(&ObjectAttributes,
3570 &KeyString,
3571 OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE,
3572 HandleArray[RootHandleIndex],
3573 NULL);
3574 Status = ZwCreateKey(&HandleArray[i],
3575 DesiredAccess,
3576 &ObjectAttributes,
3577 0,
3578 NULL,
3579 CreateOptions,
3580 &KeyDisposition);
3581 if (NT_SUCCESS(Status))
3582 {
3583 /* It worked, we have one more handle */
3584 NestedCloseLevel++;
3585 }
3586 else
3587 {
3588 /* Parent key creation failed, abandon loop */
3589 Recursing = FALSE;
3590 continue;
3591 }
3592 }
3593 else
3594 {
3595 /* We don't have a parent name, probably corrupted key name */
3596 Status = STATUS_INVALID_PARAMETER;
3597 Recursing = FALSE;
3598 continue;
3599 }
3600
3601 /* Now see if there's more parents to create */
3602 p1 = p + 1;
3603 if ((p == pp) || (p1 == pp))
3604 {
3605 /* We're done, hopefully successfully, so stop */
3606 Recursing = FALSE;
3607 }
3608 }
3609
3610 /* Outer loop check for handle nesting that requires closing the top handle */
3611 if (NestedCloseLevel > 1) ZwClose(HandleArray[RootHandleIndex]);
3612 }
3613
3614 /* Check if we broke out of the loop due to success */
3615 if (NT_SUCCESS(Status))
3616 {
3617 /* Return the target handle (we closed all the parent ones) and disposition */
3618 *Handle = HandleArray[i];
3619 if (Disposition) *Disposition = KeyDisposition;
3620 }
3621
3622 /* Return the success state */
3623 return Status;
3624 }
3625
3626 NTSTATUS
3627 NTAPI
3628 IopGetRegistryValue(IN HANDLE Handle,
3629 IN PWSTR ValueName,
3630 OUT PKEY_VALUE_FULL_INFORMATION *Information)
3631 {
3632 UNICODE_STRING ValueString;
3633 NTSTATUS Status;
3634 PKEY_VALUE_FULL_INFORMATION FullInformation;
3635 ULONG Size;
3636 PAGED_CODE();
3637
3638 RtlInitUnicodeString(&ValueString, ValueName);
3639
3640 Status = ZwQueryValueKey(Handle,
3641 &ValueString,
3642 KeyValueFullInformation,
3643 NULL,
3644 0,
3645 &Size);
3646 if ((Status != STATUS_BUFFER_OVERFLOW) &&
3647 (Status != STATUS_BUFFER_TOO_SMALL))
3648 {
3649 return Status;
3650 }
3651
3652 FullInformation = ExAllocatePool(NonPagedPool, Size);
3653 if (!FullInformation) return STATUS_INSUFFICIENT_RESOURCES;
3654
3655 Status = ZwQueryValueKey(Handle,
3656 &ValueString,
3657 KeyValueFullInformation,
3658 FullInformation,
3659 Size,
3660 &Size);
3661 if (!NT_SUCCESS(Status))
3662 {
3663 ExFreePool(FullInformation);
3664 return Status;
3665 }
3666
3667 *Information = FullInformation;
3668 return STATUS_SUCCESS;
3669 }
3670
3671 RTL_GENERIC_COMPARE_RESULTS
3672 NTAPI
3673 PiCompareInstancePath(IN PRTL_AVL_TABLE Table,
3674 IN PVOID FirstStruct,
3675 IN PVOID SecondStruct)
3676 {
3677 /* FIXME: TODO */
3678 ASSERT(FALSE);
3679 return 0;
3680 }
3681
3682 //
3683 // The allocation function is called by the generic table package whenever
3684 // it needs to allocate memory for the table.
3685 //
3686
3687 PVOID
3688 NTAPI
3689 PiAllocateGenericTableEntry(IN PRTL_AVL_TABLE Table,
3690 IN CLONG ByteSize)
3691 {
3692 /* FIXME: TODO */
3693 ASSERT(FALSE);
3694 return NULL;
3695 }
3696
3697 VOID
3698 NTAPI
3699 PiFreeGenericTableEntry(IN PRTL_AVL_TABLE Table,
3700 IN PVOID Buffer)
3701 {
3702 /* FIXME: TODO */
3703 ASSERT(FALSE);
3704 }
3705
3706 VOID
3707 NTAPI
3708 PpInitializeDeviceReferenceTable(VOID)
3709 {
3710 /* Setup the guarded mutex and AVL table */
3711 KeInitializeGuardedMutex(&PpDeviceReferenceTableLock);
3712 RtlInitializeGenericTableAvl(
3713 &PpDeviceReferenceTable,
3714 (PRTL_AVL_COMPARE_ROUTINE)PiCompareInstancePath,
3715 (PRTL_AVL_ALLOCATE_ROUTINE)PiAllocateGenericTableEntry,
3716 (PRTL_AVL_FREE_ROUTINE)PiFreeGenericTableEntry,
3717 NULL);
3718 }
3719
3720 BOOLEAN
3721 NTAPI
3722 PiInitPhase0(VOID)
3723 {
3724 /* Initialize the resource when accessing device registry data */
3725 ExInitializeResourceLite(&PpRegistryDeviceResource);
3726
3727 /* Setup the device reference AVL table */
3728 PpInitializeDeviceReferenceTable();
3729 return TRUE;
3730 }
3731
3732 BOOLEAN
3733 NTAPI
3734 PpInitSystem(VOID)
3735 {
3736 /* Check the initialization phase */
3737 switch (ExpInitializationPhase)
3738 {
3739 case 0:
3740
3741 /* Do Phase 0 */
3742 return PiInitPhase0();
3743
3744 case 1:
3745
3746 /* Do Phase 1 */
3747 return TRUE;
3748 //return PiInitPhase1();
3749
3750 default:
3751
3752 /* Don't know any other phase! Bugcheck! */
3753 KeBugCheck(UNEXPECTED_INITIALIZATION_CALL);
3754 return FALSE;
3755 }
3756 }
3757
3758 LONG IopNumberDeviceNodes;
3759
3760 PDEVICE_NODE
3761 NTAPI
3762 PipAllocateDeviceNode(IN PDEVICE_OBJECT PhysicalDeviceObject)
3763 {
3764 PDEVICE_NODE DeviceNode;
3765 PAGED_CODE();
3766
3767 /* Allocate it */
3768 DeviceNode = ExAllocatePoolWithTag(NonPagedPool, sizeof(DEVICE_NODE), TAG_IO_DEVNODE);
3769 if (!DeviceNode) return DeviceNode;
3770
3771 /* Statistics */
3772 InterlockedIncrement(&IopNumberDeviceNodes);
3773
3774 /* Set it up */
3775 RtlZeroMemory(DeviceNode, sizeof(DEVICE_NODE));
3776 DeviceNode->InterfaceType = InterfaceTypeUndefined;
3777 DeviceNode->BusNumber = -1;
3778 DeviceNode->ChildInterfaceType = InterfaceTypeUndefined;
3779 DeviceNode->ChildBusNumber = -1;
3780 DeviceNode->ChildBusTypeIndex = -1;
3781 // KeInitializeEvent(&DeviceNode->EnumerationMutex, SynchronizationEvent, TRUE);
3782 InitializeListHead(&DeviceNode->DeviceArbiterList);
3783 InitializeListHead(&DeviceNode->DeviceTranslatorList);
3784 InitializeListHead(&DeviceNode->TargetDeviceNotify);
3785 InitializeListHead(&DeviceNode->DockInfo.ListEntry);
3786 InitializeListHead(&DeviceNode->PendedSetInterfaceState);
3787
3788 /* Check if there is a PDO */
3789 if (PhysicalDeviceObject)
3790 {
3791 /* Link it and remove the init flag */
3792 DeviceNode->PhysicalDeviceObject = PhysicalDeviceObject;
3793 ((PEXTENDED_DEVOBJ_EXTENSION)PhysicalDeviceObject->DeviceObjectExtension)->DeviceNode = DeviceNode;
3794 PhysicalDeviceObject->Flags &= ~DO_DEVICE_INITIALIZING;
3795 }
3796
3797 /* Return the node */
3798 return DeviceNode;
3799 }
3800
3801 /* PUBLIC FUNCTIONS **********************************************************/
3802
3803 NTSTATUS
3804 NTAPI
3805 PnpBusTypeGuidGet(IN USHORT Index,
3806 IN LPGUID BusTypeGuid)
3807 {
3808 NTSTATUS Status = STATUS_SUCCESS;
3809
3810 /* Acquire the lock */
3811 ExAcquireFastMutex(&PnpBusTypeGuidList->Lock);
3812
3813 /* Validate size */
3814 if (Index < PnpBusTypeGuidList->GuidCount)
3815 {
3816 /* Copy the data */
3817 RtlCopyMemory(BusTypeGuid, &PnpBusTypeGuidList->Guids[Index], sizeof(GUID));
3818 }
3819 else
3820 {
3821 /* Failure path */
3822 Status = STATUS_OBJECT_NAME_NOT_FOUND;
3823 }
3824
3825 /* Release lock and return status */
3826 ExReleaseFastMutex(&PnpBusTypeGuidList->Lock);
3827 return Status;
3828 }
3829
3830 NTSTATUS
3831 NTAPI
3832 PnpDeviceObjectToDeviceInstance(IN PDEVICE_OBJECT DeviceObject,
3833 IN PHANDLE DeviceInstanceHandle,
3834 IN ACCESS_MASK DesiredAccess)
3835 {
3836 NTSTATUS Status;
3837 HANDLE KeyHandle;
3838 PDEVICE_NODE DeviceNode;
3839 UNICODE_STRING KeyName = RTL_CONSTANT_STRING(L"\\REGISTRY\\MACHINE\\SYSTEM\\CURRENTCONTROLSET\\ENUM");
3840 PAGED_CODE();
3841
3842 /* Open the enum key */
3843 Status = IopOpenRegistryKeyEx(&KeyHandle,
3844 NULL,
3845 &KeyName,
3846 KEY_READ);
3847 if (!NT_SUCCESS(Status)) return Status;
3848
3849 /* Make sure we have an instance path */
3850 DeviceNode = IopGetDeviceNode(DeviceObject);
3851 if ((DeviceNode) && (DeviceNode->InstancePath.Length))
3852 {
3853 /* Get the instance key */
3854 Status = IopOpenRegistryKeyEx(DeviceInstanceHandle,
3855 KeyHandle,
3856 &DeviceNode->InstancePath,
3857 DesiredAccess);
3858 }
3859 else
3860 {
3861 /* Fail */
3862 Status = STATUS_INVALID_DEVICE_REQUEST;
3863 }
3864
3865 /* Close the handle and return status */
3866 ZwClose(KeyHandle);
3867 return Status;
3868 }
3869
3870 ULONG
3871 NTAPI
3872 PnpDetermineResourceListSize(IN PCM_RESOURCE_LIST ResourceList)
3873 {
3874 ULONG FinalSize, PartialSize, EntrySize, i, j;
3875 PCM_FULL_RESOURCE_DESCRIPTOR FullDescriptor;
3876 PCM_PARTIAL_RESOURCE_DESCRIPTOR PartialDescriptor;
3877
3878 /* If we don't have one, that's easy */
3879 if (!ResourceList) return 0;
3880
3881 /* Start with the minimum size possible */
3882 FinalSize = FIELD_OFFSET(CM_RESOURCE_LIST, List);
3883
3884 /* Loop each full descriptor */
3885 FullDescriptor = ResourceList->List;
3886 for (i = 0; i < ResourceList->Count; i++)
3887 {
3888 /* Start with the minimum size possible */
3889 PartialSize = FIELD_OFFSET(CM_FULL_RESOURCE_DESCRIPTOR, PartialResourceList) +
3890 FIELD_OFFSET(CM_PARTIAL_RESOURCE_LIST, PartialDescriptors);
3891
3892 /* Loop each partial descriptor */
3893 PartialDescriptor = FullDescriptor->PartialResourceList.PartialDescriptors;
3894 for (j = 0; j < FullDescriptor->PartialResourceList.Count; j++)
3895 {
3896 /* Start with the minimum size possible */
3897 EntrySize = sizeof(CM_PARTIAL_RESOURCE_DESCRIPTOR);
3898
3899 /* Check if there is extra data */
3900 if (PartialDescriptor->Type == CmResourceTypeDeviceSpecific)
3901 {
3902 /* Add that data */
3903 EntrySize += PartialDescriptor->u.DeviceSpecificData.DataSize;
3904 }
3905
3906 /* The size of partial descriptors is bigger */
3907 PartialSize += EntrySize;
3908
3909 /* Go to the next partial descriptor */
3910 PartialDescriptor = (PVOID)((ULONG_PTR)PartialDescriptor + EntrySize);
3911 }
3912
3913 /* The size of full descriptors is bigger */
3914 FinalSize += PartialSize;
3915
3916 /* Go to the next full descriptor */
3917 FullDescriptor = (PVOID)((ULONG_PTR)FullDescriptor + PartialSize);
3918 }
3919
3920 /* Return the final size */
3921 return FinalSize;
3922 }
3923
3924 NTSTATUS
3925 NTAPI
3926 PiGetDeviceRegistryProperty(IN PDEVICE_OBJECT DeviceObject,
3927 IN ULONG ValueType,
3928 IN PWSTR ValueName,
3929 IN PWSTR KeyName,
3930 OUT PVOID Buffer,
3931 IN PULONG BufferLength)
3932 {
3933 NTSTATUS Status;
3934 HANDLE KeyHandle, SubHandle;
3935 UNICODE_STRING KeyString;
3936 PKEY_VALUE_FULL_INFORMATION KeyValueInfo = NULL;
3937 ULONG Length;
3938 PAGED_CODE();
3939
3940
|
__label__pos
| 0.989159 |
Рейтинг: 5 / 5
Звезда активнаЗвезда активнаЗвезда активнаЗвезда активнаЗвезда активна
Сценарии в yii 2
Сценарии
Модель может быть использована в различных сценариях. Например, модель User может быть использована для коллекции входных логинов пользователей, а также может быть использована для цели регистрации пользователей.
В различных сценариях, модель может использовать различные бизнес-правила и логику. Например, атрибут email может потребоваться во время регистрации пользователя, но не во время входа пользователя в систему.
Модель использует свойство [[yii\base\Model::scenario]], чтобы отслеживать сценарий, в котором она используется. По умолчанию, модель поддерживает только один сценарий с именем default. В следующем коде показано два способа установки сценария модели:
// сценарий задается как свойство
$model = new User;
$model->scenario = User::SCENARIO_LOGIN;
// сценарий задается через конфигурацию
$model = new User(['scenario' => User::SCENARIO_LOGIN]);
По умолчанию сценарии, поддерживаемые моделью, определяются правилами валидации объявленными в модели. Однако, Вы можете изменить это поведение путем переопределения метода [[yii\base\Model::scenarios()]] как показано ниже:
namespace app\models;
use yii\db\ActiveRecord;
class User extends ActiveRecord
{
const SCENARIO_LOGIN = 'login';
const SCENARIO_REGISTER = 'register';
public function scenarios()
{
return [
self::SCENARIO_LOGIN => ['username', 'password'],
self::SCENARIO_REGISTER => ['username', 'email', 'password'],
];
}
}
Info: В приведенном выше и следующих примерах, классы моделей расширяются от [[yii\db\ActiveRecord]] потому, что использование нескольких сценариев обычно происходит от классов Active Record.
Метод scenarios() возвращает массив, ключами которого являются имена сценариев, а значения - соответствующие активные атрибуты. Активные атрибуты могут быть массово присвоены и подлежат валидации. В приведенном выше примере, атрибуты username и password это активные атрибуты сценария login, а в сценарии register так же активным атрибутом является email вместе с username и password.
По умолчанию реализация scenarios() вернёт все найденные сценарии в правилах валидации задекларированных в методе [[yii\base\Model::rules()]]. При переопределении метода scenarios(), если Вы хотите ввести новые сценарии помимо стандартных, Вы можете написать код на основе следующего примера:
namespace app\models;
use yii\db\ActiveRecord;
class User extends ActiveRecord
{
const SCENARIO_LOGIN = 'login';
const SCENARIO_REGISTER = 'register';
public function scenarios()
{
$scenarios = parent::scenarios();
$scenarios[self::SCENARIO_LOGIN] = ['username', 'password'];
$scenarios[self::SCENARIO_REGISTER] = ['username', 'email', 'password'];
return $scenarios;
}
}
Возможности сценариев в основном используются валидацией и массовым присвоением атрибутов. Однако, Вы можете использовать их и для других целей. Например, Вы можете различным образом объявлять метки атрибутов на основе текущего сценария.
Заберите ссылку на статью к себе, чтобы потом легко её найти ;)
Выберите, то, чем пользуетесь чаще всего:
Спасибо за внимание, оставайтесь на связи! Ниже ссылка на форум и обсуждение ; )
Войдите чтобы комментировать
Обсудить эту статью
INFO: Вы отправляете сообщение как 'Гость'
|
__label__pos
| 0.645074 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
First off, don't flame me for not searching, I've looked for answers to this and while there are answers out there, I can't understand any of them.
Now, with that aside, I'm trying to put my ftp command into an ASync task for Android.
Code:
package com.dronnoc.ftp;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.io.CopyStreamEvent;
import org.apache.commons.net.io.CopyStreamListener;
import org.apache.commons.net.io.Util;
import android.app.Activity;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ProgressBar;
import android.widget.TextView;
public class Main extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
FTPTransfer ftp = new FTPTransfer("hostname", "user", "pass");
String data = Environment.getDataDirectory().toString();
data += "/data/com.dronnoc.ftp/databases/test.db";
boolean result = ftp.upload(data, "/ftp_dir/test.db");
if(result)
{
Log.d("message", "Upload Successful.");
}
else
{
Log.e("message", ftp.error());
}
ftp.close();
}
}
FTPTransfer.java
package com.dronnoc.ftp;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.SocketException;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPConnectionClosedException;
public class FTPTransfer {
final FTPClient ftp = new FTPClient();
private String error = null;
private boolean connected = false;
public FTPTransfer(String host, String username, String pass) {
// TODO Auto-generated constructor stub
try {
ftp.connect(host);
if(ftp.login(username, pass))
{
ftp.enterLocalPassiveMode();
connected = true;
}
}
catch (FTPConnectionClosedException e) { error = e.getMessage(); }
catch (SocketException e) { error = e.getMessage(); }
catch (IOException e) { error = e.getMessage(); }
}
public boolean upload(String localName, String remoteName)
{
if(ftp.isConnected() && connected)
{
try {
FileInputStream file = new FileInputStream(localName);
boolean result = ftp.storeFile(remoteName, file);
if(result) { return true; }
else { return false; }
}
catch (FileNotFoundException e) { error = e.getMessage(); return false; }
catch (IOException e) { error = e.getMessage(); return false; }
}
return false;
}
public boolean upload(File localName, String remoteName)
{
if(ftp.isConnected() && connected)
{
try {
FileInputStream file = new FileInputStream(localName.getAbsolutePath());
boolean result = ftp.storeFile(remoteName, file);
if(result) { return true; }
else { return false; }
}
catch (FileNotFoundException e) { error = e.getMessage(); return false; }
catch (IOException e) { error = e.getMessage(); return false; }
}
return false;
}
public boolean download(String remote, String local)
{
//TODO Put appropriate code here
return false;
}
public boolean close()
{
try {
ftp.disconnect();
return true;
} catch (IOException e) {
error = e.getMessage();
return false;
}
}
public String error()
{
return error;
}
}
What I want to know is how I can put my FTP function into an ASync task so that I can run it in a background activity and update a progress bar, and also indicate how many bytes its uploaded so far?
Cheers
EDIT The code itself works at the moment, I just need to know how to make it into an Async task
share|improve this question
add comment
1 Answer
up vote 0 down vote accepted
Not sure if you're asking how to use the AsyncTask but in case you are here's a tutorial about using AsyncTask to make requests to a web service. This can be extended easily to perform FTP in the background. AsyncTask can also support Progress though I don't think it's mentioned in the tutorial.
Basically your upload function needs to move to doInBackground and so does the connection code. See http://geekjamboree.wordpress.com/2011/11/22/asynctask-call-web-services-in-android/
share|improve this answer
Thanks, that helped me understand it a little more, though I'm not entirely sure on how to impliment it? I've never had to work with an Async task before, so I haven't had any experience at all – Spiritfyre Jan 5 '12 at 3:25
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.996805 |
Can I upgrade/resize my Droplet?
July 2, 2012 57.9k views
26 Answers
I just wanted to provide an update to this question in case anyone still stumbles across it. Since this was asked, we introduced some new features around resizing. There are now two options, permanent and flexible.
Permanent resizing allows you to resize your disk space as well as CPU and RAM. As the name implies, this is a one-way operation. After increasing the disk size, you will not be able to decrease it.
Flexible resizing only upgrades your CPU and RAM. This option is reversible and gives you the flexibility to scale up and down as needed.
For more info, check out this article:
Resizing your servers can be an effective way of increasing their capacity, by allowing them to utilize more memory (RAM), CPU, and disk storage. The ability to resize a server, also known as vertical scaling, can be useful in a variety of situations that prompt the need for a more powerful server, such as if your concurrent user base increases or if you need to store more data. In this tutorial, we will show you how to resize your server, also known as a droplet, on DigitalOcean.
Yes. You can resize your Droplet UP anytime using the control panel's resize feature. Your configuration, data, and IP addresses will be preserved.
Given the disk space doesn't change is the cost at all different after a fast resize? Or is the rate the same, for example going from $10 flat to $20 flat?
If there is available space on the existing hypervisor it is automatically resized with a simple reboot and takes about 15 seconds.
Otherwise the virtual machine will need to be migrated to another physical hypervisor and the migration time is dependent on how much disk space you are using on your server and can take up to 20-25 minutes.
We're in the process of updating our resize functionality so that users are warned in this case and can decide to either initiate the action or not.
I have the same issue, I "fast-resized" up, but the disk space remains the same. I think it is DISHONEST that DO bills you for something you are not receiving (the disk space).
I think in this cases DO should bill only for the CPU and RAM.
How long does it take to resize? Do we need to shutdown the server, resize and reboot? OR does it automatically resize without any restarts?
What about billing stuff when doing an upgrade for lets say a week and then downgrading to the smaller instance again? How does this work?
Since our droplets are charged per hour, your droplet would be charged at the higher rate for the week (168 hours) that it was at the larger size.
Once you resize back down, your droplet would be once again charged per hour at the lower rate.
But how about if I also want to change the datacenter? My droplet will be preserved?
I clicked on the "Fast Resize" button but it says that service is unavailable???
@info: Please open up a support ticket and perhaps attach a screenshot too so we can identify the issue. Thanks!
@Ricardo: It is clearly stated that disk space will not be changed: fast-resize
A nice feature would be to be able to change the CPU and/or RAM before doing a shell based restart.
That way I could take my time to resize and only be down for the time to complete a shutdown -r now.
I don't mind so much that the disk size remains the same, as I'm guessing that a resize is most likely often due to CPU or RAM shortage.
If your needs for space resize occurs often, then perhaps AWS would be a better option (or hope that DO will copy the option to add extra partitions)
@Kamal I'm not saying that a warning is not included. What I'm saying is that I feel it is not fully honest to bill for something the customer is not using.
@Ricardo: We're working on re-implementing Migrate-Resize which will resize disk space as well but will take a longer time to process. Until then, you can "claim" your "missing" disk space by taking a snapshot of your droplet, destroying it, and recreating it.
When you create a snapshot and want to upgrade to a larger server would you just resize it, or deploy a new droplet and after it has been launched delete the smaller one? Could you clarify each scenario please.
If you do make a snapshot will the server image be preserved, such as ssh keys, settings etc when it is used on a droplet?
I assume that in order to get disk space to properly show up you need to do a full reboot?
And lastly regarding snapshots, from what I've seen are all snapshots free, regardless of size, and # of snapshots?
When you create a snapshot and want to upgrade to a larger server would you just resize it, or deploy a new droplet and after it has been launched delete the smaller one? Could you clarify each scenario please.
You can't resize an existing droplet's disk space. To do that, you have to create a new droplet of a larger size off of the snapshot.
If you do make a snapshot will the server image be preserved, such as ssh keys, settings etc when it is used on a droplet?
Yes. A snapshot is a full copy of your droplet.
And lastly regarding snapshots, from what I've seen are all snapshots free, regardless of size, and # of snapshots?
Starting August 2013, snapshots will be $0.02/used GB/month. For more information visit https://www.digitalocean.com/community/articles/digitalocean-backups-and-snapshots-explained
This tutorial provides an explanation for how backup and snapshots work on DigitalOcean. Additionally, it includes information on how to scale, backup, and clone out servers with snapshots.
I was trying the 'fast resize' feature and it is processing for over an hour. The response to my support ticket was "I am still seeing it processing on my end". I'm glad that we are all seeing it processing, but this is unacceptable in production. How can I rely on such a faulty feature for my online app? Avoid resizing? What would be the best practice in order to avoid such a long downtime???
Did you got an reply about this issue?
If you resize with snapshot to increase disc size, and create new droplet, do you keep same IP?
e.g. if i had a cPanel install, and wanted to increase the size to meet increased client signup, your suggestion is a new droplet, with all data preserved.
And how I can upgrade my disk size without changing my IP address?
This tutorial covers how to manually migrate droplets between hypervisors by taking a snapshot of the droplet and then spinning it up in a large or smaller size.
@Kamal
I guess you can't resize without changing the IP address, which is what Cristiano was asking.
I followed your link and as expected, I have a new IP. So, in order to resize my disk space for a production server, I have to change all DNS.
@dharris: If you delete the original droplet, and then create a new one from the snapshot using the same name, we try to automatically return your original IP address. This isn't guaranteed though. If you open a support ticket, we will try to coordinate with you to ensure the IP address is retained.
if i m using the resize option from control panel how it is implement and is it my ip will be the same for my site also how i can shift back after some time
thanks
Have another answer? Share your knowledge.
|
__label__pos
| 0.712378 |
我所理解的monad(7):把monad看做行为的组合子
先从最简单的情况开始,我们用monad封装一段执行逻辑,然后提供一个map组合子,可以组合后续行为:
// 一个不完善的monad
class M[A](inner: => A) {
// 执行逻辑
def apply() = inner
// 组合当前行为与后续行为,返回一个新的monad
def map[B](next: A=>B): M[B] = new M(next(inner))
}
用上面的例子模拟几个行为的组合:
scala> val first = new M({println("first"); "111"})
first: M[String] = M@745b171c
scala> val second = first.map(x => {println("second"); x.toInt})
second: M[Int] = M@155f28dc
scala> val third = second.map(x => {println("third"); x + 200})
third: M[Int] = M@b345419
scala> third()
first
second
third
res0: Int = 311
看上去对行为的组合也不过如此,其实在Function1这个类里已经提供了对只有一个参数的函数的组合:
scala> class A; class B; class C
scala> val f1: A=>B = (a:A) => new B
scala> val f2: B=>C = (b:B) => new C
scala> val f3 = f1 andThen f2
f3: A => C = <function1>
Function1里的andThen方法可以把前面函数的结果交给后边的函数处理,A=>BB=>C 组成函数 A=>C
这样看上去我们当前实现的monad和Function1有些相似,封装了一段行为,提供方法将下一个行为组合成新的行为(可以不断的组合下去),在java里我们可以对比Command模式和Composite模式。
真实世界的”行为”(Action),普通函数的问题
1) 结果的不确定性,行为链的校验问题
这个不完善的monad已经可以做一些事情,但有个很显著的问题,只能组合”普通函数(plain)”,所谓普通函数是指结果类型就是我们想要的数据类型。
比如f: A=>B 这个函数可以表示一个行为,这个行为最终得到的数据类型是B,但如果这个行为遇到异常,或者返回为null,这时我们的组合过程会如何?
这意味着我们必须在组合过程中判断每个函数的结果是否合法,只有合法的情况下,才能与下一步组合。你可能会想把这些判断放在组合逻辑里不就得了吗?
def map[B](next: A=>B): M[B] = {
val result = inner // 执行了当前行为
if(result != null) {
new M(next(result))
}else {
null.asInstanceOf[M[B]]
}
}
上面的情况不是我们希望的,因为要判断当前行为(inner)的结果是否符合传递给下一个行为,只有执行当前行为才能拿到结果。而我们希望组合子做的是先把所有行为组合起来,最后再一并执行。
看来只能要求next函数在实现的时候做了异常检测,但即使next函数做了判断,也不一定完美;因为行为已经先被组合好了,执行的时候,链上的每个函数仍会被调用。假设组合了多个函数,执行时中间的一个函数即使为null,仍会传递给下一个执行(下一个也必须也对参数检测),其实这个时候已经没有必要传递下去了
在scala里对这种结果不确定的情况,提供了一个专门的Option,它有两个子类:SomeNone,其中Some表示结果是一个正常值,可以通过模式匹配来获取到这个值,而None则表示为空的情况。
如果我们Option来表示我们前面的行为,可以写为:f: A=>Option[B],即结果被一个富类型给包装起来,它表示结果可能有值,也可能为空。
2) 副作用的问题
另外一个真实世界中无法用函数式完美解决的问题是IO操作,因为IO无论如何总要伴随状态的产生或变化(也就是副作用)
一段常见的举例片段:
def read(state: State) = {
// 返回下一状态和读取到的字符串
(state.nextState, readLine)
}
def write(state: State, str: String) = {
//返回下一状态,和字符串处理的结果
//这里是print返回为Unit类型,所以最终返回一个State和Unit的二元组
(state.nextState, println(str))
}
这两个函数类型为State => (State, String)(State,String) => (State, Unit) 在入参和出参里都伴随State,是一种”不纯粹”的函数,因为每次都执行状态都不一样,即使两次的字符串是一样的,状态也是不同的。这是一种典型的“非引用透明”问题,这样的函数无法满足结合率,函数的组合性无法保障。
要实现组合特性,需要把函数转换为符合“引用透明”的特性,我们可以通过柯里化来把两个函数转化为高阶函数:
def read =
(state: State) => (state.nextState, readLine)
def write(str: String) =
(state: State) => (state.nextState, println(str))
现在这两个函数相当于只是构造了行为,要执行它们需要后续传入“状态”才行;等于把执行推迟了;同时这两个函数现在符合“引用透明”特性了。
再进一步,为了把State在构造行为的时候给隐藏起来,我们继续重构:
class IOAction[A](expression: =>A) extends Function1[State, (State,A)] {
def apply(s:State) = (state.next, expression)
}
def read = new IOAction[String](readLine)
def write(str: String) = new IOAction[Unit](println(str))
现在我们可以把read看做是一个 () => IOAction[String] 的函数,write则是:String => IOAction[Unit]的函数。
用一个”富类型”表示行为的结果
我们看到现实世界的行为,除了可以用A=>B这样的plain function描述,还有A=>Option[B]A=>IOAction[B] 这种结果是一个富类型的函数来描述。我们把后两种统一描述为:
A => Result[B]
当我们要组合一个这种形式的行为时,不能再使用map,而是flatMap组合子。
实际上,我们一开始就提到过map并不是必须的方法,flatMap才是,可以把map看做只是一种特例。把行为都用统一形式A => Result[B] 来描述,对于普通的A=>B也可以转为A=>Result[B]
现在我们看几个flatMap的例子,先看一个Option的实际例子:
scala> Some({println("111"); new A}).
flatMap(a => {println("222");Some(new B)}).
flatMap(b => {println("333"); None}).
flatMap(c => {println("444"); Some(new D)})
111
222
333
res14: Option[D] = None
上面的例子看到,在组合第三步的时候,得到None,后边的第四步c => {println("444"); Some(new D)}没有被执行。这是因为None与后续的行为结合时,会忽略后续的行为。
我所理解的monad(6):从组合子(combinator)说起
从同事的反馈了解到前边几篇monad的介绍说的有些抽象。现在还没有提出monad,如果把monoid与endofunctor结合起来,恐怕更是抽象。还是回到用大家更能体会的场景来描述,等对monad有了基本的印象之后,最后再试图把monad在现实世界的表现与范畴论里的形式结合起来。
我想到一个比较好的方式是从行为的组合来说monad,因为这个层面的例子容易接受,最近在用的spray-routing框架,它的核心Directive(指令)就是一个行为组合子monad
最初知道组合子(combinator)这个单词是从《PIS》书中专门有一章讲“连接符解析”(combinator parsing),当时没有理解combinator的概念,那章也是偏重说parser的;后来在很多资料里看到把monad也当成combinator对待,一直没有悟清楚各种combinator是不是一回事,直到看到了ajoo的系列文章(推荐一下):
论面向组合子程序设计方法:
http://www.blogjava.net/ajoo/category/6968.html
论面向组合子程序设计方法 之一 创世纪
论面向组合子程序设计方法 之二 失乐园
论面向组合子程序设计方法 之三 失乐园之补充
论面向组合子程序设计方法 之四 燃烧的荆棘
论面向组合子程序设计方法 之五 新约
论面向组合子程序设计方法 之六 oracle
论面向组合子程序设计方法 之七 重构
论面向组合子程序设计方法 之八 monad
论面向组合子程序设计方法 之九 南无阿弥陀佛
论面向组合子程序设计方法 之十 还是重构
论面向组合子程序设计方法 之十一 微步毂纹生
ajoo是个牛人,早期在javaeye有看到过他实现了一套基于jvm的haskell,取名“jaskell”。那也是好些年前了,在javaeye围观他与Trustno1等人对函数式编程的讨论,云里雾里。现在感觉稍微能跟这些大牛们有一些对话,却找不到这样的圈子了。引用ajoo的话:
组合子,英文叫combinator,是函数式编程里面的重要思想。如果说OO是归纳法(分析归纳需求,然后根据需求分解问题,解决问题),那么 “面向组合子”就是“演绎法”。通过定义最基本的原子操作,定义基本的组合规则,然后把这些原子以各种方法组合起来。
引用另外一位函数式领域的大师的说法:
A combinator is a function which builds program fragments from program fragments
这句话有一些“自相似性”,可以把combinator先当成一种“胶水”,更具体一些,把scala的集合库里提供的一些函数看成combinator:
map
foreach
filter
fold/reduce
zip/partition
flatten/flatMap
而monad正是一个通用的combinator,也是一种设计模式,但它有很多面,就我后续的举例来说,先把monad看成一个“行为”的组合子。体现在代码上:
class M[A](value: A) {
// 1) 把普通类型B构造为M[B]
// 因为M定义为class并提供了构造方法,可以通过new关键字来构造,该工厂方法可以省略
def unit[B] (value : B) = new M(value)
// 2) 不是必须的
def map[B](f: A => B) : M[B] = flatMap {x => unit(f(x))}
// 3) 必须,核心方法
def flatMap[B](f: A => M[B]) : M[B] = ...
}
一个monad内部除了一个工厂方法unit(注意,与Unit类型没有关系,unit这里表示“装箱”,源自haskell里的叫法),还包含了两个combinator方法,mapflatMap,这两个组合子中flatMap是不可少的,map可以通过flatMap来实现。
unit方法不一定会定义在monad类内部,很多情况下会在伴生对象中通过工厂方法来实现。
这里忽略了monad里的法则,重点先理解M封装了什么?以及内部的mapflatMap的含义是什么。
我所理解的monad(5):自函子(Endofunctor)是什么
经过前面一篇对函子(Functor)的铺垫,我们现在可以看看什么是自函子(Endofunctor)了,从范畴的定义看很简单:
自函子就是一个将范畴映射到自身的函子 (A functor that maps a category to itself)
这句话看起来简单,但有个问题,如何区分自函子与Identity函子?让我们先从简单的“自函数”来看。
自函数(Endofunction)
自函数是把一个类型映射到自身类型,比如Int=>Int, String=>String
注意自函数与Identity函数的差异,Identity函数是什么也不做,传入什么参数返回什么参数,它属于自函数的一种特例;自函数是入参和出参的类型一致,比如 (x:Int) => x * 2(x:Int) => x * 3 都属于自函数:
自函子(Endofunctor)
自函子映射的结果是自身,下图是一个简单的情况:
假设这个自函子为F,则对于 F[Int] 作用的结果仍是Int,对于函数f: Int=>String 映射的结果 F[f] 也仍是函数f,所以这个自函子实际是一个Identity函子(自函子的一种特例),即对范畴中的元素和关系不做任何改变。
那怎么描述出一个非Identity的自函子呢?在介绍范畴在程序上的用法的资料里通常都用haskell来举例,把haskell里的所有类型和函数都放到一个范畴里,取名叫Hask,那么对于这个Hask的范畴,它看上去像是这样的:
先来解释一下(画这个图的时候做了简化),A,B代表普通类型如String,Int,Boolean等,这些(有限的)普通类型是一组类型集合,还有一组类型集合是衍生类型(即由类型构造器与类型参数组成的),这是一个无限集合(可以无限衍生下去)。这样范畴Hask就涵盖了haskell中所有的类型。
对于范畴Hask来说,如果有一个函子F,对里面的元素映射后,其结果仍属于Hask,比如我们用List这个函子:
List[A], List[List[A]], List[List[List[A]]]...
发现这些映射的结果也是属于Hask范畴(子集),所以这是一个自函子,实际上在Hask范畴上的所有函子都是自函子。
我们仔细观察这个Hask范畴的结构,发现它实际是一个fractal结构,所谓fractal(分形),是个很神奇的结构,在自然界也大量存在:
如上面的这片叶子,它的每一簇分支,形状上与整体的形状是完全一样的,即局部与整体是一致的结构(并且局部可以再分解下去)
这种结构在函数式语言里也是很常用的,最典型的如List结构,由headtail 两部分组合而成,而每个tail也是一个List结构,可以递归的分解下去。
我所理解的monad(4):函子(functor)是什么
大致介绍了幺半群(monoid)后,我们重新回顾最初引用wadler(haskell委员会成员,把monad引入haskell的家伙)的那句话:
一个单子(Monad)说白了不过就是自函子范畴上的一个幺半群而已
现在我们来解读这句话中包含的另一个概念:自函子(Endofunctor),不过我们先需要一些铺垫:
首先,什么是函子(Functor)?
乍一看名字,以为函子(functor)对函数(function)是一种封装,实际没有关系,尽管他们都是表示映射,但两者针对的目标不一样。
函数表达的映射关系在类型上体现在特定类型(proper type)之间的映射,举例来说:
// Int => String
scala> def foo(i:Int): String = i.toString
// List[Int] => List[String]
scala> def bar(l:List[Int]): List[String] = l.map(_.toString)
// List[T] => Set[T]
scala> def baz[T](l:List[T]): Set[T] = l.toSet
而函子,则是体现在高阶类型(确切的说是范畴,可把范畴简单的看成高阶类型)之间的映射(关于高阶类型参考: scala类型系统:24) 理解 higher-kinded-type),听上去还是不够直观,函子这个术语是来自群论(范畴论)里的概念,表示的是范畴之间的映射,那范畴又与类型之间是什么关系?
把范畴看做一组类型的集合
假设这里有两个范畴:范畴C1 里面有类型String 和类型 Int;范畴C2 里面有 List[String]List[Int]
函子表示范畴之间的映射
从上图例子来看,这两个范畴之间有映射关系,即在C1里的Int 对应在C2里的List[Int],C1里的String对应C2里的List[String],在C1里存在Int->String的关系态射(术语是morphism,我们可理解为函数),在C2里也存在List[Int]->List[String]的关系态射。
换句话说,如果一个范畴内部的所有元素可以映射为另一个范畴的元素,且元素间的关系也可以映射为另一个范畴元素间关系,则认为这两个范畴之间存在映射。所谓函子就是表示两个范畴的映射。
怎么用代码来描述函子?
从上图的例子,我们已经清楚了functor的含义,即它包含两个层面的映射:
1) 将C1中的类型 T 映射为 C2 中的 List[T] : T => List[T]
2) 将C1中的函数 f 映射为 C2 中的 函数fm : (A => B) => (List[A] => List[B])
要满足这两点,我们需要一个类型构造器
trait Functor[F[_]] {
def typeMap[A]: F[A]
def funcMap[A,B](f: A=>B): F[A]=>F[B]
}
我们现在可以把这个定义再简化一些,类型的映射方法可以不用,并把它作为一个type class
trait Functor[F[_]] {
def map[A,B](fa: F[A], f: A=>B): F[B]
}
现在我们自定义一个My[_]的类型构造器,测试一下这个type class:
scala> case class My[T](e:T)
scala> def testMap[A,B, M <: My[A]](m:M, f: A=>B)(implicit functor:Functor[My]) = {
| functor.map(m,f)
| }
scala> implicit object MyFunctor extends Functor[My] {
| def map[A,B](fa: My[A], f:A=>B) = My(f(fa.e))
| }
//对 My[Int], 应用函数 Int=>String 得到 My[String]
scala> testMap(My(200), (x:Int)=>x+"ok")
res9: My[String] = My(200ok)
不过大多数库中对functor的支持,都不是通过type class模式来做的,而是直接在类型构造器的定义中实现了map方法:
scala> case class My[A](e:A) {
| def map[B](f: A=>B): My[B] = My(f(e))
| }
scala> My(200).map(_.toString)
res10: My[String] = My(200)
这样相当于显式的让My同时具备了对类型和函数的映射(A->My[A]A=>B -> My[A]=>My[B];在haskell里把这两个行为也叫提升(lift),相当于把类型和函数放到容器里),所以我们也可以说一个带有map方法的类型构造器,就是一个函子。
范畴与高阶类型
我们再来思考一下,如果忽略范畴中的关系(函数),范畴其实就是对特定类型的抽象,即高阶类型(first-order-type或higher-kinded-type,也就是类型构造器),那么对于上面例子中的”范畴C2″,它的所有类型都是List[T]的特定类型,这个范畴就可以抽象为List高阶类型。那对于”范畴C1″呢?它又怎么抽象?其实,”范畴C1″的抽象类型可以看做是一个Identity类型构造器,它与任何参数类型作用构造出的类型就是参数类型:
scala> type Id[T] = T
是不是很像单位元的概念?在shapeless里已经提起过
这么看的话,如果范畴也可以用类型(高阶)来表达,那岂不是只用普通函数就可以描述它们之间的映射了?别急,先试试,方法里是不支持类型构造器做参数的:
scala> def foo(cat: Id) = print(cat)
<console>:18: error: type Id takes type parameters
方法中只能使用特定类型(proper type)做参数。
|
__label__pos
| 0.887897 |
Take the 2-minute tour ×
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
I am interested in trying to understand the following problem. Let $G$ be a connected simple algebraic group of type $D_n$, (with $n\geqslant 4$ even), defined over an algebraically closed field of odd characteristic. Then in such a group there are unipotent classes which are so called degenerate unipotent classes. These are the unipotent classes whose Jordan normal forms are parameterised by partitions $\lambda$ of $2n$ such that all parts of $\lambda$ are even and every even number occurs an even number of times. For example if $G$ is $SO_8(\mathbb{K})$ and $\lambda$ is either the partition $(4,4)$ or $(2,2,2,2)$. They are degenerate as there are two distinct conjugacy classes of unipotent elements in $G$ whose Jordan normal forms are both parameterised by $\lambda$.
Let $\mathcal{O}$ represent one of the two degenerate conjugacy classes of $G$. We can write the partition $\lambda$ as $(2\eta_1,2\eta_1,2\eta_2,2\eta_2,\dots,2\eta_k,2\eta_k)$ for some natural number $k$. It is commented by Carter, (in Finite Groups of Lie Type: Complex Characters and Conjugacy Classes section 13.3), that this class is a Richardson class for a parabolic subgroup $P$ of $G$ whose Levi complement has semisimple type $A_{2\eta_1-1} \times A_{2\eta_2-1} \times \cdots \times A_{2\eta_k -1}$. If we assume the branch point of the Dynkin diagram is on the right of the diagram then there are two such parabolic subgroups arising from the choice to be made over the extremal right hand side node in the construction of the root subsystem of type $A_{2\eta_k-1}$.
EDIT: To be very specific if $G$ is of type $D_4$ let us assume that the simple roots $\Delta = \{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ are labelled as in Bourbaki, (Groupes et algèbres de Lie: Chapitres 4 à 6). Then the two Levi subgroups of type $A_3$ correspond to the subsets $\{\alpha_1,\alpha_2,\alpha_3\}$ and $\{\alpha_1,\alpha_2,\alpha_4\}$ of the roots and the two Levi subgroups of type $A_1\times A_1$ corespond to the subsets $\{\alpha_1,\alpha_3\}$ and $\{\alpha_1,\alpha_4\}$.
Using the Bala-Carter theorem we can associate to this unipotent class a Levi subgroup of $G$ and a distinguished parabolic subgroup of the Levi subgroup. Now we know from Bala and Carter's classification of the distinguished parabolic subgroups in a simple group of type $D$ that the Levi associated to these classes cannot be $G$ itself. This is because the right extremal nodes of the weighted Dynkin diagrams of this class have values 0 and 2 but in a distinguished parabolic they must either be both 2 or both 0. Therefore we must have that the Levi is a direct product of type $A$ components and the distinguished parabolic is the unique distinguished parabolic in a group of type A, (the Borel).
My question is then the following. Will the Levi subgroup associated to the class $\mathcal{O}$ from the Bala-Carter theorem be conjugate to the Levi complement of the parabolic subgroup $P$. Or alternatively is there a way, say from the weighted Dynkin diagram, that one can determine which Levi subgroup $L$ will be such that $u \in \mathcal{O}$ is distinguished in $L$?
For example if $G$ is $SO_8(\mathbb{K})$ and $\lambda$ is the partition $(4,4)$ then is $L$ a Levi subgroup of type $A_3$?
Thanks for any help anyone may be able to give me with this.
EDIT: Some clarifications of the language, due to the suggestion of Jim.
share|improve this question
A couple of very minor edits. Before attempting to answer this, let me point out that language like "right extremal nodes" of the Dynkin diagram is a little fuzzy for type $D_4$ (but probably without affecting your question). And the initial description "simple connected semisimple" sounds awkward; but I guess you are mainly concerned just with $SO_{2n}$ throughout. – Jim Humphreys Dec 6 '10 at 16:13
Yes, you're right. I have made it a bit clearer now for $D_4$, especially as this is the running example in my post. Yeah, I always get a bit haphazard with this. I get so used to writing "connected reductive" or "connected semisimple" that I forget when referring to a connected simple algebraic group that this is redundant. Thanks for the suggestion. In fact, no. I'm actually interested in these classes in the half spin group $HSpin_{2n}(\mathbb{K})$. – Jay Taylor Dec 6 '10 at 17:00
add comment
1 Answer
up vote 1 down vote accepted
It gets complicated to compare the different ways to parametrize or realize a unipotent class (or equivalently, in good characteristic, a nilpotent orbit in the Lie algebra). But I think the answer to the basic question here is no, unless I'm misreading it. When a class happens to be Richardson (the unique orbit intersecting densely the unipotent radical of a given parabolic subgroup), the dimension of the class is twice the dimension of the unipotent radical in question. The class can also be determined by the Bala-Carter method, starting with a Levi subgroup of $G$ and its Borel subgroup or other distinguished parabolic. Here the class is the Richardson class determined by that distinguished parabolic.
Taking just the example $D_4$ with the given class being one of two determined by the partition $(4,4)$, either class (and a third one as well) has dimension 20. Here the Richardson viewpoint starts with a parabolic subgroup of Levi type $A_2$ whose unipotent radical has dimension 10. But the Bala-Carter viewpoint starts with a Levi subgroup of type $A_3$ together with its distinguished parabolic (Borel) subgroup having a unipotent radical of dimension 10. So the two Levi subgroups in the picture are far from being conjugate.
Generally speaking, the Richardson method starts with a parabolic subgroup having a small Levi subgroup to yield a big class, while the Bala-Carter method starts with a big Levi subgroup to yield the same big class. The former method doesn't usually yield all unipotent classes, whereas the latter method gets them all. But the two methods are roughly dual.
Going back to $D_4$, what you can observe in the closure diagram is a duality between the two situations in which a single partition labels two classes. But there is still the notational task of sorting out which parabolic or Levi goes with which of the two classes in each case. There is a lot to be said here in general for type $D_{2n}$. Let me just add that unipotent classes or nilpotent orbits don't depend on the isogeny type of the group, so special orthogonal groups are convenient to work with.
share|improve this answer
Thanks for your answer Jim. Just a couple of questions. Sticking with the example of $D_4$ and the partition $(4,4)$. Do you really mean to say that the Bala-Carter theory gives a parabolic with Levi complement $A_3$ and the Richardson parabolic is of type $A_2$? What I got the impression that Carter was saying was that the parabolic for which the class is a Richardson class has Levi complement $A_3$. This is mentioned in his book (pg. 423 to be precise) in his description of the Springer correspondence. – Jay Taylor Dec 6 '10 at 18:51
So let's just ignore the parabolic coming from the Richardson theory altogether. Can one determine in a nice way specifically the Levi subgroup attached to the class in the Bala-Carter theory? I had the impression that as the partition determines the Jordan normal form of the matrix that this would determine the Levi subgroup. I guess in my mind in the example in $D_4$ the two Jordan blocks of size 4 fit very nicely in some $A_3$ Levi, which will look like $GL_4\times GL_4$ in $SO_8$, (except one $GL_4$ is determined by the other). Using matrices in $SO_{2n}$ would probably make this concrete. – Jay Taylor Dec 6 '10 at 19:00
I'll get back into this literature a little later, but at first glance it looks as if your initial concern is about labelling conventions. This gets messy, since one can also look at dual partitions, etc. I don't recall immediately how Bala-Carter connects with partition labels for the classical groups. – Jim Humphreys Dec 6 '10 at 19:27
P.S. Concerning your first comment, I think you are misreading the complicated page 423 in Carter's book (e.g., how his $\xi_i$ are related to $\lambda$). Concerning Bala-Carter, their original papers may help. But they require pairs: Levi subgroup plus its distinguished parabolic. Class dimensions can be useful in keeping track of partition labels, though in your case the latter by themselves are ambiguous. Too much notation in the whole subject, unfortunately. You need to focus your questions a little more accordingly. – Jim Humphreys Dec 6 '10 at 21:58
OK, well thanks for all your help and very useful comments Jim. I'll take a look at the Bala-Carter papers to see if I can sort out my labelling issues. Thanks again. – Jay Taylor Dec 7 '10 at 8:10
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.739412 |
NAG Library Routine Document
f08bqf (ztpmqrt)
1
Purpose
f08bqf (ztpmqrt) multiplies an arbitrary complex matrix C by the complex unitary matrix Q from a QR factorization computed by f08bpf (ztpqrt).
2
Specification
Fortran Interface
Subroutine f08bqf ( side, trans, m, n, k, l, nb, v, ldv, t, ldt, c1, ldc1, c2, ldc2, work, info)
Integer, Intent (In):: m, n, k, l, nb, ldv, ldt, ldc1, ldc2
Integer, Intent (Out):: info
Complex (Kind=nag_wp), Intent (In):: v(ldv,*), t(ldt,*)
Complex (Kind=nag_wp), Intent (Inout):: c1(ldc1,*), c2(ldc2,*), work(*)
Character (1), Intent (In):: side, trans
C Header Interface
#include <nagmk26.h>
void f08bqf_ (const char *side, const char *trans, const Integer *m, const Integer *n, const Integer *k, const Integer *l, const Integer *nb, const Complex v[], const Integer *ldv, const Complex t[], const Integer *ldt, Complex c1[], const Integer *ldc1, Complex c2[], const Integer *ldc2, Complex work[], Integer *info, const Charlen length_side, const Charlen length_trans)
The routine may be called by its LAPACK name ztpmqrt.
3
Description
f08bqf (ztpmqrt) is intended to be used after a call to f08bpf (ztpqrt) which performs a QR factorization of a triangular-pentagonal matrix containing an upper triangular matrix A over a pentagonal matrix B. The unitary matrix Q is represented as a product of elementary reflectors.
This routine may be used to form the matrix products
QC , QHC , CQ or CQH ,
where the complex rectangular mc by nc matrix C is split into component matrices C1 and C2.
If Q is being applied from the left (QC or QHC) then
C = C1 C2
where C1 is k by nc, C2 is mv by nc, mc=k+mv is fixed and mv is the number of rows of the matrix V containing the elementary reflectors (i.e., m as passed to f08bpf (ztpqrt)); the number of columns of V is nv (i.e., n as passed to f08bpf (ztpqrt)).
If Q is being applied from the right (CQ or CQH) then
C = C1 C2
where C1 is mc by k, and C2 is mc by mv and nc=k+mv is fixed.
The matrices C1 and C2 are overwriten by the result of the matrix product.
A common application of this routine is in updating the solution of a linear least squares problem as illustrated in Section 10 in f08bpf (ztpqrt).
4
References
Golub G H and Van Loan C F (2012) Matrix Computations (4th Edition) Johns Hopkins University Press, Baltimore
5
Arguments
1: side – Character(1)Input
On entry: indicates how Q or QH is to be applied to C.
side='L'
Q or QH is applied to C from the left.
side='R'
Q or QH is applied to C from the right.
Constraint: side='L' or 'R'.
2: trans – Character(1)Input
On entry: indicates whether Q or QH is to be applied to C.
trans='N'
Q is applied to C.
trans='C'
QH is applied to C.
Constraint: trans='N' or 'C'.
3: m – IntegerInput
On entry: the number of rows of the matrix C2, that is,
if side='L'
then mv, the number of rows of the matrix V;
if side='R'
then mc, the number of rows of the matrix C.
Constraint: m0.
4: n – IntegerInput
On entry: the number of columns of the matrix C2, that is,
if side='L'
then nc, the number of columns of the matrix C;
if side='R'
then nv, the number of columns of the matrix V.
Constraint: n0.
5: k – IntegerInput
On entry: k, the number of elementary reflectors whose product defines the matrix Q.
Constraint: k0.
6: l – IntegerInput
On entry: l, the number of rows of the upper trapezoidal part of the pentagonal composite matrix V, passed (as b) in a previous call to f08bpf (ztpqrt). This must be the same value used in the previous call to f08bpf (ztpqrt) (see l in f08bpf (ztpqrt)).
Constraint: 0lk.
7: nb – IntegerInput
On entry: nb, the blocking factor used in a previous call to f08bpf (ztpqrt) to compute the QR factorization of a triangular-pentagonal matrix containing composite matrices A and B.
Constraints:
• nb1;
• if k>0, nbk.
8: vldv* – Complex (Kind=nag_wp) arrayInput
Note: the second dimension of the array v must be at least max1,k.
On entry: the mv by nv matrix V; this should remain unchanged from the array b returned by a previous call to f08bpf (ztpqrt).
9: ldv – IntegerInput
On entry: the first dimension of the array v as declared in the (sub)program from which f08bqf (ztpmqrt) is called.
Constraints:
• if side='L', ldv max1,m ;
• if side='R', ldv max1,n .
10: tldt* – Complex (Kind=nag_wp) arrayInput
Note: the second dimension of the array t must be at least max1,k.
On entry: this must remain unchanged from a previous call to f08bpf (ztpqrt) (see t in f08bpf (ztpqrt)).
11: ldt – IntegerInput
On entry: the first dimension of the array t as declared in the (sub)program from which f08bqf (ztpmqrt) is called.
Constraint: ldtnb.
12: c1ldc1* – Complex (Kind=nag_wp) arrayInput/Output
Note: the second dimension of the array c1 must be at least max1,n if side='L' and at least max1,k if side='R'.
On entry: C1, the first part of the composite matrix C:
if side='L'
then c1 contains the first k rows of C;
if side='R'
then c1 contains the first k columns of C.
On exit: c1 is overwritten by the corresponding block of QC or QHC or CQ or CQH.
13: ldc1 – IntegerInput
On entry: the first dimension of the array c1 as declared in the (sub)program from which f08bqf (ztpmqrt) is called.
Constraints:
• if side='L', ldc1 max1,k ;
• if side='R', ldc1 max1,m .
14: c2ldc2* – Complex (Kind=nag_wp) arrayInput/Output
Note: the second dimension of the array c2 must be at least max1,n.
On entry: C2, the second part of the composite matrix C.
if side='L'
then c2 contains the remaining mv rows of C;
if side='R'
then c2 contains the remaining mv columns of C;
On exit: c2 is overwritten by the corresponding block of QC or QHC or CQ or CQH.
15: ldc2 – IntegerInput
On entry: the first dimension of the array c2 as declared in the (sub)program from which f08bqf (ztpmqrt) is called.
Constraint: ldc2 max1,m .
16: work* – Complex (Kind=nag_wp) arrayWorkspace
Note: the dimension of the array work must be at least n×nb if side='L' and at least m×nb if side='R'.
17: info – IntegerOutput
On exit: info=0 unless the routine detects an error (see Section 6).
6
Error Indicators and Warnings
info<0
If info=-i, argument i had an illegal value. An explanatory message is output, and execution of the program is terminated.
7
Accuracy
The computed result differs from the exact result by a matrix E such that
E2 = Oε C2 ,
where ε is the machine precision.
8
Parallelism and Performance
f08bqf (ztpmqrt) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9
Further Comments
The total number of floating-point operations is approximately 2nk 2m-k if side='L' and 2mk 2n-k if side='R'.
The real analogue of this routine is f08bcf (dtpmqrt).
10
Example
See Section 10 in f08bpf (ztpqrt).
|
__label__pos
| 0.613951 |
WordPress.org
Forums
[resolved] wp_rewrite help (3 posts)
1. camdagr8
Member
Posted 4 years ago #
I'm not too familiar with wp_rewrite and need some help creating a simple rewrite. I've looked at the codex and it's very limited on rewrite help for doing something simple.
I want to take the url:
http://mysite.com/tutorial/chp/1
and rewrite it to:
http://mysite.com/tutorial/?chp=1
The catch is that I want the url all the way up to /chp in tact so that no matter what comes before /chp/ it's used in the new url and the vars are added in.
I have my permalinks set to the following:
/%category%/%postname%/
I'm trying the current rewrite:
// URL rewrite
// flush rules
add_filter('init','wpngfb_flush');
function wpngfb_flush() {
global $wp_rewrite;
$wp_rewrite->flush_rules();
}
// Add a new rule
add_filter('rewrite_rules_array','wpngfb_rules');
function wpngfb_rules($rules) {
$newrules = array();
$newrules['^(.*)/chp/([0-9]+)$'] = 'index.php?pagename=$matches[1]&chp=$matches[2]';
return $newrules + $rules;
}
// Add the chp var so that WP recognizes it
add_filter('query_vars','wpngfb_var_insert');
function wpngfb_var_insert($vars) {
array_push($vars, 'chp');
return $vars;
}
This works some what, but it's dropping the /chp/1 from the url completely and ignoring the var include
2. camdagr8
Member
Posted 4 years ago #
bump
3. camdagr8
Member
Posted 4 years ago #
Figured it out. There's nothing wrong with my code, you just can't apply the rewrite rules on category/posts. It must be applied to a page.
Topic Closed
This topic has been closed to new replies.
About this Topic
|
__label__pos
| 0.661284 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required.
I am using the outlook calendar functionality in my C# code.
I am able to do it. But there is one problem that is how to use html tags in the description body of the outlook calendar contents.
I want to bold (using <b>) some text in description mean to format the data. How can I do it What I am using now:
context.Response.ContentType = "text/calendar";
context.Response.AddHeader("content-disposition", "inline;filename=Calendar.ics");
context.Response.BinaryWrite(new System.Text.ASCIIEncoding().GetBytes(data));
share|improve this question
add comment
1 Answer
On the same portal while searching I got the answer. Find the answer on below given link.
HTML in iCal attachment
share|improve this answer
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.80247 |
Set Root Junction rule reference
The Set Root Junction rule is used to specify junctions based on a network junction class or object table as diagram root junctions by filtering those junctions based on their attributes, if any.
Since the root junctions are specific junctions from which tree layouts operate when they run on network diagrams, this rule is typically configured on templates that are set up to automatically run tree layouts at diagram generation.
Set Root Junction rule process
The Set Root Junction rule must be set up on a template before configuring any tree layouts—Smart Tree, Mainline Tree, or Radial Tree—so that the expected roots are positioned by the rule first, and the automatic tree layout operates from those root junctions.
In most cases, this rule is the last rule configured on the template rule sequence, so it runs after all rules that modify the diagram graph. This ensures that the Set Root Junction rule processes all junctions that exist in the diagram.
Set Root Junction rule configuration
You can add a Set Root Junction rule on a template with the Add Set Root Junction By Attribute Rule tool.
In some situations, you may consider configuring this tool with an SQL expression to set a particular diagram junction as the root junction. For example, to query the minimum attribute value from the diagram junctions in the generated diagram, you could run the tool with the following SQL expression: <attributeName> = (SELECT MIN(<attributeName>) FROM <networkClassName>) WHERE 'OBJECT' = 'IN_DIAGRAM'.
Root junctions set manually and via rule
A diagram can mix roots set by rules and roots set manually using the Set Root Junction tool; in other words, you can manually set other root junctions or remove existing root junctions in a diagram that comes with predefined root junctions at its generation/update.
However, most of the time, the root junctions that were manually set will be lost when updating such a diagram. Only the root junctions set by the Set Root Junction By Attribute rule should be kept at the end of diagram updates. The only exception is when no junction is set during the rule process; in that case, root junctions that were manually set in the diagram before its update are kept as root junctions in the updated diagram.
Related topics
|
__label__pos
| 0.602014 |
1 /*
2 * Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
3 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
4 *
5 * This code is free software; you can redistribute it and/or modify it
6 * under the terms of the GNU General Public License version 2 only, as
7 * published by the Free Software Foundation.
8 *
9 * This code is distributed in the hope that it will be useful, but WITHOUT
10 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
11 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
12 * version 2 for more details (a copy is included in the LICENSE file that
13 * accompanied this code).
14 *
15 * You should have received a copy of the GNU General Public License version
16 * 2 along with this work; if not, write to the Free Software Foundation,
17 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
18 *
19 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
20 * or visit www.oracle.com if you need additional information or have any
21 * questions.
22 *
23 */
24
25 #include "precompiled.hpp"
26 #include "c1/c1_FrameMap.hpp"
27 #include "c1/c1_LIR.hpp"
28 #include "runtime/sharedRuntime.hpp"
29 #ifdef TARGET_ARCH_x86
30 # include "vmreg_x86.inline.hpp"
31 #endif
32 #ifdef TARGET_ARCH_sparc
33 # include "vmreg_sparc.inline.hpp"
34 #endif
35 #ifdef TARGET_ARCH_zero
36 # include "vmreg_zero.inline.hpp"
37 #endif
38 #ifdef TARGET_ARCH_arm
39 # include "vmreg_arm.inline.hpp"
40 #endif
41 #ifdef TARGET_ARCH_ppc
42 # include "vmreg_ppc.inline.hpp"
43 #endif
44 #ifdef TARGET_ARCH_aarch32
45 # include "vmreg_aarch32.inline.hpp"
46 #endif
47
48
49
50 //-----------------------------------------------------
51
52 // Convert method signature into an array of BasicTypes for the arguments
53 BasicTypeArray* FrameMap::signature_type_array_for(const ciMethod* method) {
54 ciSignature* sig = method->signature();
55 BasicTypeList* sta = new BasicTypeList(method->arg_size());
56 // add receiver, if any
57 if (!method->is_static()) sta->append(T_OBJECT);
58 // add remaining arguments
59 for (int i = 0; i < sig->count(); i++) {
60 ciType* type = sig->type_at(i);
61 BasicType t = type->basic_type();
62 if (t == T_ARRAY) {
63 t = T_OBJECT;
64 }
65 sta->append(t);
66 }
67 // done
68 return sta;
69 }
70
71
72 CallingConvention* FrameMap::java_calling_convention(const BasicTypeArray* signature, bool outgoing) {
73 // compute the size of the arguments first. The signature array
74 // that java_calling_convention takes includes a T_VOID after double
75 // work items but our signatures do not.
76 int i;
77 int sizeargs = 0;
78 for (i = 0; i < signature->length(); i++) {
79 sizeargs += type2size[signature->at(i)];
80 }
81
82 BasicType* sig_bt = NEW_RESOURCE_ARRAY(BasicType, sizeargs);
83 VMRegPair* regs = NEW_RESOURCE_ARRAY(VMRegPair, sizeargs);
84 int sig_index = 0;
85 for (i = 0; i < sizeargs; i++, sig_index++) {
86 sig_bt[i] = signature->at(sig_index);
87 if (sig_bt[i] == T_LONG || sig_bt[i] == T_DOUBLE) {
88 sig_bt[i + 1] = T_VOID;
89 i++;
90 }
91 }
92
93 intptr_t out_preserve = SharedRuntime::java_calling_convention(sig_bt, regs, sizeargs, outgoing);
94 LIR_OprList* args = new LIR_OprList(signature->length());
95 for (i = 0; i < sizeargs;) {
96 BasicType t = sig_bt[i];
97 assert(t != T_VOID, "should be skipping these");
98 LIR_Opr opr = map_to_opr(t, regs + i, outgoing);
99 args->append(opr);
100 if (opr->is_address()) {
101 LIR_Address* addr = opr->as_address_ptr();
102 assert(addr->disp() == (int)addr->disp(), "out of range value");
103 out_preserve = MAX2(out_preserve, (intptr_t)(addr->disp() - STACK_BIAS) / 4);
104 }
105 i += type2size[t];
106 }
107 assert(args->length() == signature->length(), "size mismatch");
108 out_preserve += SharedRuntime::out_preserve_stack_slots();
109
110 if (outgoing) {
111 // update the space reserved for arguments.
112 update_reserved_argument_area_size(out_preserve * BytesPerWord);
113 }
114 return new CallingConvention(args, out_preserve);
115 }
116
117
118 CallingConvention* FrameMap::c_calling_convention(const BasicTypeArray* signature) {
119 // compute the size of the arguments first. The signature array
120 // that java_calling_convention takes includes a T_VOID after double
121 // work items but our signatures do not.
122 int i;
123 int sizeargs = 0;
124 for (i = 0; i < signature->length(); i++) {
125 sizeargs += type2size[signature->at(i)];
126 }
127
128 BasicType* sig_bt = NEW_RESOURCE_ARRAY(BasicType, sizeargs);
129 VMRegPair* regs = NEW_RESOURCE_ARRAY(VMRegPair, sizeargs);
130 int sig_index = 0;
131 for (i = 0; i < sizeargs; i++, sig_index++) {
132 sig_bt[i] = signature->at(sig_index);
133 if (sig_bt[i] == T_LONG || sig_bt[i] == T_DOUBLE) {
134 sig_bt[i + 1] = T_VOID;
135 i++;
136 }
137 }
138
139 intptr_t out_preserve = SharedRuntime::c_calling_convention(sig_bt, regs, NULL, sizeargs);
140 LIR_OprList* args = new LIR_OprList(signature->length());
141 for (i = 0; i < sizeargs;) {
142 BasicType t = sig_bt[i];
143 assert(t != T_VOID, "should be skipping these");
144
145 // C calls are always outgoing
146 bool outgoing = true;
147 LIR_Opr opr = map_to_opr(t, regs + i, outgoing);
148 // they might be of different types if for instance floating point
149 // values are passed in cpu registers, but the sizes must match.
150 assert(type2size[opr->type()] == type2size[t], "type mismatch");
151 args->append(opr);
152 if (opr->is_address()) {
153 LIR_Address* addr = opr->as_address_ptr();
154 out_preserve = MAX2(out_preserve, (intptr_t)(addr->disp() - STACK_BIAS) / 4);
155 }
156 i += type2size[t];
157 }
158 assert(args->length() == signature->length(), "size mismatch");
159 out_preserve += SharedRuntime::out_preserve_stack_slots();
160 update_reserved_argument_area_size(out_preserve * BytesPerWord);
161 return new CallingConvention(args, out_preserve);
162 }
163
164
165 //--------------------------------------------------------
166 // FrameMap
167 //--------------------------------------------------------
168
169 bool FrameMap::_init_done = false;
170 Register FrameMap::_cpu_rnr2reg [FrameMap::nof_cpu_regs];
171 int FrameMap::_cpu_reg2rnr [FrameMap::nof_cpu_regs];
172
173
174 FrameMap::FrameMap(ciMethod* method, int monitors, int reserved_argument_area_size) {
175 assert(_init_done, "should already be completed");
176
177 _framesize = -1;
178 _num_spills = -1;
179
180 assert(monitors >= 0, "not set");
181 _num_monitors = monitors;
182 assert(reserved_argument_area_size >= 0, "not set");
183 _reserved_argument_area_size = MAX2(4, reserved_argument_area_size) * BytesPerWord;
184
185 _argcount = method->arg_size();
186 _argument_locations = new intArray(_argcount, -1);
187 _incoming_arguments = java_calling_convention(signature_type_array_for(method), false);
188 _oop_map_arg_count = _incoming_arguments->reserved_stack_slots();
189
190 int java_index = 0;
191 for (int i = 0; i < _incoming_arguments->length(); i++) {
192 LIR_Opr opr = _incoming_arguments->at(i);
193 if (opr->is_address()) {
194 LIR_Address* address = opr->as_address_ptr();
195 _argument_locations->at_put(java_index, address->disp() - STACK_BIAS);
196 _incoming_arguments->args()->at_put(i, LIR_OprFact::stack(java_index, as_BasicType(as_ValueType(address->type()))));
197 }
198 java_index += type2size[opr->type()];
199 }
200
201 }
202
203
204 bool FrameMap::finalize_frame(int nof_slots) {
205 assert(nof_slots >= 0, "must be positive");
206 assert(_num_spills == -1, "can only be set once");
207 _num_spills = nof_slots;
208 assert(_framesize == -1, "should only be calculated once");
209 _framesize = round_to(in_bytes(sp_offset_for_monitor_base(0)) +
210 _num_monitors * sizeof(BasicObjectLock) +
211 sizeof(intptr_t) + // offset of deopt orig pc
212 frame_pad_in_bytes,
213 StackAlignmentInBytes) / 4;
214 int java_index = 0;
215 for (int i = 0; i < _incoming_arguments->length(); i++) {
216 LIR_Opr opr = _incoming_arguments->at(i);
217 if (opr->is_stack()) {
218 _argument_locations->at_put(java_index, in_bytes(framesize_in_bytes()) +
219 _argument_locations->at(java_index));
220 }
221 java_index += type2size[opr->type()];
222 }
223 // make sure it's expressible on the platform
224 return validate_frame();
225 }
226
227 VMReg FrameMap::sp_offset2vmreg(ByteSize offset) const {
228 int offset_in_bytes = in_bytes(offset);
229 assert(offset_in_bytes % 4 == 0, "must be multiple of 4 bytes");
230 assert(offset_in_bytes / 4 < framesize() + oop_map_arg_count(), "out of range");
231 return VMRegImpl::stack2reg(offset_in_bytes / 4);
232 }
233
234
235 bool FrameMap::location_for_sp_offset(ByteSize byte_offset_from_sp,
236 Location::Type loc_type,
237 Location* loc) const {
238 int offset = in_bytes(byte_offset_from_sp);
239 assert(offset >= 0, "incorrect offset");
240 if (!Location::legal_offset_in_bytes(offset)) {
241 return false;
242 }
243 Location tmp_loc = Location::new_stk_loc(loc_type, offset);
244 *loc = tmp_loc;
245 return true;
246 }
247
248
249 bool FrameMap::locations_for_slot (int index, Location::Type loc_type,
250 Location* loc, Location* second) const {
251 ByteSize offset_from_sp = sp_offset_for_slot(index);
252 if (!location_for_sp_offset(offset_from_sp, loc_type, loc)) {
253 return false;
254 }
255 if (second != NULL) {
256 // two word item
257 offset_from_sp = offset_from_sp + in_ByteSize(4);
258 return location_for_sp_offset(offset_from_sp, loc_type, second);
259 }
260 return true;
261 }
262
263 //////////////////////
264 // Public accessors //
265 //////////////////////
266
267
268 ByteSize FrameMap::sp_offset_for_slot(const int index) const {
269 if (index < argcount()) {
270 int offset = _argument_locations->at(index);
271 assert(offset != -1, "not a memory argument");
272 assert(offset >= framesize() * 4, "argument inside of frame");
273 return in_ByteSize(offset);
274 }
275 ByteSize offset = sp_offset_for_spill(index - argcount());
276 assert(in_bytes(offset) < framesize() * 4, "spill outside of frame");
277 return offset;
278 }
279
280
281 ByteSize FrameMap::sp_offset_for_double_slot(const int index) const {
282 ByteSize offset = sp_offset_for_slot(index);
283 if (index >= argcount()) {
284 assert(in_bytes(offset) + 4 < framesize() * 4, "spill outside of frame");
285 }
286 return offset;
287 }
288
289
290 ByteSize FrameMap::sp_offset_for_spill(const int index) const {
291 assert(index >= 0 && index < _num_spills, "out of range");
292 int offset = round_to(first_available_sp_in_frame + _reserved_argument_area_size, sizeof(double)) +
293 index * spill_slot_size_in_bytes;
294 return in_ByteSize(offset);
295 }
296
297 ByteSize FrameMap::sp_offset_for_monitor_base(const int index) const {
298 int end_of_spills = round_to(first_available_sp_in_frame + _reserved_argument_area_size, sizeof(double)) +
299 _num_spills * spill_slot_size_in_bytes;
300 int offset = (int) round_to(end_of_spills, HeapWordSize) + index * sizeof(BasicObjectLock);
301 return in_ByteSize(offset);
302 }
303
304 ByteSize FrameMap::sp_offset_for_monitor_lock(int index) const {
305 check_monitor_index(index);
306 return sp_offset_for_monitor_base(index) + in_ByteSize(BasicObjectLock::lock_offset_in_bytes());;
307 }
308
309 ByteSize FrameMap::sp_offset_for_monitor_object(int index) const {
310 check_monitor_index(index);
311 return sp_offset_for_monitor_base(index) + in_ByteSize(BasicObjectLock::obj_offset_in_bytes());
312 }
313
314
315 // For OopMaps, map a local variable or spill index to an VMReg.
316 // This is the offset from sp() in the frame of the slot for the index,
317 // skewed by SharedInfo::stack0 to indicate a stack location (vs.a register.)
318 //
319 // C ABI size +
320 // framesize + framesize +
321 // stack0 stack0 stack0 0 <- VMReg->value()
322 // | | | <registers> |
323 // ..........|..............|..............|.............|
324 // 0 1 2 3 | <C ABI area> | 4 5 6 ...... | <- local indices
325 // ^ ^ sp()
326 // | |
327 // arguments non-argument locals
328
329
330 VMReg FrameMap::regname(LIR_Opr opr) const {
331 if (opr->is_single_cpu()) {
332 assert(!opr->is_virtual(), "should not see virtual registers here");
333 return opr->as_register()->as_VMReg();
334 } else if (opr->is_single_stack()) {
335 return sp_offset2vmreg(sp_offset_for_slot(opr->single_stack_ix()));
336 } else if (opr->is_address()) {
337 LIR_Address* addr = opr->as_address_ptr();
338 assert(addr->base() == stack_pointer(), "sp based addressing only");
339 return sp_offset2vmreg(in_ByteSize(addr->index()->as_jint()));
340 }
341 ShouldNotReachHere();
342 return VMRegImpl::Bad();
343 }
344
345
346
347
348 // ------------ extra spill slots ---------------
|
__label__pos
| 0.994013 |
Mathc complexes/Fichiers h : z s
Un livre de Wikilivres.
Sauter à la navigation Sauter à la recherche
Installer ce fichier dans votre répertoire de travail.
Crystal Clear mimetype source h.png z_r.h
'
/* ------------------------------------ */
/* Save as : z_r.h */
/* ------------------------------------ */
/* ------------------------------------
Call : time_t t;
int i;
srand(time(&t));
i = r_I(9);
------------------------------------ */
/* ------------------------------------
positive and negative numbers
------------------------------------ */
int r_I(
int maxI)
{
int x;
x = (rand() % maxI) + 1; /* + 1 : not zero */
x *= pow(-1,x);
return(x);
}
/* ------------------------------------
positive numbers
------------------------------------ */
int rp_I(
int maxI)
{
return((rand() % maxI) + 1); /* + 1 : not zero */
}
/* ------------------------------------
positive and negative floating point number
e= 1E-0 -> 1234
e = 1E-1 -> 123.4
e = 1E-2 -> 12.34
e = 1E-3 -> 1.234
------------------------------------ */
double r_E(
int maxI,
double e)
{
double x;
x = (rand() % maxI) + 1 ; /* + 1 : not zero */
x *= pow(-1,x) * e;
return(x);
}
Nous allons utiliser ses fonctions pour donner des valeurs aléatoires aux matrices.
|
__label__pos
| 0.929333 |
D3DDDICB_ALLOCATE
D3DDDICB_ALLOCATE structure
The D3DDDICB_ALLOCATE structure contains information for allocating memory.
Syntax
typedef struct _D3DDDICB_ALLOCATE {
const VOID *pPrivateDriverData;
UINT PrivateDriverDataSize;
HANDLE hResource;
D3DKMT_HANDLE hKMResource;
UINT NumAllocations;
#if (D3D_UMD_INTERFACE_VERSION >= D3D_UMD_INTERFACE_VERSION_WIN7)
union {
D3DDDI_ALLOCATIONINFO *pAllocationInfo;
D3DDDI_ALLOCATIONINFO2 *pAllocationInfo2;
};
#else
D3DDDI_ALLOCATIONINFO *pAllocationInfo;
} D3DDDICB_ALLOCATE;
Members
pPrivateDriverData
[in] A pointer to private data, which is passed to the display miniport driver. This data is per resource and not per allocation. If allocations are attached to an existing resource, the current data should overwrite the former data.
PrivateDriverDataSize
[in] The size, in bytes, of the private data that is pointed to by pPrivateDriverData.
hResource
[in] A handle to the resource that is associated with the allocations.
When the user-mode display driver calls the pfnAllocateCb function, the driver should assign the value that was received from the hResource member of the D3DDDIARG_CREATERESOURCE structure in a call to CreateResource, or the hRTResource parameter in a call to CreateResource(D3D10) or CreateResource(D3D11). It should assign the value to associate the allocations with the resource, or assign NULL to associate the allocations with the device. The driver must assign a non-NULL value for allocations that are created in response to shared resources. Shared resources might result from CreateResource calls with the SharedResource bit-field flag set in the Flags member of D3DDDIARG_CREATERESOURCE. They might also result from CreateResource(D3D10) or CreateResource(D3D11) calls, with the D3D10_DDI_RESOURCE_MISC_SHARED value set in the MiscFlags member of either D3D10DDIARG_CREATERESOURCE or D3D11DDIARG_CREATERESOURCE.
The Microsoft Direct3D runtime should use this handle in driver calls to identify the resource.
hKMResource
[out] A D3DKMT_HANDLE data type that represents a kernel-mode handle to the resource that is associated with the allocations.
The Direct3D runtime creates and returns a kernel-mode resource handle only if the user-mode display driver sets the hResource member of D3DDDICB_ALLOCATE to the user-mode runtime resource handle that was received from the hResource member of the D3DDDIARG_CREATERESOURCE structure. This handle is received in a call to CreateResource, or from the hResource parameter in a call to either CreateResource(D3D10) or CreateResource(D3D11).
The Direct3D runtime generates a unique handle and passes it back to the user-mode display driver. The user-mode display driver can insert the kernel-mode resource handle in the command stream for subsequent use by the display miniport driver.
NumAllocations
[in] The number of elements in the array at pAllocationInfo, which represents the number of allocations to allocate.
pAllocationInfo
[in] An array of D3DDDI_ALLOCATIONINFO structures that describe the allocations to allocate.
pAllocationInfo2
[in] This member is reserved and should be set to zero.
This member is available beginning with Windows 7.
pAllocationInfo
[in] An array of D3DDDI_ALLOCATIONINFO structures that describe the allocations to allocate.
Requirements
Version
Available in Windows Vista and later versions of the Windows operating systems.
Header
D3dumddi.h (include D3dumddi.h)
See also
CreateResource
CreateResource(D3D10)
CreateResource(D3D11)
D3DDDI_ALLOCATIONINFO
D3D10DDIARG_CREATERESOURCE
D3D11DDIARG_CREATERESOURCE
D3DDDIARG_CREATERESOURCE
pfnAllocateCb
Send comments about this topic to Microsoft
표시:
© 2016 Microsoft
|
__label__pos
| 0.916686 |
RSA加密算法中的公钥和密钥
技术博客 (55) 2023-10-29 09:01:01
原文链接 mp.weixin.qq.com/s/r7H82DC8L…
title: RSA加密算法中的公钥密钥 date: 2023-08-26 23:38:31 tags: '密码技术,读书笔记'
密钥对
在昨天的文章RSA加密算法中提到了两个关键词——公钥、私钥。并且我们知道了,RSA 的加密就是求“明文的E次方 mod N”,解密是求“密文的D次方 mod N”。那么公式中的E、D、N这三个数就是生成的密钥对
这三个数字肯定不是随随便便就拿来用的,那样的话加密也太容易被激活成功教程了。它们有自己的生成方式。
RSA密钥对的生成
我们先来简单陈述一下RSA密钥对的生成步骤,这其中涉及到了一些数学原理,暂时现不用知道什么意思。因为我也一知半解
1. 选择两个大的随机的质数 p 和 q
2. p 和 q进行乘法运算得到的值就是 N,这个N 就是公钥和私钥mod的那个N
3. 计算 φ(n), φ(n)= (p-1)*(q-1),其中φ是欧拉函数。暂时先不用知道什么是欧拉函数
4. 现在来计算E,E 的计算规则是大于1但是小于 φ(n) 并且与 φ(n)互为质数。这个E就是公钥中的次幂数,暂时先不用知道什么是质数
5. 现在来计算D,我们现在已经计算出来了公钥,E 和 N,那么D的计算规则需要满足以下条件:(E * D) % φ(N) = 1。为什么非要满足这个公式呢?因为只要数字D满足上述条件,通过E 和 N 进行加密的数据就可以通过D 和 N 进行解密
经过上述五个步骤,我们已经成功按照规则生成了RSA 的公钥和私钥,现在来详细解释一下让我们迷糊的名词和公式符号
1. 选择两个互质的数字,p 和 q。什么是质数呢?
1. 质数:质数(Prime number)是指大于1且只能被1和自身整除的正整数。换句话说,质数没有除了1和它本身之外的其他正因数。例如:2,3,5,7
2. 那为什么要选择质数呢?根据上面的解释我们可以看出来质数因为只能被1和本身整出,所以质数只有两个因数,那就是1 和 本身,换句话说就是质数的分解是唯一的,换句话说就是每个正整数都可以唯一地分解为质数的乘积,这被称为唯一质因数分解定理。
3. 什么是互为质数:是指这两个整数的最大公因数(最大公约数)为1,即它们没有除了1以外的公共正因数。
2. 第三步中提到了一个函数符号——φ(n)。这个φ符号表示:“欧拉函数”。那什么是“欧拉函数”?
1. 欧拉函数 φ是以欧拉命名的一个数论函数。它表示对于一个正整数m,在小于等于m的正整数中与m互质的正整数的个数。(维基百科)
2. (n)代表了一个正整数n。φ(n) 合起来表达的意思是:对于给定的正整数n,计算小于等于n并且与n本身互为质数的正整数个数。上面这两句话读起来很难理解,每个字都认识,但是连起来的意思总觉得模模糊糊。下面举个例子:
1. 当 n =10 ,φ(n) = 4。为什么呢?因为:小于等于10的正整数有:1,2,3,4,5,6,7,8,9,10
其中与10互质的有:1,3,7,9。所以 φ(n) = 4。
现在上述步骤中难以理解的公式和名词都解释了, 我们尝试来生成一个RSA 的密钥对。
实践一下,生成一个RSA的密钥对
按照步骤我们应该先选择两个大的随机的质数, 这里我们选择p = 3, q = 13
第二步中N = p * q ,则N = 3 *13 = 39
第三步骤计算:φ(n)= (p-1)*(q-1) ,把p = 3, q = 13代入到公式中计算得出φ(n)= 24.这表示当N等于39的时候,与39互质的整数的个数是24。现在我们知道φ(n)=24,那么就可以计算第四步骤中的E
E的计算规则是大于1但是小于 φ(n) 并且与 φ(n)互为质数。公式表示就是 1 < E < φ(n) , 我们知道了φ(n) = 24,所以E 这个数字的取值范围就是 1、5、7、11、13、17 ....我们可以看出有很多数字,但是并不是都满足条件,我们选择11这个数字作为E 的值。
到此,我们已经计算出了E = 11 ,N = 24。这两个数字将作为我们的公钥。
因为N这个数字公钥私钥都会用到,所以接下来我们只需要计算D就可以了。
第五步中D的计算公式为:(E * D) % φ(N) = 1,E = 11,D 未知,φ(N) = 24。经过计算得出:D = 23
计算过程如下:
我们想要找到一个值 D,使得以下方程成立: (E * D) % φ(N) = 1;好像是废话
1. 使用扩展欧几里德算法计算 11 和24的最大公约数(GCD):
要求求解D的值。
具体计算步骤如下:
计算11与24的最大公约数 gcd(11,24) = 1
因为gcd(11,24)=1,所以方程有解
使用扩展欧几里得算法求11对24的模反元素D:
设:r0 = 24
r1 = 11
r2 = r0 % r1 = 24 % 11 = 2
递归计算:
r3 = r1 % r2 = 11 % 2 = 1
r4 = r2 % r3 = 2 % 1 = 0
得到:r3 = gcd(11,24) = 1
所以11对24的模反元素是r2的值,即:
D = 2D=2代入原方程验证: (11 * 2) mod 24 = 22
结果不是1,所以D≠2。
继续递归计算:
r5 = r3 % r4 = 1 % 0 = 1
r6 = r4 % r5 = 0 % 1 = 0
得到:r5 = gcd(11,24) = 1
所以11对24的模反元素是r4的值,即:
D = 23
重新验证: (11 * 23) mod 24 = 1
综上,在给定的方程 11 * D mod 24 = 1 中,D=23
我自己按照维基百科的方法计算了三遍,都不对,最后求助了GPT。😆
现在我们终于把公钥私钥都计算完了。
公钥:E = 11 , N = 39
私钥:D = 23 , N = 39
加密一波试试
我们现在是用自己计算出来的公钥加密一段数字试试,RSA 的明文规则是小于N 的数,所以。我们取明文为8。代入加密公式:密文 = 明文E mod N 8 11 mod 39 = 5, 得到密文为5
将 20 代入到解密公式:明文 = 密文D mod N 5 23 mod 39 = 8, 得到明文为8
计算过程
/** * 上述代码首先初始化了基数 datai,指数 ED 和模数 N,然后定义了一个变量 result 存储计算结果。 * 接着,代码使用一个 for 循环,循环次数为指数的值。在每次循环中,都会将 result 乘以基数, * 然后对模数取模,以此来实现 (a*b)%c = (a%c * b%c)%c 这个公式。最后,代码打印出结果。 * @param datai * @param ED * @param N * @return */
private static long mod(int data, int ED, int N) {
long result = 1;
for (int j = 0; j < i1; j++) {
result = result * i % i2;
}
return result;
}
THE END
发表回复
|
__label__pos
| 0.993508 |
system.stackoverflowexception when passing an array in the constructor
maurizio verdirame 41 Reputation points
2021-09-09T13:31:53.86+00:00
I'm trying to pass to the constructor a string array containing two parameter. the code is:
string[] Parametri = new string[2];
Parametri[0] = this.txtName.Tag.ToString();
Parametri[1] = null;
dativisitafrm.Parametri = Parametri;
dativisitafrm.ShowDialog();
the constructor code is this:
***public string[] Parametri
{
get { return Parametri; }
}***
I get this error: System.StackOverflowException: 'Exception of type 'System.StackOverflowException' is thrown.' can someone help me please
.NET Standard
.NET Standard
A formal specification of .NET APIs that are available on multiple .NET implementations.
505 questions
{count} votes
1 answer
Sort by: Most helpful
1. AdamJachocki 6 Reputation points
2021-09-15T09:41:17.737+00:00
Your problem is here:
public string[] Parametri
{
get {return Parametri;}
}
Notice that you are returning the property itself. Then the getter of the property is running. Getter returns the property itself. Then the getter of the property is running. Getter returns the property itself. Then the..... And stack overflow.
Just don't return the property itself in the getter. I think that you wanted to do something like this:
public string[] Parametri {get; private set;}
or like this:
string[] parametri;
public string[] Parametri
{
get { return parametri; }
}
Notice that I don't return here the property itself, just the value connected with the property.
0 comments No comments
|
__label__pos
| 0.655322 |
Source
SCons / test / TEX / biber_biblatex.py
The default branch has multiple heads
#!/usr/bin/env python
#
# __COPYRIGHT__
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
"""
Test creation of a Tex document that uses the multibib oackage
Test courtesy Rob Managan.
"""
import TestSCons
import os
test = TestSCons.TestSCons()
latex = test.where_is('pdflatex')
if not latex:
test.skip_test("Could not find 'pdflatex'; skipping test.\n")
biber = test.where_is('biber')
if not biber:
test.skip_test("Could not find 'biber'; skipping test.\n")
gloss = os.system('kpsewhich biblatex.sty')
if not gloss==0:
test.skip_test("biblatex.sty not installed; skipping test(s).\n")
test.write(['SConstruct'], """\
#!/usr/bin/env python
import os
env = Environment(ENV=os.environ)
env['BIBTEX'] = 'biber'
main_output = env.PDF('bibertest.tex')
""")
sources_bib_content = r"""
@book{mybook,
title={Title},
author={Author, A},
year={%s},
publisher={Publisher},
}
"""
test.write(['ref.bib'],sources_bib_content % '2013' )
test.write(['bibertest.tex'],r"""
\documentclass{article}
\usepackage[backend=biber]{biblatex}
\addbibresource{ref.bib}
\begin{document}
Hello. This is boring.
\cite{mybook}
And even more boring.
\printbibliography
\end{document}
""")
test.run()
# All (?) the files we expect will get created in the docs directory
files = [
'bibertest.aux',
'bibertest.bbl',
'bibertest.bcf',
'bibertest.blg',
'bibertest.fls',
'bibertest.log',
'bibertest.pdf',
'bibertest.run.xml',
]
for f in files:
test.must_exist([ f])
pdf_output_1 = test.read('bibertest.pdf')
test.write(['ref.bib'],sources_bib_content % '1982')
test.run()
pdf_output_2 = test.read('bibertest.pdf')
pdf_output_1 = test.normalize_pdf(pdf_output_1)
pdf_output_2 = test.normalize_pdf(pdf_output_2)
# If the PDF file is the same as it was previously, then it didn't
# pick up the change from 1981 to 1982, so fail.
test.fail_test(pdf_output_1 == pdf_output_2)
test.pass_test()
|
__label__pos
| 0.930689 |
Home > Blog > Elasticsearch >
Elasticsearch Pagination
Elasticsearch's pagination options are limited, and if not implemented effectively, they might ruin the user experience. This article explains how to perform Elasticsearch Pagination effectively to its full potential.
Rating: 5
31362
Pagination
In the same way as SQL uses the LIMIT keyword to return a single “page” of results, Elasticsearch accepts the from and size parameters:
size
Indicates the number of results that should be returned, defaults to 10
from
Indicates the number of initial results that should be skipped, defaults to 0
Learn how to use Elasticsearch, from beginner basics to advanced techniques, Enroll Our Elasticsearch Training Today!
Elasticsearch Pagination
If a search request results in more than ten hits, ElasticSearch will, by default, only return the first ten hits. To override that default value in order to retrieve more or fewer hits, we can add a size parameter to the search request body. For instance, the below request finds all movies ordered by year and returns the first two:
A search request that searches for everything in the ‘movies’ index and sorts the result based on the ‘year’ property in descending order.
curl -XPOST "https://localhost:9200/movies/_search" -d'
{
"query": {
"match_all": {}
},
"sort": [
{
"year": "desc"
}
],
"size": 2
}'
It’s also possible to exclude the N first hits, which you typically would do when building pagination functionality. To do so, use the from parameter and inspect the total property in the response from ElasticSearch to know when to stop paging.
MindMajix Youtube Channel
The same request as the previous one, only this time the first two hits are excluded and hit 3 and 4 is returned.
curl -XPOST "https://localhost:9200/movies/_search" -d'
{
"query": {
"match_all": {}
},
"sort": [
{
"year": "desc"
}
],
"size": 2,
"from": 2
}'
Don’t set the size parameter to some huge number or you’ll get an exception back from ElasticSearch. While it’s typically fine to retrieve tens, hundreds and even thousands of results with a single request, you shouldn’t ask for millions of results using the size parameter.
If you truly need to fetch a huge number of results, or even all of them, you can either iterate over them using size + from or, better yet, you can use the scroll API.
Read these latest Elasticsearch Interview Questions that helps you grab high-paying jobs!
Retrieving only parts of DOCUMENTS
So far, in all of the examples that we’ve looked, the entire JSON object that we’ve indexed has been included as the value of the _source property in each search results hit. In many cases, that’s not needed and only results in unused data being sent over the wire.
There are a couple of ways to control what is returned for each hit. One is by adding a _source parameter to the search request body. This parameter can have values of different kinds. For instance, we can give it the value false to not include a _source property at all in the hits:
A search request that only retrieves a single hit and instructs ES not to include the _source property in the hits.
curl -XPOST "https://localhost:9200/movies/_search" -d'
{
"size": 1,
"_source": false
}'
The response to the above request will look something like the one below. Note that the only thing that is returned for each hit is the score and the meta data.
Related Page: Curl Syntax In Elasticsearch With Examples
A response without the _source property.
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 6,
"max_score": 1,
"hits": [
{
"_index": "movies",
"_type": "movie",
"_id": "4",
"_score": 1
}
]
}
}
Another way to use the _source parameter is to give it a string with a single field name. This will result in the _source property in the hits to contain only that field.
A search request instructing ElasticSearch to only include the ‘title’ property in the _source.
curl -XPOST "https://localhost:9200/movies/_search" -d'
{
"size": 1,
"_source": "title"
}'
The response to the request, which includes the ‘title’ property in the _source.
"hits": [
{
"_index": "movies",
"_type": "movie",
"_id": "4",
"_score": 1,
"_source": {
"title": "Apocalypse Now"
}
}
]
If we want multiple fields, we can instead use an array:
A search request instructing ElasticSearch to only include the ‘title’ and ‘director’ properties in the _source.
curl -XPOST "https://localhost:9200/movies/_search" -d'
{
"size": 1,
"_source": ["title", "director"]
}'
It’s also possible to include, and exclude, fields whose names match one or more patterns.
Explore Elasticsearch Sample Resumes! Download & Edit, Get Noticed by Top Employers!Download Now!
Join our newsletter
inbox
Stay updated with our newsletter, packed with Tutorials, Interview Questions, How-to's, Tips & Tricks, Latest Trends & Updates, and more ➤ Straight to your inbox!
Course Schedule
NameDates
Elasticsearch TrainingAug 27 to Sep 11
Elasticsearch TrainingAug 30 to Sep 14
Elasticsearch TrainingSep 03 to Sep 18
Elasticsearch TrainingSep 06 to Sep 21
Last updated: 26 August 2022
About Author
Yamuna Karumuri
Yamuna Karumuri is a content writer at Mindmajix.com. Her passion lies in writing articles on IT platforms including Machine learning, PowerShell, DevOps, Data Science, Artificial Intelligence, Selenium, MSBI, and so on. You can connect with her via LinkedIn.
Recommended Courses
1 /13
|
__label__pos
| 0.851837 |
Xeon vs Core Comparison
Apple has made a big splash in the month of June 2019 when it presented a renovated Mac Pro desktop computer dripping with processing and as well as graphics power. The primary mechanisms behind the latest Mac beast are Intel Xeon processors. They variety from an unidentified eight-core, 3.5 GHz Xeon W (perhaps, the Xeon W-3223), to additional as-yet-unnamed 2.5 GHz, 28-core Intel Xeon W processor, likely the Xeon W-3275 or W-3275M.
Related:
Xeon vs Core comparison
Let’s outside it; Apple’s recent workspace is not real for so many of us. Pricing for the recent Mac Pro to starts at $6,000 and intensifies up to “small business advance.” The recent desktops also own restricted promotion opportunities due to branded connectors, and its absence its vast betting potential on its Windows side.
Should you depart the abundances of Core i7 and i9 processors overdue to research with the world of Xeo, Perhaps not, and here’s why.
What is the Xeon CPU?
A Xeon is an Intel’s CPU team, and also it’s pointed primarily at business workplaces and waiters. These CPUs usually bids more cores than mainstream/majority PCs, however, the clock speeds are a slight unreliable when linked with their Core i7 and also i9 counterparts.
Intel Xeon W-3275+W-3275M, for instance, has a clock fastness which starts at 2.5 GHz and then moves up to 4.40 GHz, with an extra boost to 4.60 GHz under some definite loads. Liken that the general Core i9-9900K, that has a base clock of 3.60 GHz and an upgrade of 5.0 GHz. Obviously, its Core i9-9900K’s clock urgencies are loads enhanced for the regular PC user.
And then, you own the Xeon W-3223. This also has an eight-core, and 16-thread chip, just like the Core i9-9900K, but its own clock speed add out at 4.0 GHz, and the MSRP is about $250 greater than the i9-9900K. In short, Xeon clock speediness can also be nearby to the top Core part or well downward.
Where the Xeon instructions are power magnet and its heat creation —and not in a nice way. The Xeon chips have more power-hungry and become hotter. The 28-core, 56-thread Xeon W-3275M, for instance, it has a thermal project/design power (TDP) of 205 watts, and its W-3223 consumes a (TDP) of 160 watts. The i9-9900K, temporarily, consumes a TDP of 95 watts. You can also get closer to its Xeon with just something like the “prosumer” 16-core, 32-thread Core i9-9960X, that has a TDP of just 165 watts. Motionless, its huge mainstream of the Core i7 and i9 pieces, don’t have higher power and heat thoughts.
Why is Xeons More Expensive?
The Xeon CPUs tend to have more built-in, business- grave technology. For instance, they upkeep error- modifying code {ECC} memory, that avoids data dishonesty and the system crashes. (ECC) RAM is more luxurious and slower, so some home users locate the trade-off value it, as home PCs are pretty dependable.
For businesses wherever the uptime is mission- grave, a limited hour can rate far more than the ECC memory is value. Take monetary trading, for illustration, where dealings happen quicker than humans can recognize. If your computer goes down, or your data gets messed up, it’s a lot of lost currency for these businesses, which is why they’re eager to devote to precise technologies.
The Xeon processors which also upkeep far more RAM than Core chips-do, as healthy as heaps of PCIe lanes for linking growth cards.
When you have added a heap of cores that support for ECC tons of PCIe paths, and a big RAM provision and the price will reproduce that.
If you should ask the more cynical PC fans, however, they’ll notify you that Intel charges a greater price for Xeon since it can. Whatsoever built for a business tends to come with a higher price tag than the consumer-grade apparatus/equipment.
Is It Advisable For Me To Buy A Xeon For My PC?
Intel Core i9 processors.
More lite, all said Xeon sounds, very good: tons of cores, a respectable clock fastness in such cases, and heaps of PCIe paths. Heck, its power problems is just a request to work on usual cooling setup, right?
Perhaps. The Xeons aren’t the greatest and best choice for the regular home power user. This is also counted among the line, make a choice.
Leave a Comment
|
__label__pos
| 0.617316 |
Answers
Solutions by everydaycalculation.com
Answers.everydaycalculation.com » Compare fractions
Compare 4/24 and 2/10
4/24 is smaller than 2/10
Steps for comparing fractions
1. Find the least common denominator or LCM of the two denominators:
LCM of 24 and 10 is 120
2. For the 1st fraction, since 24 × 5 = 120,
4/24 = 4 × 5/24 × 5 = 20/120
3. Likewise, for the 2nd fraction, since 10 × 12 = 120,
2/10 = 2 × 12/10 × 12 = 24/120
4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction
5. 20/120 < 24/120 or 4/24 < 2/10
MathStep (Works offline)
Download our mobile app and learn to work with fractions in your own time:
Android and iPhone/ iPad
Related:
© everydaycalculation.com
|
__label__pos
| 0.967051 |
Back to home page
Project CMSSW displayed by LXR
File indexing completed on 2022-08-08 03:11:32
0001 /****************************************************************************
0002 *
0003 * This is a part of TOTEM offline software.
0004 * Authors:
0005 * Jan Kašpar ([email protected])
0006 * Nicola Minafra
0007 *
0008 ****************************************************************************/
0009
0010 #include "EventFilter/CTPPSRawToDigi/interface/RawDataUnpacker.h"
0011 #include "FWCore/MessageLogger/interface/MessageLogger.h"
0012 #include "DataFormats/FEDRawData/interface/FEDNumbering.h"
0013
0014 using namespace std;
0015 using namespace edm;
0016 using namespace pps;
0017
0018 RawDataUnpacker::RawDataUnpacker(const edm::ParameterSet &iConfig)
0019 : verbosity(iConfig.getUntrackedParameter<unsigned int>("verbosity", 0)) {}
0020
0021 int RawDataUnpacker::run(int fedId,
0022 const FEDRawData &data,
0023 vector<TotemFEDInfo> &fedInfoColl,
0024 SimpleVFATFrameCollection &coll) const {
0025 unsigned int size_in_words = data.size() / 8; // bytes -> words
0026 if (size_in_words < 2) {
0027 if (verbosity)
0028 LogWarning("Totem") << "Error in RawDataUnpacker::run > "
0029 << "Data in FED " << fedId << " too short (size = " << size_in_words << " words).";
0030 return 1;
0031 }
0032
0033 fedInfoColl.emplace_back(fedId);
0034
0035 return processOptoRxFrame((const word *)data.data(), size_in_words, fedInfoColl.back(), &coll);
0036 }
0037
0038 int RawDataUnpacker::processOptoRxFrame(const word *buf,
0039 unsigned int frameSize,
0040 TotemFEDInfo &fedInfo,
0041 SimpleVFATFrameCollection *fc) const {
0042 // get OptoRx metadata
0043 unsigned long long head = buf[0];
0044 unsigned long long foot = buf[frameSize - 1];
0045
0046 fedInfo.setHeader(head);
0047 fedInfo.setFooter(foot);
0048
0049 unsigned int boe = (head >> 60) & 0xF;
0050 unsigned int h0 = (head >> 0) & 0xF;
0051
0052 unsigned long lv1 = (head >> 32) & 0xFFFFFF;
0053 unsigned long bx = (head >> 20) & 0xFFF;
0054 unsigned int optoRxId = (head >> 8) & 0xFFF;
0055 unsigned int fov = (head >> 4) & 0xF;
0056
0057 unsigned int eoe = (foot >> 60) & 0xF;
0058 unsigned int f0 = (foot >> 0) & 0xF;
0059 unsigned int fSize = (foot >> 32) & 0x3FF;
0060
0061 // check header and footer structure
0062 if (boe != 5 || h0 != 0 || eoe != 10 || f0 != 0 || fSize != frameSize) {
0063 if (verbosity)
0064 LogWarning("Totem") << "Error in RawDataUnpacker::processOptoRxFrame > "
0065 << "Wrong structure of OptoRx header/footer: "
0066 << "BOE=" << boe << ", H0=" << h0 << ", EOE=" << eoe << ", F0=" << f0
0067 << ", size (OptoRx)=" << fSize << ", size (DATE)=" << frameSize << ". OptoRxID=" << optoRxId
0068 << ". Skipping frame." << endl;
0069 return 0;
0070 }
0071
0072 LogDebug("Totem") << "RawDataUnpacker::processOptoRxFrame: "
0073 << "OptoRxId = " << optoRxId << ", BX = " << bx << ", LV1 = " << lv1
0074 << ", frameSize = " << frameSize;
0075
0076 if (optoRxId >= FEDNumbering::MINTotemRPTimingVerticalFEDID &&
0077 optoRxId <= FEDNumbering::MAXTotemRPTimingVerticalFEDID) {
0078 processOptoRxFrameSampic(buf, frameSize, fedInfo, fc);
0079 return 0;
0080 }
0081
0082 // parallel or serial transmission?
0083 switch (fov) {
0084 case 1:
0085 return processOptoRxFrameSerial(buf, frameSize, fc);
0086 case 2:
0087 case 3:
0088 return processOptoRxFrameParallel(buf, frameSize, fedInfo, fc);
0089 default:
0090 break;
0091 }
0092
0093 if (verbosity)
0094 LogWarning("Totem") << "Error in RawDataUnpacker::processOptoRxFrame > "
0095 << "Unknown FOV = " << fov << endl;
0096
0097 return 0;
0098 }
0099
0100 int RawDataUnpacker::processOptoRxFrameSerial(const word *buf,
0101 unsigned int frameSize,
0102 SimpleVFATFrameCollection *fc) const {
0103 // get OptoRx metadata
0104 unsigned int optoRxId = (buf[0] >> 8) & 0xFFF;
0105
0106 // get number of subframes
0107 unsigned int subFrames = (frameSize - 2) / 194;
0108
0109 // process all sub-frames
0110 unsigned int errorCounter = 0;
0111 for (unsigned int r = 0; r < subFrames; ++r) {
0112 for (unsigned int c = 0; c < 4; ++c) {
0113 unsigned int head = (buf[1 + 194 * r] >> (16 * c)) & 0xFFFF;
0114 unsigned int foot = (buf[194 + 194 * r] >> (16 * c)) & 0xFFFF;
0115
0116 LogDebug("Totem") << "r = " << r << ", c = " << c << ": "
0117 << "S = " << (head & 0x1) << ", BOF = " << (head >> 12) << ", EOF = " << (foot >> 12)
0118 << ", ID = " << ((head >> 8) & 0xF) << ", ID' = " << ((foot >> 8) & 0xF);
0119
0120 // stop if this GOH is NOT active
0121 if ((head & 0x1) == 0)
0122 continue;
0123
0124 LogDebug("Totem") << "Header active (" << head << " -> " << (head & 0x1) << ").";
0125
0126 // check structure
0127 if (head >> 12 != 0x4 || foot >> 12 != 0xB || ((head >> 8) & 0xF) != ((foot >> 8) & 0xF)) {
0128 std::ostringstream oss;
0129 if (head >> 12 != 0x4)
0130 oss << "\n\tHeader is not 0x4 as expected (0x" << std::hex << head << ").";
0131 if (foot >> 12 != 0xB)
0132 oss << "\n\tFooter is not 0xB as expected (0x" << std::hex << foot << ").";
0133 if (((head >> 8) & 0xF) != ((foot >> 8) & 0xF))
0134 oss << "\n\tIncompatible GOH IDs in header (0x" << std::hex << ((head >> 8) & 0xF) << ") and footer (0x"
0135 << std::hex << ((foot >> 8) & 0xF) << ").";
0136
0137 if (verbosity)
0138 LogWarning("Totem") << "Error in RawDataUnpacker::processOptoRxFrame > "
0139 << "Wrong payload structure (in GOH block row " << r << " and column " << c
0140 << ") in OptoRx frame ID " << optoRxId << ". GOH block omitted." << oss.str() << endl;
0141
0142 errorCounter++;
0143 continue;
0144 }
0145
0146 // allocate memory for VFAT frames
0147 unsigned int goh = (head >> 8) & 0xF;
0148 vector<VFATFrame::word *> dataPtrs;
0149 for (unsigned int fi = 0; fi < 16; fi++) {
0150 TotemFramePosition fp(0, 0, optoRxId, goh, fi);
0151 dataPtrs.push_back(fc->InsertEmptyFrame(fp)->getData());
0152 }
0153
0154 LogDebug("Totem").log([&](auto &l) {
0155 l << "transposing GOH block at prefix: " << (optoRxId * 192 + goh * 16) << ", dataPtrs = ";
0156 for (auto p : dataPtrs) {
0157 l << p << " ";
0158 }
0159 });
0160 // deserialization
0161 for (int i = 0; i < 192; i++) {
0162 int iword = 11 - i / 16; // number of current word (11...0)
0163 int ibit = 15 - i % 16; // number of current bit (15...0)
0164 unsigned int w = (buf[i + 2 + 194 * r] >> (16 * c)) & 0xFFFF;
0165
0166 // Fill the current bit of the current word of all VFAT frames
0167 for (int idx = 0; idx < 16; idx++) {
0168 if (w & (1 << idx))
0169 dataPtrs[idx][iword] |= (1 << ibit);
0170 }
0171 }
0172 }
0173 }
0174
0175 return errorCounter;
0176 }
0177
0178 int RawDataUnpacker::processOptoRxFrameParallel(const word *buf,
0179 unsigned int frameSize,
0180 TotemFEDInfo &fedInfo,
0181 SimpleVFATFrameCollection *fc) const {
0182 // get OptoRx metadata
0183 unsigned long long head = buf[0];
0184 unsigned int optoRxId = (head >> 8) & 0xFFF;
0185
0186 // recast data as buffer or 16bit words, skip header
0187 const uint16_t *payload = (const uint16_t *)(buf + 1);
0188
0189 // read in OrbitCounter block
0190 const uint32_t *ocPtr = (const uint32_t *)payload;
0191 fedInfo.setOrbitCounter(*ocPtr);
0192 payload += 2;
0193
0194 // size in 16bit words, without header, footer and orbit counter block
0195 unsigned int nWords = (frameSize - 2) * 4 - 2;
0196
0197 // process all VFAT data
0198 for (unsigned int offset = 0; offset < nWords;) {
0199 unsigned int wordsProcessed = processVFATDataParallel(payload + offset, nWords, optoRxId, fc);
0200 offset += wordsProcessed;
0201 }
0202
0203 return 0;
0204 }
0205
0206 int RawDataUnpacker::processVFATDataParallel(const uint16_t *buf,
0207 unsigned int maxWords,
0208 unsigned int optoRxId,
0209 SimpleVFATFrameCollection *fc) const {
0210 // start counting processed words
0211 unsigned int wordsProcessed = 1;
0212
0213 // padding word? skip it
0214 if (buf[0] == 0xFFFF)
0215 return wordsProcessed;
0216
0217 // check header flag
0218 unsigned int hFlag = (buf[0] >> 8) & 0xFF;
0219 if (hFlag != vmCluster && hFlag != vmRaw && hFlag != vmDiamondCompact) {
0220 if (verbosity)
0221 LogWarning("Totem") << "Error in RawDataUnpacker::processVFATDataParallel > "
0222 << "Unknown header flag " << hFlag << ". Skipping this word." << endl;
0223 return wordsProcessed;
0224 }
0225
0226 // compile frame position
0227 // NOTE: DAQ group uses terms GOH and fiber in the other way
0228 unsigned int gohIdx = (buf[0] >> 4) & 0xF;
0229 unsigned int fiberIdx = (buf[0] >> 0) & 0xF;
0230 TotemFramePosition fp(0, 0, optoRxId, gohIdx, fiberIdx);
0231
0232 // prepare temporary VFAT frame
0233 VFATFrame f;
0234 VFATFrame::word *fd = f.getData();
0235
0236 // copy footprint, BC, EC, Flags, ID, if they exist
0237 uint8_t presenceFlags = 0;
0238
0239 if (((buf[wordsProcessed] >> 12) & 0xF) == 0xA) // BC
0240 {
0241 presenceFlags |= 0x1;
0242 fd[11] = buf[wordsProcessed];
0243 wordsProcessed++;
0244 }
0245
0246 if (((buf[wordsProcessed] >> 12) & 0xF) == 0xC) // EC, flags
0247 {
0248 presenceFlags |= 0x2;
0249 fd[10] = buf[wordsProcessed];
0250 wordsProcessed++;
0251 }
0252
0253 if (((buf[wordsProcessed] >> 12) & 0xF) == 0xE) // ID
0254 {
0255 presenceFlags |= 0x4;
0256 fd[9] = buf[wordsProcessed];
0257 wordsProcessed++;
0258 }
0259
0260 // save offset where channel data start
0261 unsigned int dataOffset = wordsProcessed;
0262
0263 // find trailer
0264 switch (hFlag) {
0265 case vmCluster: {
0266 unsigned int nCl = 0;
0267 while ((buf[wordsProcessed + nCl] >> 12) != 0xF && (wordsProcessed + nCl < maxWords))
0268 nCl++;
0269 wordsProcessed += nCl;
0270 } break;
0271 case vmRaw:
0272 wordsProcessed += 9;
0273 break;
0274 case vmDiamondCompact: {
0275 wordsProcessed--;
0276 while ((buf[wordsProcessed] & 0xFFF0) != 0xF000 && (wordsProcessed < maxWords))
0277 wordsProcessed++;
0278 } break;
0279 }
0280
0281 // process trailer
0282 unsigned int tSig = buf[wordsProcessed] >> 12;
0283 unsigned int tErrFlags = (buf[wordsProcessed] >> 8) & 0xF;
0284 unsigned int tSize = buf[wordsProcessed] & 0xFF;
0285
0286 f.setDAQErrorFlags(tErrFlags);
0287
0288 // consistency checks
0289 bool skipFrame = false;
0290 stringstream ess;
0291
0292 if (tSig != 0xF) {
0293 if (verbosity)
0294 ess << " Wrong trailer signature (" << tSig << ")." << endl;
0295 skipFrame = true;
0296 }
0297
0298 if (tErrFlags != 0) {
0299 if (verbosity)
0300 ess << " Error flags not zero (" << tErrFlags << ")." << endl;
0301 skipFrame = true;
0302 }
0303
0304 wordsProcessed++;
0305
0306 if (tSize != wordsProcessed) {
0307 if (verbosity)
0308 ess << " Trailer size (" << tSize << ") does not match with words processed (" << wordsProcessed << ")."
0309 << endl;
0310 skipFrame = true;
0311 }
0312
0313 if (skipFrame) {
0314 if (verbosity)
0315 LogWarning("Totem") << "Error in RawDataUnpacker::processVFATDataParallel > Frame at " << fp
0316 << " has the following problems and will be skipped.\n"
0317 << endl
0318 << ess.rdbuf();
0319
0320 return wordsProcessed;
0321 }
0322
0323 // get channel data - cluster mode
0324 if (hFlag == vmCluster) {
0325 for (unsigned int nCl = 0; (buf[dataOffset + nCl] >> 12) != 0xF && (dataOffset + nCl < maxWords); ++nCl) {
0326 const uint16_t &w = buf[dataOffset + nCl];
0327 unsigned int upperBlock = w >> 8;
0328 unsigned int clSize = upperBlock & 0x7F;
0329 unsigned int clPos = (w >> 0) & 0xFF;
0330
0331 // special case: upperBlock=0xD0 => numberOfClusters
0332 if (upperBlock == 0xD0) {
0333 presenceFlags |= 0x10;
0334 f.setNumberOfClusters(clPos);
0335 continue;
0336 }
0337
0338 // special case: size=0 means chip full
0339 if (clSize == 0)
0340 clSize = 128;
0341
0342 // activate channels
0343 // convention - range <pos, pos-size+1>
0344 signed int chMax = clPos;
0345 signed int chMin = clPos - clSize + 1;
0346 if (chMax < 0 || chMax > 127 || chMin < 0 || chMin > 127 || chMin > chMax) {
0347 if (verbosity)
0348 LogWarning("Totem") << "Error in RawDataUnpacker::processVFATDataParallel > "
0349 << "Invalid cluster (pos=" << clPos << ", size=" << clSize << ", min=" << chMin
0350 << ", max=" << chMax << ") at " << fp << ". Skipping this cluster." << endl;
0351 continue;
0352 }
0353
0354 for (signed int ch = chMin; ch <= chMax; ch++) {
0355 unsigned int wi = ch / 16;
0356 unsigned int bi = ch % 16;
0357 fd[wi + 1] |= (1 << bi);
0358 }
0359 }
0360 }
0361
0362 // get channel data and CRC - raw mode
0363 if (hFlag == vmRaw) {
0364 for (unsigned int i = 0; i < 8; i++)
0365 fd[8 - i] = buf[dataOffset + i];
0366
0367 // copy CRC
0368 presenceFlags |= 0x8;
0369 fd[0] = buf[dataOffset + 8];
0370 }
0371
0372 // get channel data for diamond compact mode
0373 if (hFlag == vmDiamondCompact) {
0374 for (unsigned int i = 1; (buf[i + 1] & 0xFFF0) != 0xF000 && (i + 1 < maxWords); i++) {
0375 if ((buf[i] & 0xF000) == VFAT_HEADER_OF_EC) {
0376 // Event Counter word is found
0377 fd[10] = buf[i];
0378 continue;
0379 }
0380 switch (buf[i] & 0xF800) {
0381 case VFAT_DIAMOND_HEADER_OF_WORD_2:
0382 // word 2 of the diamond VFAT frame is found
0383 fd[1] = buf[i + 1];
0384 fd[2] = buf[i];
0385 break;
0386 case VFAT_DIAMOND_HEADER_OF_WORD_3:
0387 // word 3 of the diamond VFAT frame is found
0388 fd[3] = buf[i];
0389 fd[4] = buf[i - 1];
0390 break;
0391 case VFAT_DIAMOND_HEADER_OF_WORD_5:
0392 // word 5 of the diamond VFAT frame is found
0393 fd[5] = buf[i];
0394 fd[6] = buf[i - 1];
0395 break;
0396 case VFAT_DIAMOND_HEADER_OF_WORD_7:
0397 // word 7 of the diamond VFAT frame is found
0398 fd[7] = buf[i];
0399 fd[8] = buf[i - 1];
0400 break;
0401 default:
0402 break;
0403 }
0404 presenceFlags |= 0x8;
0405 }
0406 }
0407
0408 // save frame to output
0409 f.setPresenceFlags(presenceFlags);
0410 fc->Insert(fp, f);
0411
0412 return wordsProcessed;
0413 }
0414
0415 int RawDataUnpacker::processOptoRxFrameSampic(const word *buf,
0416 unsigned int frameSize,
0417 TotemFEDInfo &fedInfo,
0418 SimpleVFATFrameCollection *fc) const {
0419 unsigned int optoRxId = (buf[0] >> 8) & 0xFFF;
0420
0421 LogDebug("RawDataUnpacker::processOptoRxFrameSampic")
0422 << "Processing sampic frame: OptoRx " << optoRxId << " framesize: " << frameSize;
0423
0424 unsigned int orbitCounterVFATFrameWords = 6;
0425 unsigned int sizeofVFATPayload = 12;
0426
0427 const VFATFrame::word *VFATFrameWordPtr = (const VFATFrame::word *)buf;
0428 VFATFrameWordPtr += orbitCounterVFATFrameWords - 1;
0429
0430 LogDebug("RawDataUnpacker::processOptoRxFrameSampic")
0431 << "Framesize: " << frameSize << "\tframes: " << frameSize / (sizeofVFATPayload + 2);
0432
0433 unsigned int nWords = (frameSize - 2) * 4 - 2;
0434
0435 for (unsigned int i = 1; i * (sizeofVFATPayload + 2) < nWords; ++i) {
0436 // compile frame position
0437 // NOTE: DAQ group uses terms GOH and fiber in the other way
0438 unsigned int fiberIdx = (*(++VFATFrameWordPtr)) & 0xF;
0439 unsigned int gohIdx = (*VFATFrameWordPtr >> 4) & 0xF;
0440 TotemFramePosition fp(0, 0, optoRxId, gohIdx, fiberIdx);
0441
0442 LogDebug("RawDataUnpacker::processOptoRxFrameSampic")
0443 << "OptoRx: " << optoRxId << " Goh: " << gohIdx << " Idx: " << fiberIdx;
0444
0445 // prepare temporary VFAT frame
0446 VFATFrame frame(++VFATFrameWordPtr);
0447 VFATFrameWordPtr += sizeofVFATPayload;
0448
0449 if (*(VFATFrameWordPtr) != 0xf00e) {
0450 edm::LogError("RawDataUnpacker::processOptoRxFrameSampic") << "Wrong trailer " << *VFATFrameWordPtr;
0451 continue;
0452 }
0453 // save frame to output
0454 frame.setPresenceFlags(1);
0455 fc->Insert(fp, frame);
0456
0457 LogDebug("RawDataUnpacker::processOptoRxFrameSampic") << "Trailer: " << std::hex << *VFATFrameWordPtr;
0458 }
0459
0460 return 0;
0461 }
|
__label__pos
| 0.903271 |
FRSCA: Tekton
Overview
In our blog series about FRSCA we’ve already deployed a hardened Kubernetes cluster with the help of kOps, Trivy, the NSA/CISA Kubernetes Hardening Guide, and the CIS Benchmark, then we added SPIFFE/SPIRE to our cluster for workload identities and Vault for secret management. In this final installment of the series we deploy Tekton and integrate it with SPIRE and Vault to generate container images with signed provenance.
Tekton is a Kubernetes-native open source framework for creating continuous integration and delivery (CI/CD) systems. Using the Kubernetes model of declarative primitives and specifications, adopters can build, test, and deploy across multiple cloud providers or on-premises systems without having to worry about any underlying implementation details.
Some of the benefits to using Tekton include:
• The ability to define pipelines, the individual tasks undertaken by pipelines, and the parameters consumed by pipelines as code
• Each task in a pipeline runs inside its own pod, allowing users to allocate just the resources necessary to perform a task; no need for bloated CI/CD servers loaded with (exploitable) tools necessary
• Like Kubernetes, Tekton is extremely expandable. Tasks can be shared through the Tekton community hub to provide functionality for many use cases
Tekton Pipelines
Tekton itself is a collection of tools. The most basic is the Pipelines tool. There are others, including the Chains tool, which allows Tekton to perform artifact signing with Cosign (among other options). Tekton installs and runs as an extension on a Kubernetes cluster and comprises a set of Kubernetes Custom Resources (CRDs) that define the building blocks used to create and reuse pipelines.
Once installed, Tekton Pipelines become available via the Kubernetes CLI (kubectl) and via API calls, just like pods and other resources. Tekton also has the tkn command line client, though for the sake of simplicity and to demonstrate just how Kubernetes-native its approach is, you will be shown how to do all of the Tekton operations using the manifests and kubectl.
To install Tekton we can simply tell Kubernetes that our desired state is to have Tekton running. Deploy the Tekton GA release manifests to the cluster:
$ kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
role.rbac.authorization.k8s.io/tekton-pipelines-controller created
role.rbac.authorization.k8s.io/tekton-pipelines-webhook created
role.rbac.authorization.k8s.io/tekton-pipelines-leader-election created
role.rbac.authorization.k8s.io/tekton-pipelines-info created
serviceaccount/tekton-pipelines-controller created
serviceaccount/tekton-pipelines-webhook created
serviceaccount/tekton-events-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-info created
rolebinding.rbac.authorization.k8s.io/tekton-events-controller created
rolebinding.rbac.authorization.k8s.io/tekton-events-controller-leaderelection created
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/customruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/resolutionrequests.resolution.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/verificationpolicies.tekton.dev created
secret/webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pipeline.tekton.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.pipeline.tekton.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.pipeline.tekton.dev created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-edit created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-view created
configmap/config-defaults created
configmap/feature-flags created
configmap/pipelines-info created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-registry-cert created
configmap/config-spire created
deployment.apps/tekton-pipelines-controller created
service/tekton-pipelines-controller created
deployment.apps/tekton-events-controller created
service/tekton-events-controller created
namespace/tekton-pipelines-resolvers created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-resolvers-resolution-request-updates created
role.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
serviceaccount/tekton-pipelines-resolvers created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
configmap/bundleresolver-config created
configmap/cluster-resolver-config created
configmap/resolvers-feature-flags created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/git-resolver-config created
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
$
As you can see from the kubectl output, the Tekton release creates a namespace called tekton-pipelines and then generates many resources, including a host of security primitives (service accounts, RBAC roles and bindings, etc.).
You may also notice that Tekton defines several Custom Resource Definitions (CRDs), in particular:
• Task – useful for simple workloads such as running a test, a lint, or building a Kaniko cache; a single Task executes a sequence of steps in a single Kubernetes Pod, uses a single disk, and generally keeps things simple
• Pipeline – useful for complex workloads, such as static analysis, as well as testing, building, and deploying complex projects; pipelines are defined as a series of Tasks
Both Tasks and Pipelines can be executed multiple times. Each instance of a run is known as a TaskRun or a PipelineRun respectively.
List the pods in the tekton-pipelines namespace:
$ kubectl get po -n tekton-pipelines
NAME READY STATUS RESTARTS AGE
tekton-events-controller-77857f9b75-lkvk2 1/1 Running 0 59s
tekton-pipelines-controller-6987c95899-cxhqb 1/1 Running 0 59s
tekton-pipelines-webhook-7d9c8c6f8-xslkv 1/1 Running 0 59s
$
When you see all of the pods report STATUSRunning” and READY1/1“, you are clear to continue.
Establishing the pull-build-containerize-push pipeline
Now that Tekton Pipelines is installed and functional, it is time to assemble the initial Pipeline. It will be simple:
• Pull the source code from the RX-M Hostinfo public git repo
• Perform the Docker build using Kaniko
• Kaniko pushes the built image to an on-cluster registry
To perform these, the user must define several Kubernetes objects:
• Tekton Tasks that define the tools necessary
• Tekton Pipelines that organize Tasks into a sequence and define things like common storage and variables
• Tekton Pipelineruns that feed user-defined parameters into Pipelines and effectively trigger them
Look and see if your cluster has any of these things in it:
$ kubectl get tasks,pipelines,pipelineruns -A
No resources found
$
Nothing. The Tekton Pipelines installation you did earlier is very lightweight.
To begin, you need to add a couple of Tasks. Tekton tasks are available through both the Tekton community hub at https://hub.tekton.dev/ or ArtifactHub at https://artifacthub.io/.
We need two Tasks for this initial pipeline: git-clone and kaniko.
Installing these is as simple as applying their YAML files to the cluster:
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml \
-f https://raw.githubusercontent.com/tektoncd/catalog/main/task/kaniko/0.6/kaniko.yaml
task.tekton.dev/git-clone created
task.tekton.dev/kaniko created
$ kubectl get tasks
NAME AGE
git-clone 29s
kaniko 29s
$
Some of the major elements of each Task are:
• The image of the container that will run when the Task is in progress
• Parameters that can be passed into the application executed by the Task
• The kinds of workspace (volume) options available to the Task
Now that you have Tasks, it’s time to assemble them into a sequence. This is defined as a Pipeline resource.
Create a specification for a Pipeline as shown below (we will explain the parts after):
$ nano pipeline.yaml && cat $_
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-build-push
spec:
description: |
This pipeline clones a git repo, builds a Docker image with Kaniko and
pushes it to a registry
params:
- name: context-path
type: string
- name: dockerfile-path
type: string
- name: extra-args
type: array
- name: image-reference
type: string
- name: repo-url
type: string
workspaces:
- name: shared-data
tasks:
- name: fetch-source
taskRef:
name: git-clone
workspaces:
- name: output
workspace: shared-data
params:
- name: url
value: $(params.repo-url)
- name: build-push
runAfter: ["fetch-source"]
taskRef:
name: kaniko
workspaces:
- name: source
workspace: shared-data
params:
- name: CONTEXT
value: $(params.context-path)
- name: DOCKERFILE
value: $(params.dockerfile-path)
- name: EXTRA_ARGS
value: ["$(params.extra-args[*])"]
- name: IMAGE
value: $(params.image-reference)
$
This Pipeline defines the following:
• The unique identifier of name: clone-build-push
• Four parameters that will allow users to provide arguments at runtime: context-path, dockerfile-path, extra-args, image-reference, and repo-url
params:
- name: context-path
type: string
- name: dockerfile-path
type: string
- name: extra-args
type: array
- name: image-reference
type: string
- name: repo-url
type: string
• A workspace, which defines a Kubernetes persistent volume (PV) that allows each Task pod to share data
workspaces:
- name: shared-data
• The sequence of Tasks, which references the existing Tasks in the namespace, defines what workspace they use, and how each Task consumes that defined parameters in the pipeline
tasks:
- name: fetch-source
taskRef:
name: git-clone
workspaces:
- name: output
workspace: shared-data
params:
- name: url
value: $(params.repo-url)
- name: build-push
runAfter: ["fetch-source"]
taskRef:
name: kaniko
workspaces:
- name: source
workspace: shared-data
params:
- name: CONTEXT
value: $(params.context-path)
- name: DOCKERFILE
value: $(params.dockerfile-path)
- name: EXTRA_ARGS
value: ["$(params.extra-args[*])"]
- name: IMAGE
value: $(params.image-reference)
Once the Pipeline is defined, apply it to the cluster:
$ kubectl apply -f pipeline.yaml
pipeline.tekton.dev/clone-build-push created
$
It is now ready to run!
To run the Pipeline, you need to create a PipelineRun object in the Kubernetes API. This object defines:
• The run-specific values for the parameters defined in the Pipeline
• The definition of storage to be used by the run of the Pipeline
• Other modifications such as security context settings for the Task pods
Create a yaml file for the PipelineRun with the following contents:
$ nano pipelinerun.yaml && cat $_
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: clone-build-push-run-
spec:
pipelineRef:
name: clone-build-push
taskRunTemplate:
podTemplate:
securityContext:
fsGroup: 65532
workspaces:
- name: shared-data
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
params:
- name: context-path
value: ./python
- name: dockerfile-path
value: ./python/Dockerfile
- name: extra-args
value:
- --insecure=true
- name: image-reference
value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
- name: repo-url
value: https://github.com/RX-M/hostinfo.git
$
As soon as you apply this PipelineRun to the cluster, the Pipeline will trigger.
Before we do that, we need a container registry where we can push the image. To keep things simple and self-contained we will show you how to install a basic, on-cluster registry so you can reproduce this blog without any external dependencies. If you want to use an external registry, you will need to change 2 settings in the PipelineRun:
- name: extra-args
value: "--insecure=true"
- name: image-reference
value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
• Change the --insecure flag to false if you are using a secure registry (which you should!)
• Change the FQIN reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton to a valid registry FQIN in the pattern <hostname:port/account/repo:tag>
Deploy the on-cluster registry by adding the following repo to Helm and install the registry with the service port set to 80:
$ helm repo add twuni https://helm.twun.io
$ helm install reg twuni/docker-registry --namespace registry --create-namespace --set=”service.port=80”
NAME: reg
LAST DEPLOYED: Thu Mar 14 04:56:40 2024
NAMESPACE: registry
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace registry -l "app=docker-registry,release=reg" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl -n registry port-forward $POD_NAME 8080:5000
$
This registry uses a ClusterIP service so it is only available within our cluster. This is fine as the Kaniko builder is running there too!
Make sure you use kubectl create to allow the name to automatically generate:
$ kubectl create -f pipelinerun.yaml
pipelinerun.tekton.dev/clone-build-push-run-zw28x created
$
Now watch the pipeline work! A good way is to use the -w switch so you can keep track of status changes on the pods:
$ kubectl get po -w
NAME READY STATUS RESTARTS AGE
affinity-assistant-2793d681c2-0 1/1 Running 0 9s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 Init:1/2 0 8s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 Init:1/2 0 9s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 PodInitializing 0 10s
clone-build-push-run-wtqq7-fetch-source-pod 1/1 Running 0 14s
clone-build-push-run-wtqq7-fetch-source-pod 1/1 Running 0 14s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 Completed 0 17s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 Completed 0 18s
clone-build-push-run-wtqq7-build-push-pod 0/2 Pending 0 0s
clone-build-push-run-wtqq7-build-push-pod 0/2 Pending 0 0s
clone-build-push-run-wtqq7-build-push-pod 0/2 Init:0/3 0 1s
clone-build-push-run-wtqq7-fetch-source-pod 0/1 Completed 0 19s
clone-build-push-run-wtqq7-build-push-pod 0/2 Init:1/3 0 2s
clone-build-push-run-wtqq7-build-push-pod 0/2 Init:2/3 0 3s
clone-build-push-run-wtqq7-build-push-pod 0/2 PodInitializing 0 5s
clone-build-push-run-wtqq7-build-push-pod 2/2 Running 0 10s
clone-build-push-run-wtqq7-build-push-pod 2/2 Running 0 10s
clone-build-push-run-wtqq7-build-push-pod 0/2 Completed 0 2m18s
affinity-assistant-2793d681c2-0 1/1 Terminating 0 2m37s
affinity-assistant-2793d681c2-0 0/1 Terminating 0 2m37s
affinity-assistant-2793d681c2-0 0/1 Terminating 0 2m38s
affinity-assistant-2793d681c2-0 0/1 Terminating 0 2m38s
affinity-assistant-2793d681c2-0 0/1 Terminating 0 2m38s
clone-build-push-run-wtqq7-build-push-pod 0/2 Completed 0 2m19s
clone-build-push-run-wtqq7-build-push-pod 0/2 Completed 0 2m20s
^C
$
To make sure it worked, issue a curl command to your local registry’s /v2/_catalog from a temporary pod:
$ kubectl run -it --rm curl --image rxmllc/tools
/ # curl reg-docker-registry.registry/v2/_catalog
{"repositories":["hostinfo"]}
/ # exit
Session ended, resume using 'kubectl attach curl -c curl -i -t' command when the pod is running
pod "curl" deleted
$
The pipeline worked!
Since the build used pods, you can view the logs on each pod to audit what they may have done. Tekton uses tekton.dev/pipelineTask labels to label its pods by Task. Recall that our Tasks were named fetch-source and build-push; use these labels to get the logs for your pods:
$ kubectl logs -l tekton.dev/pipelineTask=fetch-source
Defaulted container "step-clone" out of: step-clone, prepare (init), place-scripts (init)
+ cd /workspace/output/
+ git rev-parse HEAD
+ RESULT_SHA=d69d7ff7101a093225f6a830753d1d40a928e423
+ EXIT_CODE=0
+ '[' 0 '!=' 0 ]
+ git log -1 '--pretty=%ct'
+ RESULT_COMMITTER_DATE=1706322186
+ printf '%s' 1706322186
+ printf '%s' d69d7ff7101a093225f6a830753d1d40a928e423
+ printf '%s' https://github.com/RX-M/hostinfo.git
$ kubectl logs -l tekton.dev/pipelineTask=build-push
Defaulted container "step-build-and-push" out of: step-build-and-push, step-write-url, prepare (init), place-scripts (init), working-dir-initializer (init)
INFO[0157] args: [-c chown 1000:1000 __main__.py]
INFO[0157] Running: [/bin/sh -c chown 1000:1000 __main__.py]
INFO[0157] Taking snapshot of full filesystem...
INFO[0157] USER 1000
INFO[0157] cmd: USER
INFO[0157] ENV PYTHONUNBUFFERED=1
INFO[0157] ENTRYPOINT ["./__main__.py"]
INFO[0157] CMD ["9898"]
INFO[0157] Pushing image to reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
INFO[0933] Pushed image to 1 destinations
$
To clean these up, we remove just the PipelineRun object (below we use the alias pr):
$ kubectl get pr
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
clone-build-push-run-v5p4z True Succeeded 26m 10m
$ kubectl delete pr clone-build-push-run-v5p4z
pipelinerun.tekton.dev "clone-build-push-run-v5p4z" deleted
$
If you ever needed to debug the PipelineRun, you can see relevant information using kubectl describe.
Now, how about we add an SBOM to the mix?
Add an SBOM to the Pipeline
Expanding a pipeline in Tekton is a matter of adding the appropriate task to the pipeline. For SBOMs, Anchore has prepared a syft task which you can install onto your cluster, documented here: https://hub.tekton.dev/tekton/task/syft. The main purpose of this task is to give your CI/CD pipelines the ability to automatically create a new SBOM after a container is created (or at any point of your pipeline, really).
Install the syft Tekton task:
$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/syft/0.1/syft.yaml
task.tekton.dev/syft created
$
The syft Task takes an array of arguments accepted by the standalone syft binary.
Next, modify the Pipeline so it now has a syft Task:
$ nano sbom-pipeline.yaml && cat $_
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: clone-build-push
spec:
description: |
This pipeline clones a git repo, builds a Docker image with Kaniko and
pushes it to a registry
params:
- name: context-path
type: string
- name: dockerfile-path
type: string
- name: extra-args
type: array
- name: image-reference
type: string
- name: repo-url
type: string
- name: syft-args # add this
type: array # add this
workspaces:
- name: shared-data
tasks:
- name: fetch-source
taskRef:
name: git-clone
workspaces:
- name: output
workspace: shared-data
params:
- name: url
value: $(params.repo-url)
- name: build-push
runAfter: ["fetch-source"]
taskRef:
name: kaniko
workspaces:
- name: source
workspace: shared-data
params:
- name: CONTEXT
value: $(params.context-path)
- name: DOCKERFILE
value: $(params.dockerfile-path)
- name: EXTRA_ARGS
value: ["$(params.extra-args[*])"]
- name: IMAGE
value: $(params.image-reference)
- name: syft-sbom # add everything from here down
runAfter: ["build-push"]
taskRef:
name: syft
workspaces:
- name: source-dir
workspace: shared-data
params:
- name: ARGS
value: ["$(params.syft-args[*])"]
$
This new Task will use the shared-data workspace and consumes an array of arguments known as syft-args. The arguments are passed into the task’s ARGS variable.
Apply the updated Pipeline:
$ kubectl apply -f sbom-pipeline.yaml
pipeline.tekton.dev/clone-build-push configured
$
Next, create a PipelineRun that accommodates for the new argument (ARGS) from syft:
$ nano sbom-pipelinerun.yaml && cat $_
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: clone-build-push-run-
spec:
pipelineRef:
name: clone-build-push
taskRunTemplate:
podTemplate:
securityContext:
fsGroup: 65532
workspaces:
- name: shared-data
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
params:
- name: context-path
value: ./python
- name: dockerfile-path
value: ./python/Dockerfile
- name: extra-args
value:
- --insecure=true
- name: image-reference
value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
- name: repo-url
value: https://github.com/RX-M/hostinfo.git
- name: syft-args # add this and everything below
value:
- reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
$
By providing just the image as part of the arguments, syft will generate an SBOM in its own format and output it its container’s stdout stream.
Create the new PipelineRun and watch the pods again using the -w:
$ kubectl create -f sbom-pipelinerun.yaml ; kubectl get po -w
pipelinerun.tekton.dev/clone-build-push-run-dztl4 created
NAME READY STATUS RESTARTS AGE
affinity-assistant-89945f810c-0 1/1 Running 0 11s
clone-build-push-run-dztl4-build-push-pod 0/2 Init:1/3 0 3s
clone-build-push-run-dztl4-fetch-source-pod 0/1 Completed 0 11s
clone-build-push-run-dztl4-build-push-pod 0/2 Init:2/3 0 4s
clone-build-push-run-dztl4-build-push-pod 0/2 PodInitializing 0 5s
clone-build-push-run-dztl4-build-push-pod 2/2 Running 0 6s
clone-build-push-run-dztl4-build-push-pod 2/2 Running 0 6s
clone-build-push-run-dztl4-build-push-pod 1/2 NotReady 0 117s
clone-build-push-run-dztl4-build-push-pod 0/2 Completed 0 118s
clone-build-push-run-dztl4-build-push-pod 0/2 Completed 0 119s
clone-build-push-run-dztl4-build-push-pod 0/2 Completed 0 2m
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Pending 0 0s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Pending 0 0s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Init:0/2 0 0s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Init:0/2 0 1s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Init:1/2 0 2s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 PodInitializing 0 3s
clone-build-push-run-dztl4-syft-sbom-pod 1/1 Running 0 5s
clone-build-push-run-dztl4-syft-sbom-pod 1/1 Running 0 5s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Completed 0 7s
affinity-assistant-89945f810c-0 1/1 Terminating 0 2m15s
affinity-assistant-89945f810c-0 0/1 Terminating 0 2m16s
affinity-assistant-89945f810c-0 0/1 Terminating 0 2m16s
affinity-assistant-89945f810c-0 0/1 Terminating 0 2m16s
affinity-assistant-89945f810c-0 0/1 Terminating 0 2m16s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Completed 0 8s
clone-build-push-run-dztl4-syft-sbom-pod 0/1 Completed 0 9s
^C
$
That new syft-sbom-pod indicates that Tekton is successfully running the new SBOM-based step!
Once everything completes, check the logs to see if the SBOM was actually created:
$ kubectl logs -l tekton.dev/pipelineTask=syft-sbom
Defaulted container "step-syft" out of: step-syft, prepare (init), working-dir-initializer (init)
python 3.12.2 binary
readline 8.2.1-r2 apk
scanelf 1.3.7-r2 apk
setuptools 69.0.3 python
sqlite-libs 3.44.2-r0 apk
ssl_client 1.36.1-r15 apk
tzdata 2023d-r0 apk
wheel 0.42.0 python
xz-libs 5.4.5-r0 apk
zlib 1.3.1-r0 apk
$
Great! Syft is now properly running and generates a SBOM.
Remember that with SBOM, machine readability and access to the SBOM document are key. So, with that in mind, we will want to produce an artifact that we can sign later and push. In fact, say you have a requirement to provide a SPDX-formatted JSON SBOM.
Since the syft Task takes arguments accepted by the standalone binary, those can easily be added as arguments to the PipelineRun:
$ nano sbom-pipelinerun.yaml && cat $_
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: clone-build-push-run-
spec:
pipelineRef:
name: clone-build-push
taskRunTemplate:
podTemplate:
securityContext:
fsGroup: 65532
workspaces:
- name: shared-data
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
params:
- name: context-path
value: ./python
- name: dockerfile-path
value: ./python/Dockerfile
- name: extra-args
value:
- --insecure=true
- name: image-reference
value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
- name: repo-url
value: https://github.com/RX-M/hostinfo.git
- name: syft-args
value:
- reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
- -o # add this argument and the one below it
- spdx-json
$
With these arguments, syft will generate the SBOM in the SPDX JSON format.
Delete the old PipelineRun and create the new one:
$ kubectl delete pr --all; sleep 5; kubectl create -f sbom-pipelinerun.yaml
pipelinerun.tekton.dev "clone-build-push-run-dztl4" deleted
pipelinerun.tekton.dev/clone-build-push-run-tqldn created
$
Watch the PipelineRun’s status on the API:
$ kubectl get pr -w
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
clone-build-push-run-tqldn Unknown Running 40s
clone-build-push-run-tqldn Unknown Running 87s
clone-build-push-run-tqldn True Succeeded 92s 0s
^C
$
ctrl c after you see it in the Suceeded Reason. Examine the logs of the syft-sbom pod:
$ kubectl logs $(kubectl get po -l tekton.dev/pipelineTask=syft-sbom -o name)
Defaulted container "step-syft" out of: step-syft, prepare (init), working-dir-initializer (init)
{
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",
"name": "reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton",
"documentNamespace": "https://anchore.com/syft/image/reg-docker-registry.registry.svc.cluster.local/hostinfo-tekton-a20e356f-8006-46b1-838c-84ee3d4a7982",
"creationInfo": {
"licenseListVersion": "3.21",
"creators": [
"Organization: Anchore, Inc",
"Tool: syft-0.85.0"
],
"created": "2024-03-14T17:25:22Z"
},
"packages": [
{
"name": ".python-rundeps",
"SPDXID": "SPDXRef-Package-apk-.python-rundeps-0ed183b6f816579a",
"versionInfo": "20240207.221705",
"downloadLocation": "NONE",
"filesAnalyzed": false,
"sourceInfo": "acquired package info from APK DB: /lib/apk/db/installed",
"licenseConcluded": "NOASSERTION",
"licenseDeclared": "NOASSERTION",
"copyrightText": "NOASSERTION",
"description": "virtual meta package",
"externalRefs": [
{
"referenceCategory": "SECURITY",
"referenceType": "cpe23Type",
"referenceLocator": "cpe:2.3:a:.python-rundeps:.python-rundeps:20240207.221705:*:*:*:*:*:*:*"
},
...
Now that you have created the SBOM you can do a variety of next steps:
• Save it as a file rather than echoing it to stdout
• Sign the SBOM with Cosign and push it to a container registry
• Generate an attestation with in-toto to help assure your consumers that they can trust your SBOM
We will do all of these to get as close to SLSA Level 3 as possible; to do so we will need Tekton Chains.
Set up Tekton Chains
Tekton Chains is a Kubernetes CRD controller that observes all Tekton TaskRun executions in your cluster, then, when a TaskRun completes, Chains takes a snapshot of it. Chains then converts this snapshot to one or more standard payload formats, signs it and stores them somewhere.
Current features include:
• Signing TaskRun results and OCI Images with user provided cryptographic keys
• Attestation formats like in-toto and SLSA
• Signing with a variety of cryptographic key types and services (x509, KMS)
• Support for multiple storage backends for signatures
We can install Chains in much the same way we installed Tekton Pipelines. Apply the GA release manifests for Tekton Chains to your Kubernetes cluster:
$ kubectl apply -f https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml
namespace/tekton-chains created
secret/signing-secrets created
configmap/chains-config created
deployment.apps/tekton-chains-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
serviceaccount/tekton-chains-controller created
role.rbac.authorization.k8s.io/tekton-chains-leader-election created
rolebinding.rbac.authorization.k8s.io/tekton-chains-controller-leaderelection created
role.rbac.authorization.k8s.io/tekton-chains-info created
rolebinding.rbac.authorization.k8s.io/tekton-chains-info created
configmap/chains-info created
configmap/config-logging created
configmap/tekton-chains-config-observability created
service/tekton-chains-metrics created
$
Like Tekton Pipelines, Tekton Chains creates its own namespace, tekton-chains.
To verify the installation was successful, wait until the Tekton Chains Controller has STATUSRunning“:
$ kubectl get po -n tekton-chains
NAME READY STATUS RESTARTS AGE
tekton-chains-controller-84c978f497-5mcvg 1/1 Running 0 33s
$
Give the Chains Controller a SVID
In previous blogs, we deployed SPIRE and Vault on our FRSCA Kubernetes cluster, established trust between our SPIRE server and our Vault server, and used a SPIFFE Verifiable Identity Document (SVID) to retrieve a Vault secret. In this step we will create a registration entry in our SPIRE server for the Kubernetes Service Account used by the Tekton Chains controller so that it too can use a SVID to retrieve Vault secrets.
Make sure you change the cluster name frsca.rx-m.net in the example below to your cluster name.
$ kubectl exec -n spire spire-server-0 -- \
/opt/spire/bin/spire-server entry create \
-spiffeID spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller \
-parentID spiffe://frsca.rx-m.net/ns/spire/sa/spire-agent \
-selector k8s:ns:tekton-chains \
-selector k8s:sa:tekton-chains-controller
Defaulted container "spire-server" out of: spire-server, spire-oidc
Entry ID : 37cc234e-4cd8-43fe-9336-e392ca49044e
SPIFFE ID : spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller
Parent ID : spiffe://frsca.rx-m.net/ns/spire/sa/spire-agent
Revision : 0
X509-SVID TTL : default
JWT-SVID TTL : default
Selector : k8s:ns:tekton-chains
Selector : k8s:sa:tekton-chains-controller
$
With our registration complete, next we configure Vault to trust the Tekton Chains controller’s SVID.
Enable Vault Transit
In this step we will enable the Vault Transit engine to perform secretless/keyless code signing with Tekton Chains. The primary use case for Transit is to encrypt data from applications while still storing that encrypted data in some primary data store. Meaning, keys never leave the Vault. Instead the data is sent to Vault to get encrypted/decrypted/signed/verified. This resolves the issue of having signing keys on a local machine and resolves the issue of managing K8s secrets to access the signing keys. Instead we utilize the SVID of the Chains controller to authenticate against Vault for signed provenance.
Exec an interactive shell in the Vault pod:
$ kubectl exec -n vault -it vault-0 -- /bin/sh
/ $
You should not need to login again but if you do, take the initial Vault root token from the Initialization and export it as an environment variable, then use vault login to authenticate with the server:
```
/ $ export VAULT_ROOT_KEY=hvs.UoYoI2i…
/ $ vault login $VAULT_ROOT_KEY
Success! You are now authenticated…
/ $
First we will update the JWT config we created in our Vault blog so that the default_role is set to the spire-chains-controller, a role we will create subsequently.
/ $ vault write auth/jwt/config oidc_discovery_url=https://oidc-discovery.rx-m.net default_role="spire-chains-controller"
Success! Data written to: auth/jwt/config
/ $ vault read auth/jwt/config
Key Value
--- -----
bound_issuer n/a
default_role spire-chains-controller
jwks_ca_pem n/a
jwks_url n/a
jwt_supported_algs []
jwt_validation_pubkeys []
namespace_in_state true
oidc_client_id n/a
oidc_discovery_ca_pem n/a
oidc_discovery_url https://oidc-discovery.rx-m.net
oidc_response_mode n/a
oidc_response_types []
provider_config map[]
/ $
Now, create the role, ensuring that the bound_subject matches the value we used for the SPIFFIE ID (-spiffeID) with the spire-server entry create command from the previous section (spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller in our case)
Make sure you change the cluster name frsca.rx-m.net in the example below to your cluster name.
/ $ vault write auth/jwt/role/spire-chains-controller \
role_type=jwt \
user_claim=sub \
bound_audiences=BLOG \
bound_subject=spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller \
token_ttl=15m \
token_policies=spire-transit
Success! Data written to: auth/jwt/role/spire-chains-controller
/ $
Enable the transit engine:
/ $ vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
/ $
Write the spire-transit configuration that enables access to the transit engine for the “frsca” key, which we will create immediately afterwards.
/ $ vault policy write spire-transit - <<EOF
path "transit/*" {
capabilities = ["read"]
}
path "transit/sign/frsca" {
capabilities = ["create", "read", "update"]
}
path "transit/sign/frsca/*" {
capabilities = ["read", "update"]
}
path "transit/verify/frsca" {
capabilities = ["create", "read", "update"]
}
path "transit/verify/frsca/*" {
capabilities = ["read", "update"]
}
EOF
Success! Uploaded policy: spire-transit
/ $
Generate the transit key:
/ $ vault write transit/keys/frsca type=ecdsa-p521
Key Value
--- -----
allow_plaintext_backup false
auto_rotate_period 0s
deletion_allowed false
derived false
exportable false
imported_key false
keys map[1:map[certificate_chain: creation_time:2024-03-15T04:34:59.042408917Z name:P-521 public_key:-----BEGIN PUBLIC KEY-----
MIGbMBAGByqGSM49AgEGBSuBBAAjA...
-----END PUBLIC KEY-----
]]
latest_version 1
min_available_version 0
min_decryption_version 1
min_encryption_version 0
name frsca
supports_decryption false
supports_derivation false
supports_encryption false
supports_signing true
type ecdsa-p521
/ $
Exit your interactive exec session (we need to use tools that aren’t in the Vault container).
/ $ exit
$
Use a one-time kubectl exec command to read the key, parse it with jq and store it in a local file on the bastion server:
$ kubectl exec -i -n vault vault-0 -- /bin/sh -c "vault read -format=json transit/keys/frsca" | jq -r .data.keys.\"1\".public_key > "frsca.pem"
$
Create a configmap from the local file:
$ kubectl -n vault create configmap frsca-certs --from-file=frsca.pem
configmap/frsca-certs created
$
Finally, create the signing secret from the same file:
$ kubectl -n tekton-chains create secret generic signing-secrets --from-file=cosign.pub=frsca.pem
secret/signing-secrets created
$
Configure Chains
Tekton Chains creates the provenance for Task and Pipeline runs, then signs it using our secure private key. Chains then uploads the signed provenance to a user-specified location. Chains can be configured to upload to various systems:
• OCI compliant registry, convenient because image and provenance can be stored together
• A backend implementing the Grafeas API, defined by Google for storing provenance
• A Google Cloud Storage Bucket, standard object storage
• A Firestore document store
• Others
To update Tekton Chains we will patch its configmap with a yaml file that:
• Updates the storage for an OCI registry
• Sets the attestation format to SLSA
• Sets the signer to kms and points to Vault’s ClusterIP
• Uses Vault keys with cosign for signing and verification
• The URI format for Hashicorp Vault KMS is: hashivault://$keyname, in our case hashivault://frsca (the key we created in Vault earlier)
• Specifies the OIDC role to the JWT config in Vault, linked to our SVID
$ nano chains-patch-config.yaml && cat $_
data:
artifacts.taskrun.storage: tekton,oci
artifacts.taskrun.format: slsa/v1
artifacts.pipelinerun.storage: tekton,oci
artifacts.pipelinerun.format: slsa/v1
artifacts.oci.signer: kms
artifacts.taskrun.signer: kms
artifacts.pipelinerun.signer: kms
signers.kms.kmsref: "hashivault://frsca"
signers.kms.auth.address: "http://vault.vault:8200"
signers.kms.auth.oidc.path: jwt
signers.kms.auth.oidc.role: "spire-chains-controller"
signers.kms.auth.spire.sock: "unix:///spiffe-workload-api/agent.sock"
signers.kms.auth.spire.audience: BLOG
$
Use the file to patch the configmap:
$ kubectl -n tekton-chains patch cm chains-config --patch-file chains-patch-config.yaml
configmap/chains-config patched
$
Now, edit the tekton-chains-controller deployment as follows:
• Under spec.template.containers[0].volumeMounts for the only container add a volume mount for the spire-agent-socket so that Chains can use the SPIRE Workload API
• Under spec.template.volumes add the spire-agent-socket as a hostPath volume (our SPIRE agent DaemonSet is currently using hostPath instead of the SPIRE CSI driver but we may change that later)
$ kubectl -n tekton-chains edit deploy tekton-chains-controller
…
volumeMounts:
- mountPath: /etc/signing-secrets
name: signing-secrets
- mountPath: /var/run/sigstore/cosign
name: oidc-info
- mountPath: /spiffe-workload-api # Add this
name: spire-agent-socket # Add this
...
volumes:
- name: signing-secrets
secret:
defaultMode: 420
secretName: signing-secrets
- name: oidc-info
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: sigstore
expirationSeconds: 600
path: oidc-token
- hostPath: # Add this
path: /run/spire/sockets # Add this
name: spire-agent-socket # Add this
Save your edits and exit the editor.
The Chains Controller runs under a Kubernetes Deployment and should start a rolling update:
$ kubectl -n tekton-chains get deploy,po
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tekton-chains-controller 1/1 1 1 25h
NAME READY STATUS RESTARTS AGE
pod/tekton-chains-controller-84c978f497-x2l2b 1/1 Running 0 25h
pod/tekton-chains-controller-d5cb79688-24gc7 0/1 ContainerCreating 0 3s
$
Success! Now the Tekton Chains controller pod has access to the SPIRE workload API and has the configuration to use it!
SLSA Level 2 Pipeline
With the prerequisites in place we can turn our focus to creating a Pipeline that generates our SBOM and attestations, signs them, and stores them in our on-cluster registry. First we need to replace the basic syft task with one that can perform all that. Create the following Task:
$ nano demo-bom-task-syft.yaml && cat $_
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: demo-syft-bom-generator
spec:
params:
- name: image-ref
type: string
- name: image-digest
type: string
- default: frsca-sbom.json
description: filepath to store the sbom artifacts
name: sbom-filepath
type: string
- default: "true"
name: syft-http
type: string
- default: "debug"
name: syft-log-level
type: string
- default: "true"
name: syft-skip-tls
type: string
results:
- description: status of syft task, possible value are-success|failure
name: status
type: string
- description: name of the uploaded SBOM artifact
name: SBOM_IMAGE_URL
type: string
- description: digest of the uploaded SBOM artifact
name: SBOM_IMAGE_DIGEST
type: string
stepTemplate:
computeResources: {}
env:
- name: SYFT_LOG_LEVEL
value: $(params.syft-log-level)
- name: SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY
value: $(params.syft-skip-tls)
- name: SYFT_REGISTRY_INSECURE_USE_HTTP
value: $(params.syft-http)
steps:
- args:
- -o
- spdx-json
- --file
- $(workspaces.source.path)/$(params.sbom-filepath)
- $(params.image-ref)
image: anchore/syft:v0.58.0@sha256:b764278a9a45f3493b78b8708a4d68447807397fe8c8f59bf21f18c9bee4be94
name: syft-bom-generator
- args:
- attach
- sbom
- --sbom
- $(workspaces.source.path)/$(params.sbom-filepath)
- --type
- spdx
- $(params.image-ref)
image: gcr.io/projectsigstore/cosign:v2.0.0@sha256:728944a9542a7235b4358c4ab2bcea855840e9d4b9594febca5c2207f5da7f38
name: attach-sbom
workspaces:
- name: source
$
Note the args for the steps; rather than outputting the SBOM to stdout like we did before, we are configuring the Task to output files which will get pushed to our registry.
Apply the new Task to the cluster:
$ kubectl apply -f demo-bom-task-syft.yaml
task.tekton.dev/demo-syft-bom-generator created
$
Our new Pipeline will reference this task as well as the git-clone and kaniko tasks we used earlier. Create the Pipeline:
$ nano demo-pipeline.yaml && cat $_
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: demo-pipeline
spec:
params:
- name: context-path
type: string
- name: dockerfile-path
type: string
- name: extra-args
type: array
- name: image
type: string
- name: imageRepo
type: string
- name: imageTag
type: string
- name: SOURCE_URL
type: string
- name: syft-skip-tls
type: string
- name: syft-http
type: string
tasks:
- name: clone-repo
params:
- name: url
value: $(params.SOURCE_URL)
- name: deleteExisting
value: "true"
taskRef:
kind: Task
name: git-clone
workspaces:
- name: output
workspace: git-source
- name: build-and-push-image
params:
- name: CONTEXT
value: $(params.context-path)
- name: DOCKERFILE
value: $(params.dockerfile-path)
- name: EXTRA_ARGS
value: ["$(params.extra-args[*])"]
- name: IMAGE
value: $(params.image)
runAfter:
- clone-repo
taskRef:
kind: Task
name: kaniko
workspaces:
- name: source
workspace: git-source
- name: generate-bom
params:
- name: image-ref
value: $(params.image)
- name: image-digest
value: $(tasks.build-and-push-image.results.IMAGE_DIGEST)
- name: syft-skip-tls
value: $(params.syft-skip-tls)
- name: syft-http
value: $(params.syft-http)
runAfter:
- build-and-push-image
taskRef:
kind: Task
name: demo-syft-bom-generator
workspaces:
- name: source
workspace: git-source
workspaces:
- name: git-source
$
Like before, this Pipeline clones the source repo, builds and pushes the image, and generates the SBOM. The difference is that Chains will attest to these steps, sign the attestation and add the attestation to the registry.
Apply the Pipeline to the cluster:
$ kubectl apply -f demo-pipeline.yaml
pipeline.tekton.dev/demo-pipeline created
$
Last step, create the PipelineRun:
$ nano demo-pipelinerun.yaml && cat $_
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: demo-pipeline-run-
spec:
params:
- name: context-path
value: ./python
- name: dockerfile-path
value: ./python/Dockerfile
- name: extra-args
value:
- --insecure=true
- --verbosity=debug
- name: image
value: reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2
- name: imageRepo
value: reg-docker-registry.registry.svc.cluster.local/hostinfo
- name: imageTag
value: slsa2
- name: SOURCE_URL
value: https://github.com/RX-M/hostinfo.git
- name: syft-skip-tls
value: true
- name: syft-http
value: true
pipelineRef:
name: demo-pipeline
taskRunTemplate:
podTemplate:
securityContext:
fsGroup: 65532
timeouts:
pipeline: 1h0m0s
workspaces:
- name: git-source
volumeClaimTemplate:
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
$
Several of the params in this PipelineRun let us push the artifacts to our on-cluster registry which is insecure. Again, for the purposes of this blog that lets us keep things self-contained but when setting this up for a secure registry you should remove the --insecure flag:
- name: extra-args
value:
- --insecure=true
Also set the following parameters to false:
- name: syft-skip-tls
value: false
- name: syft-http
value: false
Looking back at the Task, these parameters hydrate the following environment variables:
- name: SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY
value: $(params.syft-skip-tls)
- name: SYFT_REGISTRY_INSECURE_USE_HTTP
value: $(params.syft-http)
If you wanted to make sure that an insecure registry was never used you would need to modify the Task, the Pipeline, and the PipelineRun to remove all the insecure references. Those modifications are beyond the scope of this blog.
Create your PipelineRun:
$ kubectl create -f demo-pipelinerun.yaml
pipelinerun.tekton.dev/demo-pipeline-run-qggsq created
$
Get your PipelineRun (pr), TaskRuns (tr), and pods (po):
$ kubectl get pr,tr,po
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
pipelinerun.tekton.dev/demo-pipeline-run-qggsq True Succeeded 52s 16s
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
taskrun.tekton.dev/demo-pipeline-run-qggsq-build-and-push-image True Succeeded 38s 25s
taskrun.tekton.dev/demo-pipeline-run-qggsq-clone-repo True Succeeded 52s 38s
taskrun.tekton.dev/demo-pipeline-run-qggsq-generate-bom True Succeeded 25s 16s
NAME READY STATUS RESTARTS AGE
pod/demo-pipeline-run-qggsq-build-and-push-image-pod 0/2 Completed 0 38s
pod/demo-pipeline-run-qggsq-clone-repo-pod 0/1 Completed 0 52s
pod/demo-pipeline-run-qggsq-generate-bom-pod 0/2 Completed 0 25s
$
Wait until the PipelineRun and TaskRuns say Succeeded and the pods’ status are Completed (as they are in the example above) and the image and its related metadata files should be in our registry.
To examine the artifacts we will install Crane, a tool created by Google for interacting with remote images and registries. Install Crane using the following commands:
$ VERSION=$(curl -s "https://api.github.com/repos/google/go-containerregistry/releases/latest" | jq -r '.tag_name')
$ OS=Linux
$ ARCH=x86_64
$ curl -sL "https://github.com/google/go-containerregistry/releases/download/${VERSION}/go-containerregistry_${OS}_${ARCH}.tar.gz" > go-containerregistry.tar.gz
$ sudo tar -zxvf go-containerregistry.tar.gz -C /usr/local/bin/ crane
Now we can use crane ls to list the artifacts in our registry. In the example below we use the LoadBalancer DNS which is reachable from the kOps Bastion server. You can also use the Cluster IP from anywhere inside the cluster.
$ kubectl get svc -n registry
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
reg-docker-registry LoadBalancer 100.65.5.252 a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com 80:30681/TCP,443:32647/TCP 48m
$ crane ls a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo --insecure
slsa2
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.sbom
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.att
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.sig
$
The artifacts in our registry are as follows:
• slsa2 – this is our tagged image
• sha256-<hash>.sbom – this is the SBOM generated by the syft task
• sha256-<hash>.att – the attestation file generated by Chains
• sha256-<hash>.sig – the signature file
We can download the attestation and verify the signature using Cosign, which should already be installed but if it isn’t you can install it with the following commands:
$ curl -O -L "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"
$ sudo mv cosign-linux-amd64 /usr/local/bin/cosign
$ sudo chmod +x /usr/local/bin/cosign
Using the cosign download command download the attestation and decode the payload:
$ cosign download attestation --allow-insecure-registry --allow-http-registry a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2 | jq -r .payload | base64 --decode > att.json
Examine the attestation:
$ cat att.json | jq
{
"_type": "https://in-toto.io/Statement/v0.1",
"predicateType": "https://slsa.dev/provenance/v0.2",
"subject": [
{
"name": "reg-docker-registry.registry.svc.cluster.local/hostinfo",
"digest": {
"sha256": "d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9"
}
}
],
"predicate": {
"builder": {
"id": "https://tekton.dev/chains/v2"
},
"buildType": "tekton.dev/v1beta1/TaskRun",
"invocation": {
"configSource": {},
"parameters": {
"BUILDER_IMAGE": "gcr.io/kaniko-project/executor:v1.5.1@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5",
"CONTEXT": "./python",
"DOCKERFILE": "./python/Dockerfile",
"EXTRA_ARGS": [
"--insecure=true",
"--verbosity=debug"
],
"IMAGE": "reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2"
},
"environment": {
"annotations": {
"pipeline.tekton.dev/affinity-assistant": "affinity-assistant-91cdf251b8",
"pipeline.tekton.dev/release": "d714545",
"tekton.dev/categories": "Image Build",
"tekton.dev/displayName": "Build and upload container image using Kaniko",
"tekton.dev/pipelines.minVersion": "0.17.0",
"tekton.dev/platforms": "linux/amd64,linux/arm64,linux/ppc64le",
"tekton.dev/tags": "image-build"
},
"labels": {
"app.kubernetes.io/managed-by": "tekton-pipelines",
"app.kubernetes.io/version": "0.6",
"tekton.dev/memberOf": "tasks",
"tekton.dev/pipeline": "demo-pipeline",
"tekton.dev/pipelineRun": "demo-pipeline-run-qggsq",
"tekton.dev/pipelineTask": "build-and-push-image",
"tekton.dev/task": "kaniko"
}
}
},
"buildConfig": {
"steps": [
{
"entryPoint": "",
"arguments": [
"--insecure=true",
"--verbosity=debug",
"--dockerfile=./python/Dockerfile",
"--context=/workspace/source/./python",
"--destination=reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2",
"--digest-file=/tekton/results/IMAGE_DIGEST"
],
"environment": {
"container": "build-and-push",
"image": "oci://gcr.io/kaniko-project/executor@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5"
},
"annotations": null
},
{
"entryPoint": "set -e\nimage=\"reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2\"\necho -n \"${image}\" | tee \"/tekton/results/IMAGE_URL\"\n",
"arguments": null,
"environment": {
"container": "write-url",
"image": "oci://docker.io/library/bash@sha256:c523c636b722339f41b6a431b44588ab2f762c5de5ec3bd7964420ff982fb1d9"
},
"annotations": null
}
]
},
"metadata": {
"buildStartedOn": "2024-05-24T22:32:40Z",
"buildFinishedOn": "2024-05-24T22:32:53Z",
"completeness": {
"parameters": false,
"environment": false,
"materials": false
},
"reproducible": false
},
"materials": [
{
"uri": "oci://gcr.io/kaniko-project/executor",
"digest": {
"sha256": "c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5"
}
},
{
"uri": "oci://docker.io/library/bash",
"digest": {
"sha256": "c523c636b722339f41b6a431b44588ab2f762c5de5ec3bd7964420ff982fb1d9"
}
}
]
}
}
$
We can also verify the signature using the cosign verify command:
$ cosign verify --insecure-ignore-tlog --allow-insecure-registry --key k8s://tekton-chains/signing-secrets a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2
WARNING: Skipping tlog verification is an insecure practice that lacks of transparency and auditability verification for the signature.
Verification for a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2 --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key
[{"critical":{"identity":{"docker-reference":"reg-docker-registry.registry.svc.cluster.local/hostinfo"},"image":{"docker-manifest-digest":"sha256:d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9"},"type":"cosign container image signature"},"optional":null}]
$
Cosign has verified the signature! We can now provide signed provenance to our users/customers.
Conclusion
Using FRSCA tooling we have achieved SLSA Build Level 2, which as a reminder requires:
All of Build Level 1:
• Software producer follows a consistent build process so that others can form expectations about what a “correct” build looks like
• The Dockerfile in our git repo and the Pipeline definition are both considered “consistent build process”
• Provenance exists
• The Syft task generates our SBOM and Chains provides the attestations
• Software producer distributes provenance to consumers, preferably using a convention determined by the package ecosystem
• Since our FRSCA cluster uses the registry, we are distributing the provenance using a convention of the container ecosystem
Plus Build Level 2:
• Build platform runs on dedicated infrastructure
• Our kOps based cluster meets this requirement
• Provenance is tied to build infrastructure through a digital signature
• Our provenance is signed by a key that is only accessible to the build platform in the Vault server
At this point this is as far as we can go; SLSA Build Level 3 is not possible in a FRSCA-based cluster at the moment. This is because Chains has no way to verify that a given TaskRun it received wasn’t modified by anybody other than Tekton, during or after execution and Tekton Pipelines can’t verify that the results it reads weren’t modified. This means that currently unfalsifiable provenance is impossible to achieve. The solution is a tighter integration with SPIRE and there is ongoing work in the community to make that happen; check out TEP-0089 for more details.
While this is the last blog in our FRSCA series, the road to unfalsifiable provenance for software artifacts is a marathon and as the projects in the FRSCA architecture evolve and improve we will be following along. Thanks for taking this journey with us!
|
__label__pos
| 0.523308 |
Take the 2-minute tour ×
Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free.
This question already has an answer here:
I have a class which holds a list of data's (Eg: List), there are methods available in classes
1. to update the list
2. insert new items to list and
3. delete any items from the list
the above insert, update and delete methods are being called from multiple threads. So i have to provide lock as the following Object locker = new Object();
// Insert method
lock(locker)
{
// Insert to list
}
// Update method
lock(locker)
{
// Update the list
}
Now my question is which kind of locking method is good, whether to use a lock object as above or use the "syncroot" method of locking the list as below. Please advice.
// Insert method
lock(((ICollection)myList).SynRoot)
{
// Insert to list
}
// Update method
lock(((ICollection)myList).SynRoot)
{
// Update the list
}
Thanks
share|improve this question
marked as duplicate by Frédéric Hamidi, Marc Gravell Mar 4 '13 at 9:02
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
1
Since .NET 4 there are specialised collections in the framework such as ConcurrentBag, have you had a look at using any of those? – Adrian Thompson Phillips Mar 4 '13 at 9:03
1 Answer 1
i would suggest do some re factoring and
try looking into Blocking Collection instead of using lock explicitly if it suites with your requirement
share|improve this answer
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.846848 |
Change Username and full name.
• 03 Nov 2023
• 1 Minute to read
• PDF
Change Username and full name.
• PDF
Article Summary
The following PowerShell script helps the IT Admins change the username and full name of the users on the managed Windows devices.
1. Create a file on your desktop, for example, To_change_username_and_fullname.ps1 and open it in a text editor like notepad++
2. Copy the contents below to the file or click here to download the file.
Shell
# Define the old and new usernames and full names
$oldUsername = "old_username"
$newUsername = "new_username"
$newFullName = "new_fullname"
# Check if the old username exists
if (Get-WmiObject Win32_UserAccount | Where-Object { $_.Name -eq $oldUsername }) {
# Rename the user account (username)
Rename-LocalUser -Name $oldUsername -NewName $newUsername
# Modify the user's full name
Get-WmiObject Win32_UserAccount -Filter "Name='$newUsername'" | ForEach-Object {
$_.FullName = $newFullName
$_.Put()
}
Write-Host "User account $oldUsername has been renamed to $newUsername, and the full name has been updated to $newFullName."
} else {
Write-Host "User account $oldUsername not found."
}
1. Replace the data in the script as shown below with the correct new usernames and new full name, for example:
Shell
$oldUsername = "Jinny"
$newUsername = "Liza"
$newFullName = "Liza"
3. Follow our guide to upload & publish the PowerShell script using Scalefusion Dashboard.
4. Once the script is successfully executed, you will be able to see the status of the same in the View Status report on the Scalefusion dashboard.
Please note that to use the PowerShell scripts, the Scalefusion MDM Agent Application must be installed on the device(s). Please follow our guideto publish and install the Scalefusion MDM Agent Application.
Notes:
1. The scripts and their contents are sourced from various albeit authenticated Microsoft sources and forums.
2. Please validate the scripts on a test machine before deploying them on all your managed devices.
3. Scalefusion has tested these scripts, however, Scalefusion will not be responsible for any loss of data or system malfunction that may arise due to the incorrect usage of these scripts.
Was this article helpful?
|
__label__pos
| 0.962715 |
Changeset 28edb3adc96a3b765795c2ed0be9da72e031acbf
Show
Ignore:
Timestamp:
03/07/07 21:58:57 (8 years ago)
Author:
Theo Schlossnagle <[email protected]>
git-committer:
Theo Schlossnagle <[email protected]> 1173304737 +0000
git-parent:
[cb639856e3261b2963a940623c42f8a8481eb078]
git-author:
Theo Schlossnagle <[email protected]> 1173304737 +0000
Message:
first pass at lock handling, refs #6
Files:
Legend:
Unmodified
Added
Removed
Modified
Copied
Moved
• zetaback
rcb63985 r28edb3a
55use MIME::Base64;
66use POSIX qw/strftime/;
7use Fcntl qw/:flock/;
78use Pod::Usage;
89
9 use vars qw/%conf $version_string
10use vars qw/%conf %locks $version_string
1011 $CONF $BLOCKSIZE $DEBUG $HOST $BACKUP
1112 $RESTORE $RESTORE_HOST $RESTORE_ZFS $TIMESTAMP
357358 return "$bytes b";
358359}
360sub lock($;$$) {
361 my ($host, $file, $nowait) = @_;
362 print "Acquiring lock for $host:$file\n" if($DEBUG);
363 $file ||= 'master.lock';
364 my $store = config_get($host, 'store');
365 $store =~ s/%h/$host/g;
366 return 1 if(exists($locks{"$host:$file"}));
367 open(LOCK, "+>>$store/$file") || return 0;
368 unless(flock(LOCK, LOCK_EX | ($nowait ? LOCK_NB : 0))) {
369 close(LOCK);
370 return 0;
371 }
372 $locks{"$host:$file"} = \*LOCK;
373 return 1;
374}
375sub unlock($;$$) {
376 my ($host, $file, $remove) = @_;
377 print "Releasing lock for $host:$file\n" if($DEBUG);
378 $file ||= 'master.lock';
379 my $store = config_get($host, 'store');
380 $store =~ s/%h/$host/g;
381 return 0 unless(exists($locks{"$host:$file"}));
382 *UNLOCK = $locks{$file};
383 unlink("$store/$file") if($remove);
384 flock(UNLOCK, LOCK_UN);
385 close(UNLOCK);
386 return 1;
387}
359388sub scan_for_backups($) {
360389 my %info = ();
719748 print "Planning '$host'\n" if($DEBUG);
720749 my $agent = config_get($host, 'agent');
721 open(ZFSLIST, "ssh $host $agent -l |") || next;
722 foreach my $diskline (<ZFSLIST>) {
723 chomp($diskline);
724 next unless($diskline =~ /^(\S+) \[([^\]]*)\]/);
725 my $diskname = $1;
726 my %snaps;
727 map { $snaps{$_} = 1 } (split(/,/, $2));
728
729 # If we are being selective (via -z) now is the time.
730 next
731 if($diskpat && # if the pattern was specified it could
732 !($diskname eq $diskpat || # be a specific match or a
733 ($diskpat =~ /^\/(.+)\/$/ && $diskname =~ /$1/))); # regex
734
735 print " => Scanning '$store' for old backups of '$diskname'.\n" if($DEBUG);
736 # Make directory on demand
737 my $backup_info = scan_for_backups($store);
738 # That gave us info on all backups, we just want this disk
739 $backup_info = $backup_info->{$diskname} || {};
740
741 # Should we do a backup?
742 my $backup_type = 'no';
743 if(time() > $backup_info->{last_backup} + config_get($host, 'backup_interval')) {
744 $backup_type = 'incremental';
745 }
746 if(time() > $backup_info->{last_full} + config_get($host, 'full_interval')) {
747 $backup_type = 'full';
748 }
749
750 # If we want an incremental, but have no full, then we need to upgrade to full
751 if($backup_type eq 'incremental') {
752 my $have_full_locally = 0;
753 # For each local full backup, see if the full backup still exists on the other end.
754 foreach (keys %{$backup_info->{'full'}}) {
755 $have_full_locally = 1 if(exists($snaps{'__zb_full_' . $_}));
756 }
757 $backup_type = 'full' unless($have_full_locally);
758 }
759
760 print " => doing $backup_type backup\n" if($DEBUG);
761 # We need to drop a __zb_base snap or a __zb_incr snap before we proceed
762 unless($NUETERED) {
763 if($backup_type eq 'full') {
764 eval { zfs_full_backup($host, $diskname, $store); };
765 if ($@) {
766 chomp(my $err = $@);
767 print " => failure $err\n";
750 my $took_action = 1;
751 while($took_action) {
752 $took_action = 0;
753 my @disklist;
754
755 # We need a lock for the listing.
756 return unless(lock($host, ".list"));
757 open(ZFSLIST, "ssh $host $agent -l |") || next;
758 @disklist = grep { chomp } (<ZFSLIST>);
759 close(ZFSLIST);
760
761 foreach my $diskline (@disklist) {
762 chomp($diskline);
763 next unless($diskline =~ /^(\S+) \[([^\]]*)\]/);
764 my $diskname = $1;
765 my %snaps;
766 map { $snaps{$_} = 1 } (split(/,/, $2));
767
768 # If we are being selective (via -z) now is the time.
769 next
770 if($diskpat && # if the pattern was specified it could
771 !($diskname eq $diskpat || # be a specific match or a
772 ($diskpat =~ /^\/(.+)\/$/ && $diskname =~ /$1/))); # regex
773
774 print " => Scanning '$store' for old backups of '$diskname'.\n" if($DEBUG);
775
776 # Make directory on demand
777 my $backup_info = scan_for_backups($store);
778 # That gave us info on all backups, we just want this disk
779 $backup_info = $backup_info->{$diskname} || {};
780
781 # Should we do a backup?
782 my $backup_type = 'no';
783 if(time() > $backup_info->{last_backup} + config_get($host, 'backup_interval')) {
784 $backup_type = 'incremental';
785 }
786 if(time() > $backup_info->{last_full} + config_get($host, 'full_interval')) {
787 $backup_type = 'full';
788 }
789
790 # If we want an incremental, but have no full, then we need to upgrade to full
791 if($backup_type eq 'incremental') {
792 my $have_full_locally = 0;
793 # For each local full backup, see if the full backup still exists on the other end.
794 foreach (keys %{$backup_info->{'full'}}) {
795 $have_full_locally = 1 if(exists($snaps{'__zb_full_' . $_}));
768796 }
769 else {
770 # Unless there was an error backing up, remove all the other full snaps
771 foreach (keys %snaps) {
772 zfs_remove_snap($host, $diskname, $_) if(/^__zb_full_(\d+)/)
797 $backup_type = 'full' unless($have_full_locally);
798 }
799
800 print " => doing $backup_type backup\n" if($DEBUG);
801 # We need to drop a __zb_base snap or a __zb_incr snap before we proceed
802 unless($NUETERED || $backup_type eq 'no') {
803 # attempt to lock this action, if it fails, skip -- someone else is working it.
804 next unless(lock($host, dir_encode($diskname), 1));
805 unlock($host, '.list');
806
807 if($backup_type eq 'full') {
808 eval { zfs_full_backup($host, $diskname, $store); };
809 if ($@) {
810 chomp(my $err = $@);
811 print " => failure $err\n";
773812 }
813 else {
814 # Unless there was an error backing up, remove all the other full snaps
815 foreach (keys %snaps) {
816 zfs_remove_snap($host, $diskname, $_) if(/^__zb_full_(\d+)/)
817 }
818 }
819 $took_action = 1;
774820 }
775 }
776 if($backup_type eq 'incremental') {
777 zfs_remove_snap($host, $diskname, '__zb_incr') if($snaps{'__zb_incr'});
778 # Find the newest full from which to do an incremental (NOTE: reverse numeric sort)
779 my @fulls = sort { $b <=> $a } (keys %{$backup_info->{'full'}});
780 zfs_incremental_backup($host, $diskname, $fulls[0], $store);
781 }
782 }
783 }
784 close(ZFSLIST);
821 if($backup_type eq 'incremental') {
822 zfs_remove_snap($host, $diskname, '__zb_incr') if($snaps{'__zb_incr'});
823 # Find the newest full from which to do an incremental (NOTE: reverse numeric sort)
824 my @fulls = sort { $b <=> $a } (keys %{$backup_info->{'full'}});
825 zfs_incremental_backup($host, $diskname, $fulls[0], $store);
826 $took_action = 1;
827 }
828 unlock($host, dir_encode($diskname), 1);
829 }
830 last if($took_action);
831 }
832 unlock($host, '.list');
833 }
785834}
786835
|
__label__pos
| 0.828951 |
641
Does anybody know how to get hold of an element defined in a component template? Polymer makes it really easy with the $ and $$.
I was just wondering how to go about it in Angular.
Take the example from the tutorial:
import {Component} from '@angular/core';
@Component({
selector:'display',
template:`
<input #myname (input)="updateName(myname.value)"/>
<p>My name : {{myName}}</p>
`
})
export class DisplayComponent {
myName: string = "Aman";
updateName(input: String) {
this.myName = input;
}
}
How do I catch hold or get a reference of the p or input element from within the class definition?
0
14 Answers 14
1132
Instead of injecting ElementRef and using querySelector or similar from there, a declarative way can be used instead to access elements in the view directly:
<input #myname>
@ViewChild('myname') input;
element
ngAfterViewInit() {
console.log(this.input.nativeElement.value);
}
StackBlitz example
• @ViewChild() supports directive or component type as parameter, or the name (string) of a template variable.
• @ViewChildren() also supports a list of names as comma separated list (currently no spaces allowed @ViewChildren('var1,var2,var3')).
• @ContentChild() and @ContentChildren() do the same but in the light DOM (<ng-content> projected elements).
descendants
@ContentChildren() is the only one that allows to also query for descendants
@ContentChildren(SomeTypeOrVarName, {descendants: true}) someField;
{descendants: true} should be the default but is not in 2.0.0 final and it's considered a bug
This was fixed in 2.0.1
read
If there are a component and directives the read parameter allows to specify which instance should be returned.
For example ViewContainerRef that is required by dynamically created components instead of the default ElementRef
@ViewChild('myname', { read: ViewContainerRef }) target;
subscribe changes
Even though view children are only set when ngAfterViewInit() is called and content children are only set when ngAfterContentInit() is called, if you want to subscribe to changes of the query result, it should be done in ngOnInit()
https://github.com/angular/angular/issues/9689#issuecomment-229247134
@ViewChildren(SomeType) viewChildren;
@ContentChildren(SomeType) contentChildren;
ngOnInit() {
this.viewChildren.changes.subscribe(changes => console.log(changes));
this.contentChildren.changes.subscribe(changes => console.log(changes));
}
direct DOM access
can only query DOM elements, but not components or directive instances:
export class MyComponent {
constructor(private elRef:ElementRef) {}
ngAfterViewInit() {
var div = this.elRef.nativeElement.querySelector('div');
console.log(div);
}
// for transcluded content
ngAfterContentInit() {
var div = this.elRef.nativeElement.querySelector('div');
console.log(div);
}
}
get arbitrary projected content
See Access transcluded content
22
• 13
The angular teams advised against using ElementRef, this is the better solution. Mar 30 '16 at 14:32
• 8
Actually input also is an ElementRef, but you get the reference to the element you actually want, instead of querying it from the host ElementRef. Mar 30 '16 at 14:35
• 40
Actually using ElementRef is just fine. Also using ElementRef.nativeElement with Renderer is fine. What is discouraged is accessing properties of ElementRef.nativeElement.xxx directly. Jun 3 '16 at 12:33
• 2
@Natanael I don't know if or where this is explicitly documented but it is mentioned regularly in issues or other discussions (also from Angular team members) that direct DOM access should be avoided. Accessing the DOM directly (which is what accessing properties and methods of ElementRef.nativeElement) is, prevents you from using Angulars server side rendering and WebWorker feature (I don't know if it also breaks the upcoming offline template compiler - but I guess not). Jun 14 '16 at 10:30
• 11
As mentioned above in the read section, if you want to get the nativeElement for an element with ViewChild, you have to do the following: @ViewChild('myObj', { read: ElementRef }) myObj: ElementRef;
– jsgoupil
Aug 18 '16 at 23:02
215
You can get a handle to the DOM element via ElementRef by injecting it into your component's constructor:
constructor(private myElement: ElementRef) { ... }
Docs: https://angular.io/docs/ts/latest/api/core/index/ElementRef-class.html
8
• 1
@Brocco can you update your answer? I'd like to see a current solution since ElementRef is gone.
– Jefftopia
Nov 24 '15 at 2:07
• 23
ElementRef is available (again?). Feb 4 '16 at 19:15
• 10
link Use this API as the last resort when direct access to DOM is needed. Use templating and data-binding provided by Angular instead. Alternatively you take a look at Renderer which provides API that can safely be used even when direct access to native elements is not supported. Relying on direct DOM access creates tight coupling between your application and rendering layers which will make it impossible to separate the two and deploy your application into a web worker. Apr 26 '17 at 10:40
• @sandeeptalabathula What is a better option for finding an element to attach a floating date picker component from a third-party library to? I'm aware that this wasn't the original question, but you make it out that finding elements in the DOM is bad in all scenarios...
– Llama
Jul 24 '17 at 5:52
• 13
@john Ah.. okay. You may try out this - this.element.nativeElement.querySelector('#someElementId') and pass ElementRef to the constructor like this.. private element: ElementRef, Import lib... import { ElementRef } from '@angular/core'; Jul 25 '17 at 8:05
55
import { Component, ElementRef, OnInit } from '@angular/core';
@Component({
selector:'display',
template:`
<input (input)="updateName($event.target.value)">
<p> My name : {{ myName }}</p>
`
})
class DisplayComponent implements OnInit {
constructor(public element: ElementRef) {
this.element.nativeElement // <- your direct element reference
}
ngOnInit() {
var el = this.element.nativeElement;
console.log(el);
}
updateName(value) {
// ...
}
}
Example updated to work with the latest version
For more details on native element, here
0
21
Angular 4+: Use renderer.selectRootElement with a CSS selector to access the element.
I've got a form that initially displays an email input. After the email is entered, the form will be expanded to allow them to continue adding information relating to their project. However, if they are not an existing client, the form will include an address section above the project information section.
As of now, the data entry portion has not been broken up into components, so the sections are managed with *ngIf directives. I need to set focus on the project notes field if they are an existing client, or the first name field if they are new.
I tried the solutions with no success. However, Update 3 in this answer gave me half of the eventual solution. The other half came from MatteoNY's response in this thread. The result is this:
import { NgZone, Renderer } from '@angular/core';
constructor(private ngZone: NgZone, private renderer: Renderer) {}
setFocus(selector: string): void {
this.ngZone.runOutsideAngular(() => {
setTimeout(() => {
this.renderer.selectRootElement(selector).focus();
}, 0);
});
}
submitEmail(email: string): void {
// Verify existence of customer
...
if (this.newCustomer) {
this.setFocus('#firstname');
} else {
this.setFocus('#description');
}
}
Since the only thing I'm doing is setting the focus on an element, I don't need to concern myself with change detection, so I can actually run the call to renderer.selectRootElement outside of Angular. Because I need to give the new sections time to render, the element section is wrapped in a timeout to allow the rendering threads time to catch up before the element selection is attempted. Once all that is setup, I can simply call the element using basic CSS selectors.
I know this example dealt primarily with the focus event, but it's hard for me that this couldn't be used in other contexts.
UPDATE: Angular dropped support for Renderer in Angular 4 and removed it completely in Angular 9. This solution should not be impacted by the migration to Renderer2. Please refer to this link for additional information: Renderer migration to Renderer2
2
17
For people trying to grab the component instance inside a *ngIf or *ngSwitchCase, you can follow this trick.
Create an init directive.
import {
Directive,
EventEmitter,
Output,
OnInit,
ElementRef
} from '@angular/core';
@Directive({
selector: '[init]'
})
export class InitDirective implements OnInit {
constructor(private ref: ElementRef) {}
@Output() init: EventEmitter<ElementRef> = new EventEmitter<ElementRef>();
ngOnInit() {
this.init.emit(this.ref);
}
}
Export your component with a name such as myComponent
@Component({
selector: 'wm-my-component',
templateUrl: 'my-component.component.html',
styleUrls: ['my-component.component.css'],
exportAs: 'myComponent'
})
export class MyComponent { ... }
Use this template to get the ElementRef AND MyComponent instance
<div [ngSwitch]="type">
<wm-my-component
#myComponent="myComponent"
*ngSwitchCase="Type.MyType"
(init)="init($event, myComponent)">
</wm-my-component>
</div>
Use this code in TypeScript
init(myComponentRef: ElementRef, myComponent: MyComponent) {
}
13
*/
import {Component,ViewChild} from '@angular/core' /*Import View Child*/
@Component({
selector:'display'
template:`
<input #myname (input) = "updateName(myname.value)"/>
<p> My name : {{myName}}</p>
`
})
export class DisplayComponent{
@ViewChild('myname')inputTxt:ElementRef; /*create a view child*/
myName: string;
updateName: Function;
constructor(){
this.myName = "Aman";
this.updateName = function(input: String){
this.inputTxt.nativeElement.value=this.myName;
/*assign to it the value*/
};
}
}
2
• 12
Please provide some explanation to this code. Simply code dumping without explanation is highly discouraged.
– rayryeng
Jan 16 '17 at 14:38
• 5
This won't work: attributes set via @ViewChild annotations will only be available after ngAfterViewInit lifecycle event. Accessing the value in the constructor would yield an undefined value for inputTxt in that case.
– David M.
Mar 23 '17 at 19:38
13
import the ViewChild decorator from @angular/core, like so:
HTML Code:
<form #f="ngForm">
...
...
</form>
TS Code:
import { ViewChild } from '@angular/core';
class TemplateFormComponent {
@ViewChild('f') myForm: any;
.
.
.
}
now you can use 'myForm' object to access any element within it in the class.
Source
3
• But you should notice that you almost not need to access template elements in the component class, you just need to well understand the angular logic correctly.
– Hany
Nov 2 '17 at 14:47
• 7
Dont use any, the type is ElementRef
– Johannes
Dec 19 '17 at 14:25
• I ended up having to use 'any' because the element that I needed access to was another angular component which was wrapping a Kendo UI element, I needed to call a method on the component, which then calls a method on the Kendo element. Dec 17 '20 at 21:45
5
Note: This doesn't apply to Angular 6 and above as ElementRef became ElementRef<T> with T denoting the type of nativeElement.
I would like to add that if you are using ElementRef, as recommended by all answers, then you will immediately encounter the problem that ElementRef has an awful type declaration that looks like
export declare class ElementRef {
nativeElement: any;
}
this is stupid in a browser environment where nativeElement is an HTMLElement.
To workaround this you can use the following technique
import {Inject, ElementRef as ErrorProneElementRef} from '@angular/core';
interface ElementRef {
nativeElement: HTMLElement;
}
@Component({...}) export class MyComponent {
constructor(@Inject(ErrorProneElementRef) readonly elementRef: ElementRef) { }
}
2
• 1
This explains a problem I was having. This doesn't work because it'll say item needs to be an ElementRef, even though you're setting it to another ElementRef: let item:ElementRef, item2:ElementRef; item = item2; // no can do. . Very confusing. But this is fine: let item:ElementRef, item2:ElementRef; item = item2.nativeElement because of the implementation you pointed out.
– oooyaya
Mar 5 '17 at 3:24
• 1
Actually your first example let item: ElementRef, item2: ElementRef; item = item2 fails because of definite assignment analysis. Your second fails for the same reasons but both succeed if item2 is initialized for the reasons discussed (or as a useful quick check for assignability we can use declare let here). Regardless, truly a shame to see any on a public API like this. Mar 5 '17 at 23:29
3
Mimimum example for quick usage:
import { Component, ElementRef, ViewChild} from '@angular/core';
@Component({
selector: 'my-app',
template:
`
<input #inputEl value="hithere">
`,
styleUrls: [ './app.component.css' ]
})
export class AppComponent {
@ViewChild('inputEl') inputEl:ElementRef;
ngAfterViewInit() {
console.log(this.inputEl);
}
}
1. Put a template reference variable on the DOM element of interest. In our example this is the #inputEl on the <input> tag.
2. In our component class inject the DOM element via the @ViewChild decorator
3. Access the element in the ngAfterViewInit lifecycle hook.
Note:
If you want to manipulate the DOM elements use the Renderer2 API instead of accessing the elements directly. Permitting direct access to the DOM can make your application more vulnerable to XSS attacks
3
i have use two way :
First way :
You can get a handle to the DOM element via ElementRef by injecting it into your component's constructor:
constructor(private myElement: ElementRef) {
this.myElement.nativeElement // <- your direct element reference
}
Second way:
@Component({
selector: 'my-app',
template:
`
<input #input value="enterThere">
`,
styleUrls: [ './app.component.css' ]
})
export class AppComponent {
@ViewChild('input') input:ElementRef;
ngAfterViewInit() {
console.log(this.input);
}
2
to get the immediate next sibling ,use this
event.source._elementRef.nativeElement.nextElementSibling
1
Selecting target element from the list. It is easy to select particular element from the list of same elements.
component code:
export class AppComponent {
title = 'app';
listEvents = [
{'name':'item1', 'class': ''}, {'name':'item2', 'class': ''},
{'name':'item3', 'class': ''}, {'name':'item4', 'class': ''}
];
selectElement(item: string, value: number) {
console.log("item="+item+" value="+value);
if(this.listEvents[value].class == "") {
this.listEvents[value].class='selected';
} else {
this.listEvents[value].class= '';
}
}
}
html code:
<ul *ngFor="let event of listEvents; let i = index">
<li (click)="selectElement(event.name, i)" [class]="event.class">
{{ event.name }}
</li>
css code:
.selected {
color: red;
background:blue;
}
0
In case you are using Angular Material, you can take advantage of cdkFocusInitial directive.
Example: <input matInput cdkFocusInitial>
Read more here: https://material.angular.io/cdk/a11y/overview#regions
0
For components inside *ngIf, another approach:
The component I wanted to select was inside a div's *ngIf statement, and @jsgoupil's answer above probably works (Thanks @jsgoupil!), but I ended up finding a way to avoid using *ngIf, by using CSS to hide the element.
When the condition in the [className] is true, the div gets displayed, and naming the component using # works and it can be selected from within the typescript code. When the condition is false, it's not displayed, and I don't need to select it anyway.
Component:
@Component({
selector: 'bla',
templateUrl: 'bla.component.html',
styleUrls: ['bla.component.scss']
})
export class BlaComponent implements OnInit, OnDestroy {
@ViewChild('myComponentWidget', {static: true}) public myComponentWidget: any;
@Input('action') action: ActionType; // an enum defined in our code. (action could also be declared locally)
constructor() {
etc;
}
// this lets you use an enum in the HMTL (ActionType.SomeType)
public get actionTypeEnum(): typeOf ActionType {
return ActionType;
}
public someMethodXYZ: void {
this.myComponentWidget.someMethod(); // use it like that, assuming the method exists
}
and then in the bla.component.html file:
<div [className]="action === actionTypeEnum.SomeType ? 'show-it' : 'do-not-show'">
<my-component #myComponentWidget etc></my-component>
</div>
<div>
<button type="reset" class="bunch-of-classes" (click)="someMethodXYZ()">
<span>XYZ</span>
</button>
</div>
and the CSS file:
::ng-deep {
.show-it {
display: block; // example, actually a lot more css in our code
}
.do-not-show {
display: none';
}
}
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.942631 |
Categories
Application Security IT Security Outsourced IT
Email Encryption
What is Email Encryption?
Email encryption is an authentication process that disguises the contents of messages so that only the intended recipients can access and read them. This is done by scrambling plain text so that the email can only be read by an authorized recipient with a private key. With Public Key Infrastructure (PKI), a sender can use a public key to encrypt the message and then the private key is used to decrypt it. Email encryption is important as it protects sensitive data, prevents data breaches, and helps organizations with regulatory compliance with laws and regulations like GDPR, CCPA, HIPAA, and GLBA. Email encryption types include:
Pretty Good Privacy (PGP). PGP is a security program that encrypts and decrypts email messages using cryptographic authentication and digital signatures to facilitate secure online communication. Encryption techniques used include a combination of cryptography, data compression, symmetric and asymmetric key technology, and other hashing techniques. PGP also uses PKI.
Transport Layer Security (TLS). TLS is a cryptographic protocol that enables messages to pass over a computer network securely. It is commonly used for email, instant messaging, and VoIP. A common form of TLS is STARTTLS which is a command that converts plaintext messages to encrypted communications while the messages are in transit.
Secure Multi-Purpose Internet Mail Extension (S/MIME). S/MIME is an Internet Engineering Taskforce (IETF) stand used to deliver public-key encryption and digital signatures. It is similar to PGP but requires user to obtain keys from a specified Certificate Authority (CA).
AES 256-bit encryption. AES 256-bit encryption is a method that applies the same key for both encryption and decryption. The key is large and difficult to crack.
The following instructions can help you to encrypt your outgoing email:
Outlook
1. Enable S/MIME encryption. This process involves getting a certificate or digital ID from your organization’s administrator and installing S/MIME control. Follow Office’s steps for setting up to use S/MIME encryption.
2. Encrypt all messages or digitally sign all messages by going to the gear menu and clicking S/MIME settings. Choose to either encrypt contents and attachments of all messages or add a digital signature to all messages.
3. Encrypt or remove individual messages by selecting more options (three dots) at the top of a message and choosing message options. Select or deselect “Encrypt this message (S/MIME).” If the person you are sending a message to doesn’t have S/MIME enabled, you’ll want to deselect the box or else the recipient will not be able to read your message.
iOS
1. Go to advanced settings and switch S/MIME on.
2. Change “Encrypt by Default” to yes.
3. When you compose a message and lock icon will appear next to the recipient. Click the lock icon so it’s closed to encrypt the email.
Take note: If the lock is blue, the email can be encrypted. If the lock is red, the recipient needs to turn on their S/MIME setting.
Gmail
1. Enable hosted S/MIME. You can enable this setting by following Google’s instructions on enabling hosted S/MIME.
2. Compose your message as you normally would.
3. Click on the lock icon to the right of the recipient.
4. Click on “view details” to change the S/MIME settings or level of encryption.
Take note: The following color codes are used to indicate encryption levels:
• Green- Information is protected by S/MIME encryption and can only be decrypted with a private key.
• Gray- The email is protected with TLS (Transport Layer Security). This only works if both the sender and recipient have TLS capabilities.
• Red- The email has no encryption security.
|
__label__pos
| 0.899261 |
Clean code in very long prompts
I have code with more than 4096 characters to clean unnecessary code . How I can make that prompt to chatgpt?
1 Like
Break up your code block into smaller modules / methods :slight_smile:
Go modular :slight_smile:
Hi @danielf1692
It is a basic software engineering / programming task to manually review large code blocks and reduce the code block sizes to a collection of smaller size modules / subroutines / methods.
What programming language are you working on?
See Also, examples:
1 Like
Is HTML with css javascript and SVG. Javascript are are in external files separarated from html + SVG. HTM and SVG are in one html file. The stress is that SVG occupies a a lot of characters
Well, I agree; but you can simply do something like create references to external CSS, JS and IMAGE files and not include them directly in the ChatGPT validation.
Normally, when I write HTML I always host the CSS and JS files (and images, etc) somewhere outside of the HTML and just reference them in the HTML file. This approach is easier to maintain over time, debug, and make changes as well :slight_smile:
See, for example:
|
__label__pos
| 0.999938 |
src/Pure/Thy/present.ML
author wenzelm
Thu, 04 Oct 2007 14:42:11 +0200
changeset 24829 e1214fa781ca
parent 24561 7b4aa14d2491
child 26323 73efc70edeef
permissions -rw-r--r--
avoid gensym;
(* Title: Pure/Thy/present.ML
ID: $Id$
Author: Markus Wenzel and Stefan Berghofer, TU Muenchen
Theory presentation: HTML, graph files, (PDF)LaTeX documents.
*)
signature BASIC_PRESENT =
sig
val no_document: ('a -> 'b) -> 'a -> 'b
val section: string -> unit
end;
signature PRESENT =
sig
include BASIC_PRESENT
val session_name: theory -> string
val write_graph: {name: string, ID: string, dir: string, unfold: bool,
path: string, parents: string list} list -> Path.T -> unit
val display_graph: {name: string, ID: string, dir: string, unfold: bool,
path: string, parents: string list} list -> unit
val init: bool -> bool -> string -> bool -> string list -> string list ->
string -> (bool * Path.T) option -> Url.T option * bool -> bool -> theory list -> unit
val finish: unit -> unit
val init_theory: string -> unit
val verbatim_source: string -> (unit -> Symbol.symbol list) -> unit
val theory_output: string -> string -> unit
val begin_theory: int -> Path.T -> (Path.T * bool) list -> theory -> theory
val add_hook: (string -> (string * thm list) list -> unit) -> unit
val results: string -> (string * thm list) list -> unit
val theorem: string -> thm -> unit
val theorems: string -> thm list -> unit
val chapter: string -> unit
val subsection: string -> unit
val subsubsection: string -> unit
val drafts: string -> Path.T list -> Path.T
end;
structure Present: PRESENT =
struct
(** paths **)
val output_path = Path.variable "ISABELLE_BROWSER_INFO";
val tex_ext = Path.ext "tex";
val tex_path = tex_ext o Path.basic;
val html_ext = Path.ext "html";
val html_path = html_ext o Path.basic;
val index_path = Path.basic "index.html";
val readme_html_path = Path.basic "README.html";
val readme_path = Path.basic "README";
val documentN = "document";
val document_path = Path.basic documentN;
val doc_indexN = "session";
val graph_path = Path.basic "session.graph";
val graph_pdf_path = Path.basic "session_graph.pdf";
val graph_eps_path = Path.basic "session_graph.eps";
val session_path = Path.basic ".session";
val session_entries_path = Path.explode ".session/entries";
val pre_index_path = Path.explode ".session/pre-index";
fun mk_rel_path [] ys = Path.make ys
| mk_rel_path xs [] = Path.appends (replicate (length xs) Path.parent)
| mk_rel_path (ps as x :: xs) (qs as y :: ys) = if x = y then mk_rel_path xs ys else
Path.appends (replicate (length ps) Path.parent @ [Path.make qs]);
fun show_path path = Path.implode (Path.append (File.pwd ()) path);
(** additional theory data **)
structure BrowserInfoData = TheoryDataFun
(
type T = {name: string, session: string list, is_local: bool};
val empty = {name = "", session = [], is_local = false}: T;
val copy = I;
fun extend _ = empty;
fun merge _ _ = empty;
);
fun get_info thy =
if member (op =) [Context.ProtoPureN, Context.PureN, Context.CPureN] (Context.theory_name thy)
then {name = Context.PureN, session = [], is_local = false}
else BrowserInfoData.get thy;
val session_name = #name o get_info;
(** graphs **)
type graph_node =
{name: string, ID: string, dir: string, unfold: bool,
path: string, parents: string list};
fun write_graph gr path =
File.write path (cat_lines (map (fn {name, ID, dir, unfold, path, parents} =>
"\"" ^ name ^ "\" \"" ^ ID ^ "\" \"" ^ dir ^ (if unfold then "\" + \"" else "\" \"") ^
path ^ "\" > " ^ space_implode " " (map Library.quote parents) ^ " ;") gr));
fun display_graph gr =
let
val _ = writeln "Displaying graph ...";
val path = File.tmp_path (Path.explode "tmp.graph");
val _ = write_graph gr path;
val _ = File.isatool ("browser -c " ^ File.shell_path path ^ " &");
in () end;
fun ID_of sess s = space_implode "/" (sess @ [s]);
fun ID_of_thy thy = ID_of (#session (get_info thy)) (Context.theory_name thy);
(*retrieve graph data from initial collection of theories*)
fun init_graph remote_path curr_sess = rev o map (fn thy =>
let
val name = Context.theory_name thy;
val {name = sess_name, session, is_local} = get_info thy;
val path' = Path.append (Path.make session) (html_path name);
val entry =
{name = name, ID = ID_of session name, dir = sess_name,
path =
if null session then "" else
if is_some remote_path andalso not is_local then
Url.implode (Url.append (the remote_path) (Url.File
(Path.append (Path.make session) (html_path name))))
else Path.implode (Path.append (mk_rel_path curr_sess session) (html_path name)),
unfold = false,
parents = map ID_of_thy (Theory.parents_of thy)};
in (0, entry) end);
fun ins_graph_entry (i, entry as {ID, ...}) (gr: (int * graph_node) list) =
(i, entry) :: filter_out (fn (_, entry') => #ID entry' = ID) gr;
(** global browser info state **)
(* type theory_info *)
type theory_info = {tex_source: Buffer.T, html_source: Buffer.T, html: Buffer.T};
fun make_theory_info (tex_source, html_source, html) =
{tex_source = tex_source, html_source = html_source, html = html}: theory_info;
val empty_theory_info = make_theory_info (Buffer.empty, Buffer.empty, Buffer.empty);
fun map_theory_info f {tex_source, html_source, html} =
make_theory_info (f (tex_source, html_source, html));
(* type browser_info *)
type browser_info = {theories: theory_info Symtab.table, files: (Path.T * string) list,
tex_index: (int * string) list, html_index: (int * string) list, graph: (int * graph_node) list};
fun make_browser_info (theories, files, tex_index, html_index, graph) =
{theories = theories, files = files, tex_index = tex_index, html_index = html_index,
graph = graph}: browser_info;
val empty_browser_info = make_browser_info (Symtab.empty, [], [], [], []);
fun init_browser_info remote_path curr_sess thys = make_browser_info
(Symtab.empty, [], [], [], init_graph remote_path curr_sess thys);
fun map_browser_info f {theories, files, tex_index, html_index, graph} =
make_browser_info (f (theories, files, tex_index, html_index, graph));
(* state *)
val browser_info = ref empty_browser_info;
fun change_browser_info f = CRITICAL (fn () => change browser_info (map_browser_info f));
val suppress_tex_source = ref false;
fun no_document f x = setmp_noncritical suppress_tex_source true f x;
fun init_theory_info name info =
change_browser_info (fn (theories, files, tex_index, html_index, graph) =>
(Symtab.update (name, info) theories, files, tex_index, html_index, graph));
fun change_theory_info name f =
change_browser_info (fn (info as (theories, files, tex_index, html_index, graph)) =>
(case Symtab.lookup theories name of
NONE => (warning ("Browser info: cannot access theory document " ^ quote name); info)
| SOME info => (Symtab.update (name, map_theory_info f info) theories, files,
tex_index, html_index, graph)));
fun add_file file =
change_browser_info (fn (theories, files, tex_index, html_index, graph) =>
(theories, file :: files, tex_index, html_index, graph));
fun add_tex_index txt =
change_browser_info (fn (theories, files, tex_index, html_index, graph) =>
(theories, files, txt :: tex_index, html_index, graph));
fun add_html_index txt =
change_browser_info (fn (theories, files, tex_index, html_index, graph) =>
(theories, files, tex_index, txt :: html_index, graph));
fun add_graph_entry entry =
change_browser_info (fn (theories, files, tex_index, html_index, graph) =>
(theories, files, tex_index, html_index, ins_graph_entry entry graph));
fun add_tex_source name txt =
if ! suppress_tex_source then ()
else change_theory_info name (fn (tex_source, html_source, html) =>
(Buffer.add txt tex_source, html_source, html));
fun add_html_source name txt = change_theory_info name (fn (tex_source, html_source, html) =>
(tex_source, Buffer.add txt html_source, html));
fun add_html name txt = change_theory_info name (fn (tex_source, html_source, html) =>
(tex_source, html_source, Buffer.add txt html));
(** global session state **)
(* session_info *)
type session_info =
{name: string, parent: string, session: string, path: string list, html_prefix: Path.T,
info: bool, doc_format: string, doc_graph: bool, documents: (string * string) list,
doc_prefix1: (Path.T * Path.T) option, doc_prefix2: (bool * Path.T) option,
remote_path: Url.T option, verbose: bool, readme: Path.T option};
fun make_session_info
(name, parent, session, path, html_prefix, info, doc_format, doc_graph, documents,
doc_prefix1, doc_prefix2, remote_path, verbose, readme) =
{name = name, parent = parent, session = session, path = path, html_prefix = html_prefix,
info = info, doc_format = doc_format, doc_graph = doc_graph, documents = documents,
doc_prefix1 = doc_prefix1, doc_prefix2 = doc_prefix2, remote_path = remote_path,
verbose = verbose, readme = readme}: session_info;
(* state *)
val session_info = ref (NONE: session_info option);
fun with_session x f = (case ! session_info of NONE => x | SOME info => f info);
fun with_context f = f (Context.theory_name (ML_Context.the_context ()));
(** document preparation **)
(* maintain index *)
val session_entries =
HTML.session_entries o
map (fn name => (Url.File (Path.append (Path.basic name) index_path), name));
fun get_entries dir =
split_lines (File.read (Path.append dir session_entries_path));
fun put_entries entries dir =
File.write (Path.append dir session_entries_path) (cat_lines entries);
fun create_index dir =
File.read (Path.append dir pre_index_path) ^
session_entries (get_entries dir) ^ HTML.end_index
|> File.write (Path.append dir index_path);
fun update_index dir name = CRITICAL (fn () =>
(case try get_entries dir of
NONE => warning ("Browser info: cannot access session index of " ^ quote (Path.implode dir))
| SOME es => (put_entries ((remove (op =) name es) @ [name]) dir; create_index dir)));
(* document versions *)
fun read_version str =
(case space_explode "=" str of
[name] => (name, "")
| [name, tags] => (name, tags)
| _ => error ("Malformed document version specification: " ^ quote str));
fun read_versions strs =
rev (distinct (eq_fst (op =)) (rev ((documentN, "") :: map read_version strs)))
|> filter_out (equal "-" o #2);
(* init session *)
fun name_of_session elems = space_implode "/" ("Isabelle" :: elems);
fun init build info doc doc_graph doc_versions path name doc_prefix2
(remote_path, first_time) verbose thys = CRITICAL (fn () =>
if not build andalso not info andalso doc = "" andalso is_none doc_prefix2 then
(browser_info := empty_browser_info; session_info := NONE)
else
let
val parent_name = name_of_session (Library.take (length path - 1, path));
val session_name = name_of_session path;
val sess_prefix = Path.make path;
val html_prefix = Path.append (Path.expand output_path) sess_prefix;
val (doc_prefix1, documents) =
if doc = "" then (NONE, [])
else if not (File.exists document_path) then
(if verbose then Output.std_error "Warning: missing document directory\n" else ();
(NONE, []))
else (SOME (Path.append html_prefix document_path, html_prefix),
read_versions doc_versions);
val parent_index_path = Path.append Path.parent index_path;
val index_up_lnk = if first_time then
Url.append (the remote_path) (Url.File (Path.append sess_prefix parent_index_path))
else Url.File parent_index_path;
val readme =
if File.exists readme_html_path then SOME readme_html_path
else if File.exists readme_path then SOME readme_path
else NONE;
val docs =
(case readme of NONE => [] | SOME p => [(Url.File p, "README")]) @
map (fn (name, _) => (Url.File (Path.ext doc (Path.basic name)), name)) documents;
val index_text = HTML.begin_index (index_up_lnk, parent_name)
(Url.File index_path, session_name) docs (Url.explode "medium.html");
in
session_info := SOME (make_session_info (name, parent_name, session_name, path, html_prefix,
info, doc, doc_graph, documents, doc_prefix1, doc_prefix2, remote_path, verbose, readme));
browser_info := init_browser_info remote_path path thys;
add_html_index (0, index_text)
end);
(* isatool wrappers *)
fun isatool_document verbose format name tags path result_path =
let
val s = "\"$ISATOOL\" document -c -o '" ^ format ^ "' \
\-n '" ^ name ^ "' -t '" ^ tags ^ "' " ^ File.shell_path path ^
" 2>&1" ^ (if verbose then "" else " >/dev/null");
val doc_path = Path.append result_path (Path.ext format (Path.basic name));
in
if verbose then writeln s else ();
system s;
File.exists doc_path orelse
error ("No document: " ^ quote (show_path doc_path));
doc_path
end;
fun isatool_browser graph =
let
val pdf_path = File.tmp_path graph_pdf_path;
val eps_path = File.tmp_path graph_eps_path;
val gr_path = File.tmp_path graph_path;
val s = "browser -o " ^ File.shell_path pdf_path ^ " " ^ File.shell_path gr_path;
in
write_graph graph gr_path;
if File.isatool s <> 0 orelse not (File.exists eps_path) orelse not (File.exists pdf_path)
then error "Failed to prepare dependency graph"
else
let val pdf = File.read pdf_path and eps = File.read eps_path
in File.rm pdf_path; File.rm eps_path; File.rm gr_path; (pdf, eps) end
end;
(* finish session -- output all generated text *)
fun sorted_index index = map snd (sort (int_ord o pairself fst) (rev index));
fun index_buffer index = Buffer.add (implode (sorted_index index)) Buffer.empty;
fun write_tex src name path =
Buffer.write (Path.append path (tex_path name)) src;
fun write_tex_index tex_index path =
write_tex (index_buffer tex_index |> Buffer.add Latex.tex_trailer) doc_indexN path;
fun finish () = CRITICAL (fn () =>
with_session () (fn {name, info, html_prefix, doc_format, doc_graph,
documents, doc_prefix1, doc_prefix2, path, verbose, readme, ...} =>
let
val {theories, files, tex_index, html_index, graph} = ! browser_info;
val thys = Symtab.dest theories;
val parent_html_prefix = Path.append html_prefix Path.parent;
fun finish_tex path (a, {tex_source, ...}: theory_info) = write_tex tex_source a path;
fun finish_html (a, {html, ...}: theory_info) =
Buffer.write (Path.append html_prefix (html_path a)) (Buffer.add HTML.end_theory html);
val sorted_graph = sorted_index graph;
val opt_graphs =
if doc_graph andalso (is_some doc_prefix1 orelse is_some doc_prefix2) then
SOME (isatool_browser sorted_graph)
else NONE;
fun prepare_sources cp path =
(File.mkdir path;
if cp then File.copy_dir document_path path else ();
File.isatool ("latex -o sty " ^ File.shell_path (Path.append path (Path.basic "root.tex")));
(case opt_graphs of NONE => () | SOME (pdf, eps) =>
(File.write (Path.append path graph_pdf_path) pdf;
File.write (Path.append path graph_eps_path) eps));
write_tex_index tex_index path;
List.app (finish_tex path) thys);
in
if info then
(File.mkdir (Path.append html_prefix session_path);
Buffer.write (Path.append html_prefix pre_index_path) (index_buffer html_index);
File.write (Path.append html_prefix session_entries_path) "";
create_index html_prefix;
if length path > 1 then update_index parent_html_prefix name else ();
(case readme of NONE => () | SOME path => File.copy path html_prefix);
write_graph sorted_graph (Path.append html_prefix graph_path);
File.copy (Path.explode "~~/lib/browser/GraphBrowser.jar") html_prefix;
List.app (fn (a, txt) => File.write (Path.append html_prefix (Path.basic a)) txt)
(HTML.applet_pages name (Url.File index_path, name));
File.copy (Path.explode "~~/lib/html/isabelle.css") html_prefix;
List.app finish_html thys;
List.app (uncurry File.write) files;
if verbose then Output.std_error ("Browser info at " ^ show_path html_prefix ^ "\n") else ())
else ();
(case doc_prefix2 of NONE => () | SOME (cp, path) =>
(prepare_sources cp path;
if verbose then Output.std_error ("Document sources at " ^ show_path path ^ "\n") else ()));
(case doc_prefix1 of NONE => () | SOME (path, result_path) =>
documents |> List.app (fn (name, tags) =>
let
val _ = prepare_sources true path;
val doc_path = isatool_document true doc_format name tags path result_path;
in
if verbose then Output.std_error ("Document at " ^ show_path doc_path ^ "\n") else ()
end));
browser_info := empty_browser_info;
session_info := NONE
end));
(* theory elements *)
fun init_theory name = with_session () (fn _ => init_theory_info name empty_theory_info);
fun verbatim_source name mk_text =
with_session () (fn _ => add_html_source name (HTML.verbatim_source (mk_text ())));
fun theory_output name s =
with_session () (fn _ => add_tex_source name (Latex.isabelle_file name s));
fun parent_link remote_path curr_session thy =
let
val {name = _, session, is_local} = get_info thy;
val name = Context.theory_name thy;
val link =
if null session then NONE
else SOME
(if is_some remote_path andalso not is_local then
Url.append (the remote_path) (Url.File (Path.append (Path.make session) (html_path name)))
else Url.File (Path.append (mk_rel_path curr_session session) (html_path name)));
in (link, name) end;
fun begin_theory update_time dir orig_files thy =
with_session thy (fn {name = sess_name, session, path, html_prefix, remote_path, ...} =>
let
val name = Context.theory_name thy;
val parents = Theory.parents_of thy;
val parent_specs = map (parent_link remote_path path) parents;
val ml_path = ThyLoad.ml_path name;
val files = map (apsnd SOME) orig_files @
(if is_some (ThyLoad.check_file dir ml_path) then [(ml_path, NONE)] else []);
fun prep_file (raw_path, loadit) =
(case ThyLoad.check_ml dir raw_path of
SOME (path, _) =>
let
val base = Path.base path;
val base_html = html_ext base;
in
add_file (Path.append html_prefix base_html,
HTML.ml_file (Url.File base) (File.read path));
(SOME (Url.File base_html), Url.File raw_path, loadit)
end
| NONE =>
(warning ("Browser info: expected to find ML file" ^ quote (Path.implode raw_path));
(NONE, Url.File raw_path, loadit)));
val files_html = map prep_file files;
fun prep_html_source (tex_source, html_source, html) =
let
val txt = HTML.begin_theory (Url.File index_path, session)
name parent_specs files_html (Buffer.content html_source)
in (tex_source, Buffer.empty, Buffer.add txt html) end;
val entry =
{name = name, ID = ID_of path name, dir = sess_name, unfold = true,
path = Path.implode (html_path name),
parents = map ID_of_thy parents};
in
change_theory_info name prep_html_source;
add_graph_entry (update_time, entry);
add_html_index (update_time, HTML.theory_entry (Url.File (html_path name), name));
add_tex_index (update_time, Latex.theory_entry name);
BrowserInfoData.put {name = sess_name, session = path, is_local = is_some remote_path} thy
end);
val hooks = ref ([]: (string -> (string * thm list) list -> unit) list);
fun add_hook f = CRITICAL (fn () => change hooks (cons f));
fun results k xs =
(List.app (fn f => (try (fn () => f k xs) (); ())) (! hooks);
with_session () (fn _ => with_context add_html
(HTML.results (ML_Context.the_local_context ()) k xs)));
fun theorem a th = results Thm.theoremK [(a, [th])];
fun theorems a ths = results Thm.theoremK [(a, ths)];
fun chapter s = with_session () (fn _ => with_context add_html (HTML.chapter s));
fun section s = with_session () (fn _ => with_context add_html (HTML.section s));
fun subsection s = with_session () (fn _ => with_context add_html (HTML.subsection s));
fun subsubsection s = with_session () (fn _ => with_context add_html (HTML.subsubsection s));
(** draft document output **)
fun drafts doc_format src_paths =
let
fun prep_draft path i =
let
val base = Path.base path;
val name =
(case Path.implode (#1 (Path.split_ext base)) of
"" => "DUMMY" ^ serial_string ()
| s => s);
in
if File.exists path then
(((name, base, File.read path), (i, Latex.theory_entry name)), i + 1)
else error ("Bad file: " ^ quote (Path.implode path))
end;
val (srcs, tex_index) = split_list (fst (fold_map prep_draft src_paths 0));
val doc_path = File.tmp_path document_path;
val result_path = Path.append doc_path Path.parent;
val _ = File.mkdir doc_path;
val root_path = Path.append doc_path (Path.basic "root.tex");
val _ = File.copy (Path.explode "~~/lib/texinputs/draft.tex") root_path;
val _ = File.isatool ("latex -o sty " ^ File.shell_path root_path);
val _ = File.isatool ("latex -o syms " ^ File.shell_path root_path);
fun known name =
let val ss = split_lines (File.read (Path.append doc_path (Path.basic name)))
in member (op =) ss end;
val known_syms = known "syms.lst";
val known_ctrls = known "ctrls.lst";
val _ = srcs |> List.app (fn (name, base, txt) =>
Symbol.explode txt
|> Latex.symbol_source (known_syms, known_ctrls) (Path.implode base)
|> File.write (Path.append doc_path (tex_path name)));
val _ = write_tex_index tex_index doc_path;
in isatool_document false doc_format documentN "" doc_path result_path end;
end;
structure BasicPresent: BASIC_PRESENT = Present;
open BasicPresent;
|
__label__pos
| 0.997675 |
src/test/SDL_test_fuzzer.c
author Sam Lantinga <[email protected]>
Tue, 26 May 2015 06:27:46 -0700
changeset 9619 b94b6d0bff0f
parent 8149 681eb46b8ac4
child 9635 564b57497f2b
permissions -rw-r--r--
Updated the copyright year to 2015
1 /*
2 Simple DirectMedia Layer
3 Copyright (C) 1997-2015 Sam Lantinga <[email protected]>
4
5 This software is provided 'as-is', without any express or implied
6 warranty. In no event will the authors be held liable for any damages
7 arising from the use of this software.
8
9 Permission is granted to anyone to use this software for any purpose,
10 including commercial applications, and to alter it and redistribute it
11 freely, subject to the following restrictions:
12
13 1. The origin of this software must not be misrepresented; you must not
14 claim that you wrote the original software. If you use this software
15 in a product, an acknowledgment in the product documentation would be
16 appreciated but is not required.
17 2. Altered source versions must be plainly marked as such, and must not be
18 misrepresented as being the original software.
19 3. This notice may not be removed or altered from any source distribution.
20 */
21
22 /*
23
24 Data generators for fuzzing test data in a reproducible way.
25
26 */
27
28 #include "SDL_config.h"
29
30 /* Visual Studio 2008 doesn't have stdint.h */
31 #if defined(_MSC_VER) && _MSC_VER <= 1500
32 #define UINT8_MAX ~(Uint8)0
33 #define UINT16_MAX ~(Uint16)0
34 #define UINT32_MAX ~(Uint32)0
35 #define UINT64_MAX ~(Uint64)0
36 #else
37 #include <stdint.h>
38 #endif
39 #include <stdio.h>
40 #include <stdlib.h>
41 #include <limits.h>
42 #include <float.h>
43
44 #include "SDL_test.h"
45
46 /**
47 * Counter for fuzzer invocations
48 */
49 static int fuzzerInvocationCounter = 0;
50
51 /**
52 * Context for shared random number generator
53 */
54 static SDLTest_RandomContext rndContext;
55
56 /*
57 * Note: doxygen documentation markup for functions is in the header file.
58 */
59
60 void
61 SDLTest_FuzzerInit(Uint64 execKey)
62 {
63 Uint32 a = (execKey >> 32) & 0x00000000FFFFFFFF;
64 Uint32 b = execKey & 0x00000000FFFFFFFF;
65 SDL_memset((void *)&rndContext, 0, sizeof(SDLTest_RandomContext));
66 SDLTest_RandomInit(&rndContext, a, b);
67 fuzzerInvocationCounter = 0;
68 }
69
70 int
71 SDLTest_GetFuzzerInvocationCount()
72 {
73 return fuzzerInvocationCounter;
74 }
75
76 Uint8
77 SDLTest_RandomUint8()
78 {
79 fuzzerInvocationCounter++;
80
81 return (Uint8) SDLTest_RandomInt(&rndContext) & 0x000000FF;
82 }
83
84 Sint8
85 SDLTest_RandomSint8()
86 {
87 fuzzerInvocationCounter++;
88
89 return (Sint8) SDLTest_RandomInt(&rndContext) & 0x000000FF;
90 }
91
92 Uint16
93 SDLTest_RandomUint16()
94 {
95 fuzzerInvocationCounter++;
96
97 return (Uint16) SDLTest_RandomInt(&rndContext) & 0x0000FFFF;
98 }
99
100 Sint16
101 SDLTest_RandomSint16()
102 {
103 fuzzerInvocationCounter++;
104
105 return (Sint16) SDLTest_RandomInt(&rndContext) & 0x0000FFFF;
106 }
107
108 Sint32
109 SDLTest_RandomSint32()
110 {
111 fuzzerInvocationCounter++;
112
113 return (Sint32) SDLTest_RandomInt(&rndContext);
114 }
115
116 Uint32
117 SDLTest_RandomUint32()
118 {
119 fuzzerInvocationCounter++;
120
121 return (Uint32) SDLTest_RandomInt(&rndContext);
122 }
123
124 Uint64
125 SDLTest_RandomUint64()
126 {
127 Uint64 value = 0;
128 Uint32 *vp = (void *)&value;
129
130 fuzzerInvocationCounter++;
131
132 vp[0] = SDLTest_RandomSint32();
133 vp[1] = SDLTest_RandomSint32();
134
135 return value;
136 }
137
138 Sint64
139 SDLTest_RandomSint64()
140 {
141 Uint64 value = 0;
142 Uint32 *vp = (void *)&value;
143
144 fuzzerInvocationCounter++;
145
146 vp[0] = SDLTest_RandomSint32();
147 vp[1] = SDLTest_RandomSint32();
148
149 return value;
150 }
151
152
153
154 Sint32
155 SDLTest_RandomIntegerInRange(Sint32 pMin, Sint32 pMax)
156 {
157 Sint64 min = pMin;
158 Sint64 max = pMax;
159 Sint64 temp;
160 Sint64 number;
161
162 if(pMin > pMax) {
163 temp = min;
164 min = max;
165 max = temp;
166 } else if(pMin == pMax) {
167 return (Sint32)min;
168 }
169
170 number = SDLTest_RandomUint32();
171 /* invocation count increment in preceeding call */
172
173 return (Sint32)((number % ((max + 1) - min)) + min);
174 }
175
176 /* !
177 * Generates a unsigned boundary value between the given boundaries.
178 * Boundary values are inclusive. See the examples below.
179 * If boundary2 < boundary1, the values are swapped.
180 * If boundary1 == boundary2, value of boundary1 will be returned
181 *
182 * Generating boundary values for Uint8:
183 * BoundaryValues(UINT8_MAX, 10, 20, True) -> [10,11,19,20]
184 * BoundaryValues(UINT8_MAX, 10, 20, False) -> [9,21]
185 * BoundaryValues(UINT8_MAX, 0, 15, True) -> [0, 1, 14, 15]
186 * BoundaryValues(UINT8_MAX, 0, 15, False) -> [16]
187 * BoundaryValues(UINT8_MAX, 0, 0xFF, False) -> [0], error set
188 *
189 * Generator works the same for other types of unsigned integers.
190 *
191 * \param maxValue The biggest value that is acceptable for this data type.
192 * For instance, for Uint8 -> 255, Uint16 -> 65536 etc.
193 * \param boundary1 defines lower boundary
194 * \param boundary2 defines upper boundary
195 * \param validDomain Generate only for valid domain (for the data type)
196 *
197 * \returns Returns a random boundary value for the domain or 0 in case of error
198 */
199 Uint64
200 SDLTest_GenerateUnsignedBoundaryValues(const Uint64 maxValue, Uint64 boundary1, Uint64 boundary2, SDL_bool validDomain)
201 {
202 Uint64 b1, b2;
203 Uint64 delta;
204 Uint64 tempBuf[4];
205 Uint8 index;
206
207 /* Maybe swap */
208 if (boundary1 > boundary2) {
209 b1 = boundary2;
210 b2 = boundary1;
211 } else {
212 b1 = boundary1;
213 b2 = boundary2;
214 }
215
216 index = 0;
217 if (validDomain == SDL_TRUE) {
218 if (b1 == b2) {
219 return b1;
220 }
221
222 /* Generate up to 4 values within bounds */
223 delta = b2 - b1;
224 if (delta < 4) {
225 do {
226 tempBuf[index] = b1 + index;
227 index++;
228 } while (index < delta);
229 } else {
230 tempBuf[index] = b1;
231 index++;
232 tempBuf[index] = b1 + 1;
233 index++;
234 tempBuf[index] = b2 - 1;
235 index++;
236 tempBuf[index] = b2;
237 index++;
238 }
239 } else {
240 /* Generate up to 2 values outside of bounds */
241 if (b1 > 0) {
242 tempBuf[index] = b1 - 1;
243 index++;
244 }
245
246 if (b2 < maxValue) {
247 tempBuf[index] = b2 + 1;
248 index++;
249 }
250 }
251
252 if (index == 0) {
253 /* There are no valid boundaries */
254 SDL_Unsupported();
255 return 0;
256 }
257
258 return tempBuf[SDLTest_RandomUint8() % index];
259 }
260
261
262 Uint8
263 SDLTest_RandomUint8BoundaryValue(Uint8 boundary1, Uint8 boundary2, SDL_bool validDomain)
264 {
265 /* max value for Uint8 */
266 const Uint64 maxValue = UCHAR_MAX;
267 return (Uint8)SDLTest_GenerateUnsignedBoundaryValues(maxValue,
268 (Uint64) boundary1, (Uint64) boundary2,
269 validDomain);
270 }
271
272 Uint16
273 SDLTest_RandomUint16BoundaryValue(Uint16 boundary1, Uint16 boundary2, SDL_bool validDomain)
274 {
275 /* max value for Uint16 */
276 const Uint64 maxValue = USHRT_MAX;
277 return (Uint16)SDLTest_GenerateUnsignedBoundaryValues(maxValue,
278 (Uint64) boundary1, (Uint64) boundary2,
279 validDomain);
280 }
281
282 Uint32
283 SDLTest_RandomUint32BoundaryValue(Uint32 boundary1, Uint32 boundary2, SDL_bool validDomain)
284 {
285 /* max value for Uint32 */
286 #if ((ULONG_MAX) == (UINT_MAX))
287 const Uint64 maxValue = ULONG_MAX;
288 #else
289 const Uint64 maxValue = UINT_MAX;
290 #endif
291 return (Uint32)SDLTest_GenerateUnsignedBoundaryValues(maxValue,
292 (Uint64) boundary1, (Uint64) boundary2,
293 validDomain);
294 }
295
296 Uint64
297 SDLTest_RandomUint64BoundaryValue(Uint64 boundary1, Uint64 boundary2, SDL_bool validDomain)
298 {
299 /* max value for Uint64 */
300 const Uint64 maxValue = ULLONG_MAX;
301 return SDLTest_GenerateUnsignedBoundaryValues(maxValue,
302 (Uint64) boundary1, (Uint64) boundary2,
303 validDomain);
304 }
305
306 /* !
307 * Generates a signed boundary value between the given boundaries.
308 * Boundary values are inclusive. See the examples below.
309 * If boundary2 < boundary1, the values are swapped.
310 * If boundary1 == boundary2, value of boundary1 will be returned
311 *
312 * Generating boundary values for Sint8:
313 * SignedBoundaryValues(SCHAR_MIN, SCHAR_MAX, -10, 20, True) -> [-10,-9,19,20]
314 * SignedBoundaryValues(SCHAR_MIN, SCHAR_MAX, -10, 20, False) -> [-11,21]
315 * SignedBoundaryValues(SCHAR_MIN, SCHAR_MAX, -30, -15, True) -> [-30, -29, -16, -15]
316 * SignedBoundaryValues(SCHAR_MIN, SCHAR_MAX, -127, 15, False) -> [16]
317 * SignedBoundaryValues(SCHAR_MIN, SCHAR_MAX, -127, 127, False) -> [0], error set
318 *
319 * Generator works the same for other types of signed integers.
320 *
321 * \param minValue The smallest value that is acceptable for this data type.
322 * For instance, for Uint8 -> -127, etc.
323 * \param maxValue The biggest value that is acceptable for this data type.
324 * For instance, for Uint8 -> 127, etc.
325 * \param boundary1 defines lower boundary
326 * \param boundary2 defines upper boundary
327 * \param validDomain Generate only for valid domain (for the data type)
328 *
329 * \returns Returns a random boundary value for the domain or 0 in case of error
330 */
331 Sint64
332 SDLTest_GenerateSignedBoundaryValues(const Sint64 minValue, const Sint64 maxValue, Sint64 boundary1, Sint64 boundary2, SDL_bool validDomain)
333 {
334 Sint64 b1, b2;
335 Sint64 delta;
336 Sint64 tempBuf[4];
337 Uint8 index;
338
339 /* Maybe swap */
340 if (boundary1 > boundary2) {
341 b1 = boundary2;
342 b2 = boundary1;
343 } else {
344 b1 = boundary1;
345 b2 = boundary2;
346 }
347
348 index = 0;
349 if (validDomain == SDL_TRUE) {
350 if (b1 == b2) {
351 return b1;
352 }
353
354 /* Generate up to 4 values within bounds */
355 delta = b2 - b1;
356 if (delta < 4) {
357 do {
358 tempBuf[index] = b1 + index;
359 index++;
360 } while (index < delta);
361 } else {
362 tempBuf[index] = b1;
363 index++;
364 tempBuf[index] = b1 + 1;
365 index++;
366 tempBuf[index] = b2 - 1;
367 index++;
368 tempBuf[index] = b2;
369 index++;
370 }
371 } else {
372 /* Generate up to 2 values outside of bounds */
373 if (b1 > minValue) {
374 tempBuf[index] = b1 - 1;
375 index++;
376 }
377
378 if (b2 < maxValue) {
379 tempBuf[index] = b2 + 1;
380 index++;
381 }
382 }
383
384 if (index == 0) {
385 /* There are no valid boundaries */
386 SDL_Unsupported();
387 return minValue;
388 }
389
390 return tempBuf[SDLTest_RandomUint8() % index];
391 }
392
393
394 Sint8
395 SDLTest_RandomSint8BoundaryValue(Sint8 boundary1, Sint8 boundary2, SDL_bool validDomain)
396 {
397 /* min & max values for Sint8 */
398 const Sint64 maxValue = SCHAR_MAX;
399 const Sint64 minValue = SCHAR_MIN;
400 return (Sint8)SDLTest_GenerateSignedBoundaryValues(minValue, maxValue,
401 (Sint64) boundary1, (Sint64) boundary2,
402 validDomain);
403 }
404
405 Sint16
406 SDLTest_RandomSint16BoundaryValue(Sint16 boundary1, Sint16 boundary2, SDL_bool validDomain)
407 {
408 /* min & max values for Sint16 */
409 const Sint64 maxValue = SHRT_MAX;
410 const Sint64 minValue = SHRT_MIN;
411 return (Sint16)SDLTest_GenerateSignedBoundaryValues(minValue, maxValue,
412 (Sint64) boundary1, (Sint64) boundary2,
413 validDomain);
414 }
415
416 Sint32
417 SDLTest_RandomSint32BoundaryValue(Sint32 boundary1, Sint32 boundary2, SDL_bool validDomain)
418 {
419 /* min & max values for Sint32 */
420 #if ((ULONG_MAX) == (UINT_MAX))
421 const Sint64 maxValue = LONG_MAX;
422 const Sint64 minValue = LONG_MIN;
423 #else
424 const Sint64 maxValue = INT_MAX;
425 const Sint64 minValue = INT_MIN;
426 #endif
427 return (Sint32)SDLTest_GenerateSignedBoundaryValues(minValue, maxValue,
428 (Sint64) boundary1, (Sint64) boundary2,
429 validDomain);
430 }
431
432 Sint64
433 SDLTest_RandomSint64BoundaryValue(Sint64 boundary1, Sint64 boundary2, SDL_bool validDomain)
434 {
435 /* min & max values for Sint64 */
436 const Sint64 maxValue = LLONG_MAX;
437 const Sint64 minValue = LLONG_MIN;
438 return SDLTest_GenerateSignedBoundaryValues(minValue, maxValue,
439 boundary1, boundary2,
440 validDomain);
441 }
442
443 float
444 SDLTest_RandomUnitFloat()
445 {
446 return (float) SDLTest_RandomUint32() / UINT_MAX;
447 }
448
449 float
450 SDLTest_RandomFloat()
451 {
452 return (float) (SDLTest_RandomUnitDouble() * (double)2.0 * (double)FLT_MAX - (double)(FLT_MAX));
453 }
454
455 double
456 SDLTest_RandomUnitDouble()
457 {
458 return (double) (SDLTest_RandomUint64() >> 11) * (1.0/9007199254740992.0);
459 }
460
461 double
462 SDLTest_RandomDouble()
463 {
464 double r = 0.0;
465 double s = 1.0;
466 do {
467 s /= UINT_MAX + 1.0;
468 r += (double)SDLTest_RandomInt(&rndContext) * s;
469 } while (s > DBL_EPSILON);
470
471 fuzzerInvocationCounter++;
472
473 return r;
474 }
475
476
477 char *
478 SDLTest_RandomAsciiString()
479 {
480 return SDLTest_RandomAsciiStringWithMaximumLength(255);
481 }
482
483 char *
484 SDLTest_RandomAsciiStringWithMaximumLength(int maxLength)
485 {
486 int size;
487
488 if(maxLength < 1) {
489 SDL_InvalidParamError("maxLength");
490 return NULL;
491 }
492
493 size = (SDLTest_RandomUint32() % (maxLength + 1));
494
495 return SDLTest_RandomAsciiStringOfSize(size);
496 }
497
498 char *
499 SDLTest_RandomAsciiStringOfSize(int size)
500 {
501 char *string;
502 int counter;
503
504
505 if(size < 1) {
506 SDL_InvalidParamError("size");
507 return NULL;
508 }
509
510 string = (char *)SDL_malloc((size + 1) * sizeof(char));
511 if (string==NULL) {
512 return NULL;
513 }
514
515 for(counter = 0; counter < size; ++counter) {
516 string[counter] = (char)SDLTest_RandomIntegerInRange(32, 126);
517 }
518
519 string[counter] = '\0';
520
521 fuzzerInvocationCounter++;
522
523 return string;
524 }
|
__label__pos
| 0.993834 |
-1
I have created an ObjectOutputStream
ObjectOutputStream stream = new ObjectOutputStream(new ByteArrayOutputStream());
stream.writeObject(myObject);
but how do I now convert this back into an Object, or even a ByteArray?
I've tried getting an ObjectInputStream like this
ByteArrayOutputStream outputStream = (ByteArrayOutputStream) myProcess.getOutputStream();
final ObjectInputStream objectInputStream = new ObjectInputStream(
new ByteArrayInputStream(outputStream.toByteArray()));
however I get a compile error saying it can't cast the ObjectOutputStream to a ByteArrayOutputStream; yet there seem to be no methods on the ObjectOutputStream to get the data back?
1
Here how you do it
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream stream = new ObjectOutputStream(baos);
stream.writeObject(myObject);
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
ObjectInputStream inputStream = new ObjectInputStream(bais);
Object o = inputStream.readObject();
• Ah, yes, operating on the ByteArrayOutputStream instead of the ObjectOutputStream made the difference – Wayneio Apr 26 at 11:44
Your Answer
By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.999994 |
Stack Overflow is a community of 4.7 million programmers, just like you, helping each other.
Join them; it only takes a minute:
Sign up
Join the Stack Overflow community to:
1. Ask programming questions
2. Answer and help your peers
3. Get recognized for your expertise
So I'm using PHP to execute a mysql command to backup / restore a database. On the localhost it worked without problems because I had to pinpoint to the executable. But on the server it doesn't work. This is what I'm using as PHP code:
public function backup() {
$backup = $this->backups.'/'.$this->db_name.'_backup_'.date('Y').'_'.date('m').'_'.date('d').'.sql';
$cmd = "mysqldump --opt -h $this->db_host -p $this->db_pass -u $this->db_user $this->db_name > $backup";
try {
system($cmd);
return $this->encode('Error',false,'Backup Successfuly Complete');
} catch(PDOException $error) {
return $this->encode('Error',true,$error->getMessage());
}
}
public function restore($backup) {
$cmd = "mysql --opt -h $this->db_host -p $this->db_pass -u $this->db_user $this->db_name < $backup";
try {
exec($cmd);
return $this->encode('Error',false,'Restore successfuly complete');
} catch(PDOException $error) {
return $this->encode('Error',true,$error->getMessage());
}
}
Please make abstraction of any variable that is there, I'm looking to find out why isn't the command working on server.
Also how do I check with PHP if the command was actually executed properly ? Because with try method I always get a positive answer.
share|improve this question
Do you have the appropriate writing permissions in the server? – David Robles Mar 23 '12 at 11:48
Print the $cmd value and try to run it manually on the server and report the error message here.. – barsju Mar 23 '12 at 11:49
@barsju - You mean in phpMyAdmin ? I'll try to run it there and see what will happen. – rolandjitsu Mar 23 '12 at 11:55
@wanstein - I'm not sure how to find that out. I'm using Godaddy's Shared Servers ( Linux CentOS ), but I don't see why I wouldn't be able to backup my database. – rolandjitsu Mar 23 '12 at 11:56
up vote 1 down vote accepted
"Please make abstraction of any variable that is there" - anyway I hope you make good use of escapeshellarg()
mysqldump without an absolute path might not be found in the webserver process' context, e.g. because it simply isn't in its search-PATH (or isn't present at all).
You might want to try redirecting STDERR to STDOUT by appending 2>&1 to the command to get error messages.
share|improve this answer
Oh, I have found a solution here : serverfault.com/questions/109435/… , it was a matter of the way I sent the password, I had to wrap it in quotes. – rolandjitsu Mar 23 '12 at 12:10
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.703954 |
Technology making us dumber or smarter?
0
162
Technology making us dumber or smarter?
The cell phone in your grasp empowers you to record a video, alter it and send it around the globe. With your telephone, you can explore in urban areas, purchase a vehicle, track your fundamental signs and achieve a huge number of different undertakings. Thus?
Every one of those exercises used to request learning explicit aptitudes and getting the fundamental assets to do them. Making a film? In the first place, get a motion picture camera and the supporting innovations (film, lights, altering hardware). Second, figure out how to utilize them and contract a group. Third, shoot the motion picture. Fourth, create and alter the film. Fifth, make duplicates and convey them.
‘Is innovation making us more brilliant or more idiotic?’ was bantered on Point Taken from PBS.
Presently those undertakings are tackled by innovation. We need never again become familiar with the mind boggling subtleties when the cell phone developers have dealt with to such an extent. In any case, movie producers are currently more liberated to concentrate on their specialty, and it is simpler than at any other time to wind up a movie producer. Generally, innovation has made us exclusively more idiotic and independently more brilliant – and on the whole more astute. Innovation has made us ready to accomplish more while seeing less about what we are doing, and has expanded our reliance on others.
These are not late patterns, however part of the historical backdrop of innovation since the main people started to cultivate. In ongoing decades, three noteworthy changes have quickened the procedure, beginning with the expanding pace of people having some expertise specifically aptitudes. What’s more, we re-appropriate more abilities to innovative devices, similar to a motion picture making application on a cell phone, that soothe us of the test of adapting a lot of specialized information. Furthermore, a lot a bigger number of individuals approach innovation than before, enabling them to utilize these apparatuses substantially more promptly.What do you think that technology making us dumber or smarter?
Specialized knowledge
Specialization empowers us to end up exceptionally great at certain exercises, however that interest in learning – for instance, how to be an ER medical caretaker or PC coder – comes to the detriment of different aptitudes like how to develop your own sustenance or manufacture your very own sanctuary.
Adam Smith, who represented considerable authority in considering and composing. Adam Smith Business School, CC BY-SA
As Adam Smith noted in his 1776 “Abundance of Nations,” specialization empowers individuals to end up increasingly effective and beneficial at one lot of errands, however with an exchange off of expanded reliance on others for extra needs. In principle, everybody benefits.
Specialization has good and sober minded outcomes. Gifted laborers are bound to be utilized and procure more than their incompetent partners. One reason the United States won World War II was that draft sheets kept some prepared laborers, specialists and researchers taking a shot at the home front as opposed to sending them to battle. A talented machine apparatus administrator or oil-rig laborer contributed more to winning the war by remaining at home and adhering to a specific job than by going to the front with a rifle. It likewise implied other men (and a few ladies) wore outfits and had an a lot more prominent shot of passing on.
Making machines for whatever is left of us
Joining human aptitudes into a machine – called “blackboxing” on the grounds that it makes the tasks imperceptible to the client – enables more individuals to, for instance, take a circulatory strain estimation without contributing the time, assets and exertion into learning the abilities recently expected to utilize a pulse sleeve. Putting the mastery in the machine brings down the hindrances to passage for accomplishing something on the grounds that the individual does not have to know to such an extent. For instance, differentiate figuring out how to drive a vehicle with a manual versus a programmed transmission.
Innovation makes murdering simpler: the AK-47. U.S. Armed force/SPC Austin Berner
Large scale manufacturing of blackboxed advances empowers their across the board use. Cell phones and mechanized circulatory strain screens would be far less powerful if just thousands rather than a huge number of individuals could utilize them. Less joyfully, delivering a huge number of programmed rifles like AK-47s methods people can slaughter unmistakably more individuals unquestionably more effectively contrasted and increasingly crude weapons like blades.
All the more for all intents and purposes, we rely upon others to do what we can’t do at all or also. City tenants specifically rely upon immense, generally undetectable structures to give their capacity, evacuate their waste and guarantee sustenance and countless different things are accessible.
Overreliance on innovation is unsafe
A noteworthy drawback of expanded reliance on advancements is the expanded results if those advances break or vanish. Lewis Dartnell’s “The Knowledge” offers an awesome (and unnerving) investigation of how overcomers of a humankind annihilating apocaplyse could rescue and keep up 21st-century advancements.
More vital than you may might suspect: utilizing a sextant. U.S. Naval force/PM3 M. Jeremie Yoder
Only one case of many is that the U.S. Maritime Academy just continued preparing officers to explore by sextants. Truly the best way to decide a ship’s area adrift, this method is being instructed again both as a reinforcement on the off chance that cyberattackers meddle with GPS signals and to give pilots a superior vibe of what their PCs are doing.
How do individuals endure and flourish in this universe of expanding reliance and change? It’s difficult to be genuinely independent, however it is conceivable to become familiar with the innovations we use, to learn essential aptitudes of fixing and fixing them (clue: dependably check the associations and read the manual) and to discover individuals who find out about specific subjects. Along these lines the Internet’s tremendous abundance of data can build our reliance as well as decline it (obviously, distrust about online data is never an awful thought). Pondering what occurs if something turns out badly can be a helpful exercise in arranging or a plummet into fanatical stressing.
Independently, we depend more on our advances than any other time in recent memory – yet we can accomplish like never before previously. All things considered, innovation has made us more astute, progressively fit and increasingly profitable. What innovation has not done is make us savvier.So technology making us dumber or smarter?
LEAVE A REPLY
Please enter your comment!
Please enter your name here
|
__label__pos
| 0.639031 |
St.Antario St.Antario - 8 months ago 56
Java Question
Does creating a new File instance cause creating an empty file?
I read the
File
class javadoc. Here is what's written there:
Creates a new File instance by converting the given pathname string
into an abstract pathname. If the given string is the empty string,
then the result is the empty abstract pathname.
QUESTION: Is it guarantee that if file doesn't exist it won't create an empty file or it depends on the system? I tried it on RedHat linux and an empty file is created only after I create
OutputStream
.
It's not obvious from java to me.
Answer
Yes, it's guaranteed that the file won't be created by calling new File(). It'll be created if you call createNewFile().
The pattern might be:
File f = new File(filePathString);
if(f.exists() && !f.isDirectory()) {
// do something
} else {
f.createNewFile();
}
|
__label__pos
| 0.999904 |
Skip to content
JavaScript Array
• by
In JavaScript, an array is a data structure that holds a collection of elements, which can be of any data type, including other arrays. They allow for the storage and manipulation of data and are accessed using their index values.
const myArray = [element1, element2, ..., elementN];
This creates a new array with the given elements, separated by commas and enclosed in square brackets.
Access an element in an array using its index value within square brackets:
myArray[index];
To add elements to an array, you can use the push() method to add an element to the end of the array, or the unshift() method to add an element to the beginning of the array:
myArray.push(elementN+1);
myArray.unshift(element0);
You can remove elements from an array using the pop() method to remove and return the last element, or the shift() method to remove and return the first element:
const removedElement = myArray.pop();
const removedElement2 = myArray.shift();
If you want to iterate over an array, you can use the forEach() method to execute a function for each element in the array:
myArray.forEach(function(element) {
// Do something with element
});
Use the sort() method to sort Array
myArray.sort();
To filter an array, you can use the filter() method to create a new array with only the elements that meet a certain condition:
const filteredArray = myArray.filter(function(element) {
return element > 5;
});
JavaScript Array example
Simple example code includes creating, accessing, adding, and removing elements, iterating over arrays, sorting, and filtering.
<!DOCTYPE html>
<html>
<body>
<script>
// Create a new array
const myArray = [1, "two", true, ["nested", "array"]];
// Access an element
console.log(myArray[0]);
// Add a new element to the end of the array
myArray.push("new element");
console.log(myArray);
// Remove the first element from the array
myArray.shift();
console.log(myArray);
// Iterate over the array and log each element
myArray.forEach(function(element) {
console.log(element);
});
// Create a new array with numbers
const numArray = [3, 1, 4, 2];
// Sort the array in ascending order
numArray.sort();
console.log(numArray);
// Filter the array to get only the elements that are greater than 2
const filteredArray = numArray.filter(function(element) {
return element > 2;
});
console.log(filteredArray);
</script>
</body>
</html>
Output:
JavaScript Array
JavaScript provides a variety of built-in methods that can be used to manipulate arrays. Here are some of the most commonly used array methods:
MethodDescription
concat()Merges two or more arrays into a new array
push()Adds one or more elements to the end of an array and returns the new length
pop()Removes the last element from an array and returns that element
shift()Removes the first element from an array and returns that element
unshift()Adds one or more elements to the beginning of an array and returns the new length
slice()Returns a new array containing a portion of an existing array
splice()Changes the contents of an array by removing or replacing existing elements and/or adding new elements
forEach()Executes a provided function once for each array element
map()Creates a new array with the results of calling a provided function on every element in the original array
filter()Creates a new array with all elements that pass the test implemented by the provided function
reduce()Applies a function to each element in an array to reduce the array to a single value
sort()Sorts the elements of an array in place according to a specified sorting order
reverse()Reverses the order of the elements in an array
Do comment if you have any doubts or suggestions on this JS array topic.
Note: The All JS Examples codes are tested on the Firefox browser and the Chrome browser.
OS: Windows 10
Code: HTML 5 Version
Leave a Reply
Discover more from Tutorial
Subscribe now to keep reading and get access to the full archive.
Continue reading
|
__label__pos
| 0.970864 |
man-woman-discusion-h100.png kb-appbar.png support-staff-h100.png
FireUp Knowledge Applications - Training
FireUp your firm today with the "just in time" training
quick-links-w240.png
kb-concepts-w240.png
ka-applications-w350.png
Key References
Consortium for Service Innovation
Consortium for Service Innovation
Vision - We have moved from a business model where value came from physical products, tangible things, to a world where value comes from non-physical, intangible things such as knowledge, influence and relationships. We believe this shift in the source of value requires new models, processes and practices.
KCS - Knowledge-Centered Support
Knowledge-Centered Support is a knowledge management strategy for service and support organizations. It defines a set of principles and practices that enable organizations to improve service levels to customers, gain operational efficiencies, and increase the organization's value to their company.
The KCS Operational Model (Knowledge-Centered Support)
The goal of KCS is to solve a problem once . . .and use the solution often! Adoption of KCS has improved operational efficiency, employee moral, and customer satisfaction. This brief examines the need for a knowledge-centered strategy as well as the organizing principles of KCS and its benefits.
kb-cookbook-w240.png
Intro
kb-process-recipes-w240.png
Under construction.
kb-howto-recipes-w240.png
General
1. Open new Window or Tab
Page Creation & Layout
1. Create a New Page
2. Create two column page
3. Create Three column page
4. Create an Article formatted page
5. PDF Plugin for Imaging and Indexing
Text Formatting
1. Indented paragraph below bullet point
2. Provide a Link To a Ticket
3. Add Inquiry Form to a Page
4. Center Graphic with Description
5. Wrap Text around Graphic
6. Resize an Image with GIMP
7. Make an Image a Link
8. Embed YouTube video
Account Management
1. Change My Password
User Rights/Permissions Management
1. Assign a User to the Admin Role in a KB Application
2. Basics of Permissions for Knowledge Components
3. Administrator Permissions for Knowledge Components
4. Global Permission Settings for Knowledge Components
5. Page Specific Permissions for Knowledge Components
CategoryToComplete
Univ/CIE/KA (last edited 2015-03-06 18:11:26 by localhost)
|
__label__pos
| 0.947639 |
Skip to content
Instantly share code, notes, and snippets.
Embed
What would you like to do?
This is a simple script used calculate average spec files running time across multiple builds using Knapsack API. It requires that the KNAPSACK_API_TOKEN variable is set. It runs by default for the last 10 builds, but you can customize it passing the number of builds you want as an argument. Lastly, it returns a specs.txt file with the average t…
# frozen_string_literal: true
require 'httparty'
REQUESTED_BUILDS = ARGV[0].to_i || 10
SPEC_FILES = {}
def get(url, options = {})
headers = {
'cache-control': 'no-cache',
'KNAPSACK-PRO-TEST-SUITE-TOKEN': ENV['KNAPSACK_API_TOKEN']
}
response = HTTParty.get(url, headers: headers, query: options, format: :plain)
JSON.parse(response, symbolize_names: true)
end
def ci_build_list
@ci_build_list ||= get('https://api.knapsackpro.com/v1/builds').take(REQUESTED_BUILDS)
end
def ci_build(ci_build_token)
get("https://api.knapsackpro.com/v1/builds/#{ci_build_token}")
end
def build_data(ci_build_token)
ci_build(ci_build_token).dig(:build_subsets)&.each do |build_subset|
build_subset.dig(:test_files)&.each do |test_file|
spec_file_data(test_file)
end
end
end
def spec_file_data(test_file)
file_name = test_file.dig(:path)&.to_sym
file_execution_time = test_file.dig(:time_execution)&.to_f
return if file_name.nil? || file_execution_time.nil?
spec_file_data = SPEC_FILES.dig(file_name)
if spec_file_data.nil?
SPEC_FILES[file_name] = {
total_executions: 1,
total_execution_time: file_execution_time
}
else
spec_file_data[:total_execution_time] += file_execution_time
spec_file_data[:total_executions] += 1
end
end
def calculate_average_times
ci_build_list.each do |ci_build|
build_data(ci_build.dig(:id))
end
SPEC_FILES.each do |_key, file|
file_total_execution_time = file[:total_execution_time]
file_total_executions = file[:total_executions]
file[:average_execution_time] = file_total_execution_time / file_total_executions
end
end
def sorted_specs_by_time
SPEC_FILES.sort_by{ |_file_name, file_data| -file_data[:average_execution_time] }
end
def print_specs_info
calculate_average_times
File.open('specs.txt', 'w') do |file|
sorted_specs_by_time.each do |spec_data|
file.write("#{spec_data.first} - Average time: #{spec_data.last[:average_execution_time]}\n")
end
end
end
print_specs_info
@ArturT
This comment has been minimized.
Copy link
@ArturT ArturT commented May 28, 2018
Thanks for the script! Here is the article describing how to use Knapsack Pro API to fetch data. I added there the link to your gist :)
http://docs.knapsackpro.com/2018/how-to-export-test-suite-timing-data-from-knapsack-pro-api
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
You can’t perform that action at this time.
|
__label__pos
| 0.613376 |
[操作系统]Android 用Fragment创建一个选项卡
你的位置:首页 > 操作系统
[操作系统]Android 用Fragment创建一个选项卡
本文结合之前的动态创建fragment来进行一个实践,来实现用Fragment创建一个选项卡
本文地址:http://www.cnblogs.com/wuyudong/p/5898075.html ,转载请注明源地址。
项目布局
<LinearLayout ="http://schemas.android.com/apk/res/android" ="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" tools:context=".MainActivity" > <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal" > <TextView android:id="@+id/tab1" android:layout_width="0dip" android:layout_height="wrap_content" android:layout_weight="1" android:gravity="center" android:text="社会新闻" /> <TextView android:id="@+id/tab2" android:layout_width="0dip" android:layout_height="wrap_content" android:layout_weight="1" android:gravity="center" android:text="生活新闻" /> <TextView android:id="@+id/tab3" android:layout_width="0dip" android:layout_height="wrap_content" android:layout_weight="1" android:gravity="center" android:text="军事新闻" /> <TextView android:id="@+id/tab4" android:layout_width="0dip" android:layout_height="wrap_content" android:layout_weight="1" android:gravity="center" android:text="娱乐新闻" /> </LinearLayout> <LinearLayout android:id="@+id/content" android:layout_width="fill_parent" android:layout_height="fill_parent" > </LinearLayout></LinearLayout>
新建Fragment1.java~Fragment4.java,其中Fragment1.java中的代码如下:
public class Fragment1 extends Fragment { @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { return inflater.inflate(R.layout.fragment1, null); }}
其他几个文件的代码类似
新建fragment1.
<??><LinearLayout ="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:gravity="center" android:orientation="vertical" > <TextView android:id="@+id/textview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="社会新闻" android:textAppearance="?android:attr/textAppearanceLarge"/></LinearLayout>
其他几个文件的代码类似
MainActivity.java中的代码如下:
public class MainActivity extends Activity implements OnClickListener { private LinearLayout content; private TextView tv1, tv2, tv3, tv4; private FragmentManager fm; private FragmentTransaction ft; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); content = (LinearLayout) findViewById(R.id.content); tv1 = (TextView) findViewById(R.id.tab1); tv2 = (TextView) findViewById(R.id.tab2); tv3 = (TextView) findViewById(R.id.tab3); tv4 = (TextView) findViewById(R.id.tab4); tv1.setOnClickListener(this); tv2.setOnClickListener(this); tv3.setOnClickListener(this); tv4.setOnClickListener(this); fm = getFragmentManager(); ft = fm.beginTransaction(); ft.replace(R.id.content, new Fragment1()); // 默认情况下Fragment1 } @Override public void onClick(View v) { ft = fm.beginTransaction(); switch (v.getId()) { case R.id.tab1: ft.replace(R.id.content, new Fragment1()); break; case R.id.tab2: ft.replace(R.id.content, new Fragment2()); break; case R.id.tab3: ft.replace(R.id.content, new Fragment3()); break; case R.id.tab4: ft.replace(R.id.content, new Fragment4()); break; default: break; } ft.commit(); }}
运行项目后如下效果:
|
__label__pos
| 0.814829 |
Thursday, June 6, 2019
Raster Scan Display
Raster Scan Display
In raster scan display the beam is affected all over the screen one scan line at a time from top to bottom and then back to top. In this the refresh process is independent. Raster Scan Display mathematically smooths lines, polygon and boundaries. Its cost is low. Raster scan has ability to display areas filled with sailed colors or patterns. Raster display stores the display printers (such as line, characters sailed) in a refresh buffer in the form of pixel.
Architecture of a raster display.
It consists of display controller, central processing unit, refresh buffer, keyboard, mouse and the CRT. In raster scan display a special space of memory is dedicated to graphics only. This memory is called frame buffer. It holds a set of values for all the screen points. The store values are retrieving from frame buffer and display on the screen one row at a time. Each screen point referred to as a pixel. Each pixel on the screen are often specific by its row and column range. Thus, by specifying row and column number. We can specify the picture element position on the screen. The raster screen display system is the most common method of displaying images on the CRT screen. In this method, the horizontal & vertical deflection signals are generating to move the beam all over the screen in a pattern shown in figure. Picture def. in a memory area hold the set of intensity values for all screen point stored intensity value are then retrieve from the refresh buffer and “pointed” on the screen one row at a time. Each screen point is noted as a pixel. Home television sets and printers are examples of other system using raster scan methods.
Image result for raster scan display diagram
ADVANTAGES:
1. Realistic image
2. 2Million Different colors to be generated
3. Shadow scenes are possible
DISADVANTAGES:
1. Low resolution
1. Expensive
2. Electron beam directed to entire screen and only to that part of the screen where picture is to be draw.
In the raster scan approach the viewing screen is divided into a large no. of discrete phosphor picture elements, called pixels. The matrix of pixels constitutes the raster the no. of separate pixels in the raster display might typically range from 256*256 (total 65000) to 1024*1024 (total 1,000,000) each pixel on the screen can be made to glow with a different brightness. During operation an electromagnetic wave creates the image by sweeping along a horizontal line on the screen from left to right and provides the energy to the pixels in that line during the sweep. When the sweep of one line is completed, the electron beam moves to the next line below & proceeds in affixed pattern as in directed.
1 comment:
1. The professional team of Digit It works in a professional way. They have experience of 15 years. For offering a dependable digital embroidery and artwork, the company is the name of excellence. conversion vector
ReplyDelete
|
__label__pos
| 0.977511 |
Nodejs simple getting started tutorial (1): module mechanism _ node. js-js tutorial
Source: Internet
Author: User
This article mainly introduces the simple getting started course of Nodejs (1): module mechanism. This article describes the basic knowledge of the module, module loading, and Package content, if you need it, you can refer to the JavaScript specification (ECMAScript) without defining a complete set of standard libraries that can be applied to most programs. CommonJS provides a set of JavaScript standard library specifications. Node implements the CommonJS specification.
Module Basics
In Node, modules and files correspond one by one. We define a module:
The Code is as follows:
// Circle. js
Var PI = Math. PI;
// Export the function area
Exports. area = function (r ){
Return PI * r;
}
// Export the function circumference
Exports. circumference = function (r ){
Return 2 * PI * r;
};
Add the function to be exported to the exports object. The local variables of the module cannot be accessed externally (for example, the PI variable in the previous example ). Call require to load the module circle. js:
The Code is as follows:
Var circle = require ('./circle. js ');
Console. log ('the area of a circle of radius 4 is'
+ Circle. area (4 ));
It is mentioned that a module object exists in the module, indicating the module itself, and exports indicates the module attribute.
Module Loading
Node caches loaded modules to avoid overhead of re-loading:
The Code is as follows:
// Test. js
Console. log ("I'm here ");
Multiple Loading modules test. js
The Code is as follows:
// Output "I'm here" only once"
Require ('./test ');
Require ('./test ');
When the file to be loaded has no suffix, Node will try to add the suffix and load it:
1.. js (JavaScript source file)
2. node (C/C ++ extension module)
3. json (JSON file)
There are several main modules:
1. core modules. The core modules have been compiled into Node. We can find these core modules in the lib directory of the source code. Common core modules: net, http, and fs
2. File module. The file module is loaded through a relative or absolute path, for example, the circle. js shown above
3. Custom module. The custom module is located in the node_modules directory. The modules we install through npm are placed in the node_modules directory.
The core module is always loaded first. If there is a custom module http, the core module http will still be loaded while loading, not the custom module http. When loading a custom module, first find the node_modules directory under the current directory, then find the node_modules directory under the parent directory, and so on until the root directory.
When a module loaded by require is not a file but a directory, such a directory is called a package ). The package contains a file named package. json (package Description file), for example:
The Code is as follows:
{"Name": "some-library ",
"Main": "./lib/some-library.js? 1.1.23 "}
Main indicates the module to be loaded. If the main module is not specified in the package. json or package. json, Node will try to load index. js, index. node, and index. json.
When loading the JavaScript module, the loaded module is wrapped in a function:
The Code is as follows:
Function (module, exports, _ filename, _ dirname ,...){
JavaScript module
}
The module, exports, _ filename, and _ dirname accessed by each JavaScript module are actually transmitted through function parameters. Because of this package, the module's local variables cannot be accessed externally. However, sometimes there are hard-to-understand problems, such:
Test1.js
The Code is as follows:
Exports = {
Name: 'name5566 ',
}
Test2.js
The Code is as follows:
Module. exports = {
Name: 'name5566 ',
}
Load these two modules:
The Code is as follows:
Var test1 = require ('./test1.js ');
Console. log (test1.name); // undefined
Var test2 = require ('./test2.js ');
Console. log (test2.name); // Name5566
The exports parameter is passed to the module through exports. x can naturally add properties (or methods) to the exports object, but assigning values directly to exports (for example, exports = x) only changes the values of the form parameter rather than the real parameter. Therefore:
1. When adding properties for exports, use exports
2. When assigning values to exports, use module. exports
Package
According to CommonJS specifications, a complete package should include:
1. package. json package description file
2. bin binary file directory
3. lib JavaScript code directory
4. doc document directory
5. test code directory
NPM is a Node package management tool. Common usage:
View the command documentation:
The Code is as follows:
Npm help install
View the install command documentation.
Install a package:
The Code is as follows:
Npm install redis
Install the redis package. The install command installs the package in the node_modules directory under the current directory.
Remove a package:
The Code is as follows:
Npm remove redis
Remove the redis package. The remove command removes the packages in the current directory.
Related Article
Contact Us
The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.
If you find any instances of plagiarism from the community, please send an email to: [email protected] and provide relevant evidence. A staff member will contact you within 5 working days.
A Free Trial That Lets You Build Big!
Start building with 50+ products and up to 12 months usage for Elastic Compute Service
• Sales Support
1 on 1 presale consultation
• After-Sales Support
24/7 Technical Support 6 Free Tickets per Quarter Faster Response
• Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.
|
__label__pos
| 0.982894 |
Data Processing Tour: Types of Shapefiles
Download video
There are three different types of shapefiles:
Each element is an individual space
You can think of a shapefile with suburbs, or a shapefile with all power plants in the city. Every item on the map represents a different reference space. They have a clear and logical individual name and identity. You can imagine writing a description or uploading a photo for these individual spaces.
Elements should be grouped together
These shapefiles contain some sort of a classification of different types of items. For instance, they could contain the land use in the city, or mineral deposits or vegetation types. In all of these cases, we want to group them together by type, so that even if the areas are individual items in the shapefile, they are seen as the same type by the system.
The entire shapefile represents a single entity
This is often the case with networks. For instance, when you have a shapefile with the road network, gas pipe network, or water reticulation system. All of these files will contain many individual segments, but it does not really make sense to see them as individual types. Instead, we want to simply join them all together and see them as a single entity.
Exception: if you have some sort of a network that can be subdivided in different types or groups. If that is the case, use the previous category. Example: the train line network which can be separated by line/route.
Outline of the video
• Types of shapefiles and the way to process them depending on their nature of content.
• 3 types: Each element is an individual space, Elements should be grouped together, The entire shapefile represents a single entity
• M0:13, first and most common type: Each element is an individual space
• Example: city with individual suburbs, each neighbourhood represents a different space;
• After the work item is assigned to you, in the next step, the system asks how this should be processed. In this case, we want to make 10 individual spaces, because the file has 10 items. Now the column with the name of the space has to be selected and clicked on next step, before completing the processing as learned before.
• M1:30, example of land use for the second type of shapefile: Elements should be grouped together
• This is not a different identity, but in the way in which they are classified. In this case, it is 3 main groupings into which the file should be split up.
• The way it works, in step 2 of processing this file, you select "group spaces by name". Type of land use is the correct column to choose in that example. Once saved and published, it distinguishes the 3 types: urban expansion, urban and rural.
• Classification systems are used for land use, soil type, mineral deposits, vegetation type.
• M5:00, third type of shapefile, which is The entire shapefile represents a single entity
• Example of a network, where we don't care about the individual items, but the network as a whole, e.g. transmission lines, water and sewer pipes, gas pipes; It helps to consider how it is named.
• In this example, we need to save it as "group into one single space". The table disappears because it is no longer relevant.
• Saving and publishing it brings us to the entire network. The geospatial information is not lost, but it records it as one network.
• M7:41, we have seen three different types, but we have to be careful, because the ones that represent a network can sometimes be grouped. Example: Network of the train network. Segments are part of a certain line. In that case, it makes sense to split them up and group them by names, in this case, the name of the train line.
• The 52 different segments come out into 5 different lines.
• This may make sense for train lines, but also road networks, but only if you have the big highways. The entire road network of a city is too detailed and in that case it makes sense to upload as one road network. Important to think critically about what would be done with it.
• M10:10, recap of three different types
Attachment(s)
Something wrong with this information? Report errors here.
|
__label__pos
| 0.69276 |
Can you get Internet service without having a landline?
Credit:Paul BradburyOJO ImagesGetty Images
Q:
Can you get Internet service without having a landline?
A:
Quick Answer
Options for Internet service without a landline include satellite, cellular tethering and MiFi. Some municipalities also provide free Wi-Fi for residences, or an arrangement might be made to use Wi-Fi from a nearby source.
Know More
Full Answer
Portable MiFi hotspots are available from companies such as Verizon, AT&T and Sprint. These devices are used in tandem with a bandwidth plan purchased from the company and connect to their cellular networks for Internet access. MiFi plans can be purchased without phone service from these companies, and the bandwidth allotments are usually more generous for the money than those included with comparably priced phone plans. However, they are still a relatively poor value compared to a landline. Use of MiFi also requires being within range of a 3G or 4G network for high-speed Internet access.
Satellite Internet access is available through companies that include dishNET and ViaSat. For home access, these systems generally involve the use of a satellite dish connected to a modem, but there are also portable satellite modems capable of functioning when pointed in the general direction of the satellite.
Internet through cellphone plans can also be tethered to a laptop or a desktop computer if the phone is capable of it.
Learn more about Internet & Networking
Related Questions
• Q:
How do you speed up an Internet connection?
A:
Some strategies to help speed up a slow Internet connection include troubleshooting hardware, tweaking Wi-Fi settings, turning off or managing programs that hog bandwidth and contacting the Internet service provider. Many factors can account for a slow Internet connection, so it is important to attempt a variety of troubleshooting solutions.
Full Answer >
• Q:
What are the different CLEAR Internet service plans?
A:
Clear no longer provides Internet service plans. The company has joined Sprint, and Sprint offers unlimited data plans for phones, as well as data only plans for mobile broadband cards, laptops, tablets and other devices. As of 2014, data plans range from 100 megabytes to 12 gigabytes of data and cost between $10.00 and $79.99 monthly.
Full Answer >
• Q:
Do you need a landline to set up a wireless Internet connection?
A:
Computer users do not need a landline to set up a wireless Internet connection. They can establish a wireless connection through a broadband Internet connection or a cellular network.
Full Answer >
• Q:
How does satellite Internet operate?
A:
Satellite Internet services use a dish similar to satellite television services, but a satellite Internet dish contains a broadcast element as well as a receiving element. The dish broadcasts requests from the user's PC, which the satellite bounces to a terrestrial station and into the global Internet. When the response comes back, the data is uploaded to the satellite and downloaded to the user's system via the dish.
Full Answer >
Explore
|
__label__pos
| 0.896353 |
Getting and extracting a BSP
You can install QNX BSPs from the QNX Software Center. For BSPs from other suppliers, you must contact QNX or the hardware vendor about how to obtain and install the BSPs.
Prerequisites
Note: These instructions are generic. For QNX SDP and board-specific instructions, see your BSP User's Guide, which is available from the QNX SDP 7.0 Board Support Documentation page.
Before you start to work with a QNX BSP, you need:
• a myQNX account; log in or register at: www.qnx.com/account/login.html.
• cables, such as a USB-to-micro USB cable or a serial-to-USB cable, for connecting to your host system, debugging, and if relevant for your system, connecting to a display
• installed on a Linux, Mac, or Windows host system:
• the QNX Software Development Platform (SDP) 7.0
• a terminal program (e.g., PuTTY); alternatively, you can the QNX Momentics IDE Console View
• drivers, such as USB drivers, to enable communication between the host and the target
Installing a QNX BSP
To install a QNX BSP:
• Open QNX Software Center, select the BSP to install, and follow the steps in the wizard.
Instructions on installing BSPs are given in the “Install addon packages” section in the QNX Software Center User's Guide. To learn where the BSP gets installed on the host, follow the instructions in the “Determining where your package has been installed” section in the same guide.
Your BSP is delivered as a .zip archive. To begin using it, you can either unzip it from the command line, or import it into the IDE.
Extracting from the command line
We recommend that you create a directory with the same name as your BSP, then extract the archive contents into there:
1. Create a directory specifically for storing the contents of the BSP you're working with. If necessary, create the parent directory structure (e.g., /home/bspdir). For example:
mkdir /home/bspdir/sabresmart
2. Change into the directory you just created, then extract the BSP. For example:
cd /home/bspdir/sabresmart
unzip BSP_nxp-imx6q-sabresmart_br-700_be_build.zip
where build is the BSP build ID.
You should now be ready to build your BSP (see Building a BSP in this chapter).
Importing into the IDE
To import a BSP into the IDE:
1. Start the IDE, switch to the C/C++ Perspective, and select File > Import.
2. In the Import window, expand the QNX folder.
3. Select QNX Source Package and BSP from the list, then click Next.
4. In the Select the archive file dialog, click Browse... to open a file selector from which you can choose the BSP archive that you downloaded; after you've selected the archive file, click Next.
5. In the Package selected to import dialog, confirm that this is the BSP package you want (there's a brief description of the package), then click Next to proceed.
6. Define the settings for the project to be created. Specify a project name and optionally, a non-default storage location and working sets to which the project should be added.
7. Click Finish.
The project gets created and the source is brought in from the archive. You should now be ready to build your BSP (see Building a BSP).
|
__label__pos
| 0.599024 |
• Christian Persch's avatar
build: Work around include directory issue · d6a6aca2
Christian Persch authored
The correct way to include vte.h is <vte/vte.h> but the source directoy
is named src/ not vte/ so this doesn't work while building vte itself,
and it breaks when building the vala test application.
Make vte/vte.h symlink to src/vte.h, and dist that. Ugly, but it works.
d6a6aca2
Makefile.am 940 Bytes
|
__label__pos
| 0.505409 |
All
Appwrite Series Tiger Globalsawersventurebeat
Area of study Computer Science Engineering Languages Computer Science Computer and information science are two of the most rapidly-growing fields in the world today. In fact, they are so much more than just a field of study; they are now being called ‘communication studies’. Less known though is that these two fields also have something in common: both believe that technology and data will reshape society within a few decades. The field of computer science emerged from the computer game industry in the early 90s, with games serving as an excellent example of how digitalization can be used for good and evil at the same time (e.g., interactivity). Today, this sector is considered one of the most developed and important areas of the humanities and social sciences, with many prominent scholars working on its behalf. In this article we’ll explore what it means to be a ‘t ticketed space’ in computer science, why you should care about it, and potential effects of digitalization on communication research.
What is a t ticketed space?
A t ticketed space is a physical space where there are no walls, doors, or security cameras. This space has been created by, or through the efforts of, one or more individuals. These individuals may be researchers, researchers’ families, businesspeople, or other types of individuals who want to maintain a level of privacy in their dealings with third parties. In many cases, the individuals who have built this space have also sought to keep the surrounding areas as clean and noise-free as possible. It is these individuals who have created a safe and open space for the rest of us to gather and share information.
How to be a t ticketed space in computer science
Any space where data can be shared, or where communication can take place, is a t ticketed space. If the data is electronic, it can be shared with anyone in the digital space. If the communication is not visual, but still takes place on some form of electronic communication, then that space is also a t ticketed place.
Why is communication research interested in t ticketed spaces?
One of the reasons that computer science is so highly regarded is probably due to the fact that it is one of the few fields that has been able to create a functioning internet and web. This makes it an ideal field for exploring the effects of digitalization on communication, and it is also one of the few areas that has been able to explore the impact of social media on the population at large.
Another reason that computer science is so applicable to communication research is that it has been a part of modern civilisation for quite some time. Humans have always traded and shared information with one another, whether that be through the written word, images, or video. The more accessible this information is, the more likely it will be shared and understood. In this case, computer science could help by understanding how digitalization could cause a big change in the way that communication is shared and received.
Potential effects of digitalization on communication research
When people think of potential effects of digitalization on communication research, they usually picture large-scale adoption of new communication technologies, like the internet. Digitalization is not new to communication, but it has been on the rise in the last few years. According to one study, 77% of all conversations take place online, which makes it possible to take advantage of this rising trend by building a digital infrastructure that is both scalable and flexible. Other important potential effects of digitalization include the following: mobile communications become increasingly common; this may include mobile phones, smart meters, or other connected devices; widespread adoption of voice-based communication (e.g., voice-over-IP); increased use of mobile devices for purpose-built apps
–END> HAPPY Trails
Related Articles
Leave a Reply
Back to top button
|
__label__pos
| 0.974206 |
Skip to content Skip to navigation
Connexions
You are here: Home » Content » Hypothesis Testing of Two Means and Two Proportions: Lab I
Navigation
Recently Viewed
This feature requires Javascript to be enabled.
Tags
(What is a tag?)
These tags come from the endorsement, affiliation, and other lenses that include this content.
Hypothesis Testing of Two Means and Two Proportions: Lab I
Module by: Barbara Illowsky, Ph.D., Susan Dean. E-mail the authors
Note: You are viewing an old version of this document. The latest version is available here.
Class Time:
Names:
Student Learning Outcomes:
• The student will select the appropriate distributions to use in each case.
• The student will conduct hypothesis tests and interpret the results.
Supplies:
• The business section from two consecutive days’ newspapers
• 3 small packages of M&Ms®
• 5 small packages of Reeses Pieces®
Increasing Stocks Survey
Look at yesterday’s newspaper business section. Conduct a hypothesis test to determine if the proportion of New York Stock Exchange (NYSE) stocks that increased is greater than the proportion of NASDAQ stocks that increased. As randomly as possible, choose 40 NYSE stocks and 32 NASDAQ stocks and complete the following statements.
1. H o H o
2. H a H a
3. In words, define the Random Variable. ____________=
4. The distribution to use for the test is:
5. Calculate the test statistic using your data.
6. Draw a graph and label it appropriately. Shade the actual level of significance.
• a. Graph:
Figure 1
Blank graph with vertical and horizontal axes.
• b. Calculate the p-value:
7. Do you reject or not reject the null hypothesis? Why?
8. Write a clear conclusion using a complete sentence.
Decreasing Stocks Survey
Randomly pick 8 stocks from the newspaper. Using two consecutive days’ business sections, test whether the stocks went down, on average, for the second day.
1. H o H o
2. H a H a
3. In words, define the Random Variable. ____________=
4. The distribution to use for the test is:
5. Calculate the test statistic using your data.
6. Draw a graph and label it appropriately. Shade the actual level of significance.
• a. Graph:
Figure 2
Blank graph with vertical and horizontal axes.
• b. Calculate the p-value:
7. Do you reject or not reject the null hypothesis? Why?
8. Write a clear conclusion using a complete sentence.
Candy Survey
Buy three small packages of M&Ms and 5 small packages of Reeses Pieces (same net weight as the M&Ms). Test whether or not the average number of candy pieces per package is the same for the two brands.
1. HoHo size 12{H rSub { size 8{o} } } {}:
2. HaHa size 12{H rSub { size 8{a} } } {}:
3. In words, define the random variable. __________=
4. What distribution should be used for this test?
5. Calculate the test statistic using your data.
6. Draw a graph and label it appropriately. Shade the actual level of significance.
• a. Graph:
Figure 3
Blank graph with vertical and horizontal axes.
• b. Calculate the p-value:
7. Do you reject or not reject the null hypothesis? Why?
8. Write a clear conclusion using a complete sentence.
9. Explain how your results might differ if 10 people pooled their raw data together and the test were redone.
10. Would this new test or the original one be more accurate? Explain your answer in complete sentences.
Shoe Survey
Test whether women have, on average, more pairs of shoes than men. Include all forms of sneakers, shoes, sandals, and boots. Use your class as the sample.
1. H o H o
2. H a H a
3. In words, define the Random Variable. ____________=
4. The distribution to use for the test is:
5. Calculate the test statistic using your data.
6. Draw a graph and label it appropriately. Shade the actual level of significance.
• a. Graph:
Figure 4
Blank graph with vertical and horizontal axes.
• b. Calculate the p-value:
7. Do you reject or not reject the null hypothesis? Why?
8. Write a clear conclusion using a complete sentence.
Content actions
Download module as:
Add module to:
My Favorites (?)
'My Favorites' is a special kind of lens which you can use to bookmark modules and collections. 'My Favorites' can only be seen by you, and collections saved in 'My Favorites' can remember the last module you were on. You need an account to use 'My Favorites'.
| A lens I own (?)
Definition of a lens
Lenses
A lens is a custom view of the content in the repository. You can think of it as a fancy kind of list that will let you see content through the eyes of organizations and people you trust.
What is in a lens?
Lens makers point to materials (modules and collections), creating a guide that includes their own comments and descriptive tags about the content.
Who can create a lens?
Any individual member, a community, or a respected organization.
What are tags? tag icon
Tags are descriptors added by lens makers to help label content, attaching a vocabulary that is meaningful in the context of the lens.
| External bookmarks
|
__label__pos
| 0.999811 |
GB2340264A - Filtering user input into data entry fields - Google Patents
Filtering user input into data entry fields Download PDF
Info
Publication number
GB2340264A
GB2340264A GB9816482A GB9816482A GB2340264A GB 2340264 A GB2340264 A GB 2340264A GB 9816482 A GB9816482 A GB 9816482A GB 9816482 A GB9816482 A GB 9816482A GB 2340264 A GB2340264 A GB 2340264A
Authority
GB
United Kingdom
Prior art keywords
entry
filter
user input
instances
instance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9816482A
Other versions
GB2340264B (en
GB9816482D0 (en
Inventor
Andrew John Smith
David Clark
Ian Holt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to GB9816482A priority Critical patent/GB2340264B/en
Publication of GB9816482D0 publication Critical patent/GB9816482D0/en
Publication of GB2340264A publication Critical patent/GB2340264A/en
Application granted granted Critical
Publication of GB2340264B publication Critical patent/GB2340264B/en
Anticipated expiration legal-status Critical
Application status is Expired - Fee Related legal-status Critical
Links
Classifications
• GPHYSICS
• G06COMPUTING; CALCULATING; COUNTING
• G06FELECTRIC DIGITAL DATA PROCESSING
• G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
• G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
• G06F3/048Interaction techniques based on graphical user interfaces [GUI]
• G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
• G06F3/0489Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
• G06F3/04895Guidance during keyboard input operation, e.g. prompting
• GPHYSICS
• G06COMPUTING; CALCULATING; COUNTING
• G06FELECTRIC DIGITAL DATA PROCESSING
• G06F9/00Arrangements for program control, e.g. control units
• G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
• G06F9/44Arrangements for executing specific programs
• G06F9/451Execution arrangements for user interfaces
Description
2340264 ENTRY FILTERS The present invention relates to entry filters for
entry field components of a user interface for a computer system.
A common requirement in a computer system user interface is entering information. This is typically done using entry fields. For text entry, typical characteristics of entry fields include a means for showing the insertion position, a means for moving the insertion position, and a means for entering text at the insertion position. Further characteristics may include the ability to select a section of the entered text, and operate on the selected text in a variety of ways.
There are certain constraints that can be enforced in order to get different types of data from the user. An example of this is ensuring the correct capitalization of a title. Another example is the restriction of entry to numeric characters only. A common way of achieving this is to extend the function of an entry field to monitor user input and impose the necessary constraints. This approach has several disadvantages. AS user interfaces increase in diversity, and many variants of entry fields become available, the logic for applying constraints is not easily applied to these alternative entry fields. It may also be advantageous to combine constraint behaviours, which is not generally feasible when such behaviours are packaged with a particular entry field.
The present invention provides an entry filter according to claim 1.
The invention provides filters which encapsulate a set of rules or conditions and which can be combined for controlling or defining how a user's input is restricted or assisted. In the context of the invention, an entry field is anything allowing input from a plurality of input devices, and doesn't necessarily have to be textual.
The entry field cooperabl6 with the entry filter according to the invention includes a means of querying and modifying the content of the entry field, and preferably of querying and modifying other characteristics of the entry field such as insertion point position, selection, etc. In addition, a means of monitoring changes to the content or other characteristics of the entry field is provided, along with a means of receiving notification of user input events and a means of modifying or suppressing those events.
2 Preferably, the entry filter includes means for specifying the entry field to which an entry filter is associated. This allows one or more filters to be associated with an entry field, although when more than one filter is associated with an entry field it is important to ensure that the sets of constraints do not adversely interfere with each other.
Preferably, the entry filter also includes means for receiving notification of significant events, such as the acceptance, rejection.and modification of user input.
It should be noted that while entry filters according to the invention encapsulate the rules for a particular set of constraints, they are independent of any particular entry field implementation.
Embodiments of the invention will now be described with reference to the accompanying drawings, in which: Figure 1 is a schematic diagram of the classes required to implement entry filters according to a preferred embodiment of the invention; 20 Figure 2 extends the diagram of Figure 1 to illustrate the classes required to enable external objects to communicate with the entry filters of Figure 1; 25 Figure 3 illustrates the classes of Figure 1 adapted to allow entry filters to work with a subset of an editable area managed by a structured entry filter; Figure 4 extends the diagram of Figure 3 to illustrate the classes 30 required to enable external objects to communicate with both the entry filters and structured entry filters of Figure 3; Figures 5(a) to (e) illustrate some entry fields operating with entry filters and structured entry filters according to the invention; and 35 Figure 6 is a block diagram showing the components operating in an example of a structured entry filter. The present invention according to a preferred embodiment is implemented 40 as a software component for controlling the operation of a computer system on which it is run. The software component may be modified to run
3 on many different types of computer and operating system platforms. The computer program code according to the preferred embodiment is written in the Java programming language (Java is a trademark of Sun Microsystems Inc.). AS is known in the art, the Java language provides for the abstract definition of required behaviours by a construct known as an interface. Concrete implementation of these behaviours can then be provided by classes which implement these interfaces.
In general terms, entry fields expose methods which allow their text to be queried, set and modified, for example, getText, setText, append, insert. Entry fields also generate events, for example, keyPressed or textValueCompleted. (In the present embodiment, these events are in fact method calls issued by the entry field to listeners which the entry field knows implement such methods.) The present invention operates by adapting an entry field to allow for the addition of listeners which implement methods for one or more entry field events and, according to the constraints they apply to the text entered in the entry field, determine the final value of the entry field. It is possible to associate one or more entry filters with an entry field, so that the desired composite behaviour can be obtained.
Examples of such listeners are an AutoCompleteTextFilter, NumericTextFilter and TitleTextFilter. The functionality of these filters is well known and computer users will be used to filling in entry fields having such functionality in on-screen forms. However, this functionality is commonly implemented as an integral part of an entry field, rather than as a plug-in behaviour as in the case of the present invention. This plug-in behaviour not only enables the same entry field implementation to be used for many individual types of behaviour, but also allows more than one entry filter to be applied to a field to generate complex behaviour.
For example, an AutoCompleteTextFilter and a TitleTextFilter according to the invention can be applied to an entry field so that when a candidate is automatically returned by the AutoComplete filter to complete the field, it is then capitalised in a manner determined by the
TitleTextFilter.
Referring now to Figure 1, a number of classes are shown, illustrating the operation of the preferred embodiment:
TextEditable
4 An encapsulated definition of the services expected of a text entry field is embodied in a TextEditable interface. Each characteristic required of a text entry field is thus supported by one or more methods defined by this interface. As is known in the art, the Java Abstract Windowing Toolkit (AWT) includes an entry field component within the java.awt.TextComponent class. many of the methods defined by the TextEditable interface are also found on the java.awt.TextComponent class. A concrete implementation of the TextEditable interface is implemented within the TextComponentEditable class which is a simple wrapper for java.awt.TextComponent. A separate implementation TextElementEditable, Figure 3, allows filters to work with a sub-set of an editable area managed by a structured text filter, whose operation will be explained later.
The TextEditable interface defines the following methods which, unless otherwise noted, correspond to the methods implemented by java.awt.TextComponent:
getCaretPosition and setCaretPosition allow the text insertion position to be queried and modified.
getSelectionStart, setSelectionStart, getSelectionEnd and setSelectionEnd allow the start and end positions of the currently selected text to be queried and modified individually.
select allows the start and end positions of the currently selected text to be modified together.
selectAll selects the entire text.
getText and SetText allow the current text to be queried and modified.
append, insert and replaceRange allow the current text to be manipulated in various ways.
transferFocus allows user input focus to be transferred to the next available component.
getvalid and setvalid, not in TextComponent, control whether the current text is to be shown as valid according to any associated constraints.
addTextEditableListener and removeTextEditableListener, not in TextComponent, allow the addition and removal of filters which are listeners for TextEditable events. Such filters must implement a TextEditableListener interface, described below, which describes the events which may be generated by implementations of TextEditable.
TextEditableListener This interface describes the events which may be generated by implementations of TextEditable. It defines the following methods:
keyPressed, keyReleased and keyTyped allow keyboard events to be intercepted and any necessary actions to be taken. Using the methods on the TextEditableEvent class, described below, the details of the event may be modified, or the event suppressed.
caretPositionChanged is called when the text insertion point's position changes.
textValueChanged is called when the value in the text field changes.
textValueCompleted is called when the user has indicated that data entry is complete.
focusGained and foCusLost are called when the text field gains or loses user input focus. TextEditableRvent
An instance of TextEditablezvent is delivered to each of the methods on TextEditableListener. It provides information about the event, and a means of modifying or suppressing certain of those events. It defines the following methods: 35 getTextEditable returns a reference to the instance of TextEditable that generated the event. consume allows the event to be suppressed. Subsequent listeners will still receive the event but can determine that it has been suppressed. 40 isConsumed returns whether or not the event has been suppressed.
6 getmodifiers and setmodifiers allow modifiers to be queried and modified. For keyboard events, these modifiers indicate which of the keyboard augmentation keys was active when the event was generated, as defined in the java.awt.event.KeyEvent class. 5 getKeyCode and setKeyCode allow key codes to be queried and modified. For keyboard events, these key codes indicate exactly which key generated the event, as defined in the java. awt.event.KeyEvent class.
getKeyChar and setKeyChar allow key characters to be queried and modified. For keyboard events, these key characters indicate which character, if any, corresponds to the key which generated the event.
isTemporary allows the focus transfer state to be queried, to determine whether it has permanently or temporarily transferred. Temporary focus loss occurs when the window that contains the text field loses focus.
TextFilter An encapsulated definition of the services provided by a text entry filter is embodied in the TextFilter interface. Each characteristic of a text entry filter is thus supported by one or more methods defined by this interface. TitleTextFilter, AutoCompleteTextFilter and NumericTextFilter are concrete implementations of the TextFilter 25 interface. The TextFilter interface defines the following methods: setTextEditable allows the text filter to be associated with a single 30 implementation of TextEditable. One or more text filters may be associated with a single implementation of TextEditable, although when more than one text filter is associated with a TextEditable it is important to ensure that the sets of constraints do not adversely interfere with each other. 35 getTextEditable returns a reference to the currently associated implementation of TextEditable. getLocale and setLocale allow the java.util.Locale currently used by the 40 text filter to be queried and modified. Implementations of the TextFilter 7 interface can use this java.util.Locale to assist with the provision of internationalisation support.
getValue and setValue allow the text in the associated TextEditable to be queried and modified using a Java class that is appropriate to the function of the text filter.
getSample returns a sample of valid entry for this text filter. This embodiment is able to use this to provide user assistance for data entry.
getExitCharacters and setExitCharacters allow the characters which will trigger focus transfer out of the associated TextEditable to be queried and modified.
getRemoveSelectionOnFocusLost, setRemoveSelectiononFocusLost, getSelectAllOnFocusGained and setSelectAllOnFocusGained allow the focus behaviour of the filter to be queried and modified.
addTextFilterListener, removeTextFilterListener, addTextFilterlnputListener and removeTextFilterInputListener, Figure 2, allow the addition and removal of listeners for TextFilter events. It will be seen that other objects in an application may want to benefit from the validation provided by text filters. In a database application, it may not be satisfactory for an object using information input by a user to rely upon the conventional textValueCompleted event to allow the object to receive the completed contents of an entry field, as this may anticipate the contents of the entry field being updated with a modified value reflecting the proper behaviour of the text filter.
Thus, in a preferred embodiment, Figure 2, two further interfaces TextFilterListener and TextFilterinputListener interfaces are provided.
These interfaces described in more detail later, define events which may be generated by implementations of TextFilter, for example, textValueValidated tells a listener that the current value in the entry f ield, which may have been modified by the f ilter after completion by the user, is correct.
TextComponentEditable This class is an example of a concrete implementation of the TextEditable interface. It enables a java.awt.TextComponent to be used with the other 8 classes in this embodiment. It primarily maps the methods defined by TextEditable to the equivalent methods on the java.awt.TextComponent class.
getvalid and setValid have no equivalent on the java.awt.TextComponent class, and so are implemented by storing a Boolean data member. TextComponentEditable, via the java.awt.TextComponent it encapsulates, listens to user generated events and as such acts as a key listener, a 10 focus listener, and a text listener, in order that it can in turn issue the required TextEditable events, at the appropriate times. It is also a mouse listener, in order that it can monitor the caret position of the java.awt.TextComponent and issue the required caretPositionChanged events at the appropriate times. The textValueCompleted event is issued whenever 15 the enter key is pressed or the java.awt.TextComponent loses user input focus. In use, an application instantiates a TextComponentEditable and calls the setTextEditable method on each of the previously instantiated filters. 20 setTextEditable in turn calls addTextEditableListener on the TextComponentEditable to add the appropriate filter, for example TitleTextFilter, to the entry field. The filters then listen to any subsequent events and apply the appropriate constraints to the text that may have been entered. 25 Filters such as NumeriCTextFilter will be inclined to wait until the textValueCompleted event before deciding whether a value entered is valid or needs modification, as such their implementation of events such as keyPressed, keyReleased or keyTyped will be minimal. 30 AutoCompleteTextFilter on the other hand listens for every character entered to determine if there is a match in its list of candidates for the text entered. TextFilterListener 35 This interface describes the events which may be generated by implementations of TextFilter. It defines the following method: textValueValidated is called whenever the text filter has attempted to 40 establish whether or not the current text in the associated TextEditable is valid. This normally occurs when the associated TextEditable
9 indicates that data entry is complete by issuing the textValueCompleted event.
TextFilterEvent An instance of TextFilterEvent is delivered to the method on TextFilterListener. It provides information about the event. It defines the following methods:
getTextFilter returns a reference to the instance of TextFilter that generated the event.
getID returns the unique identifier of the event.
getText returns the text which was in the associated TextEditable when the event was generated.
isValid returns whether or not the text in the associated TextEditable was found to be valid when the event was generated. 20 getNumberOfFailures returns the number of consecutive validation failures since the last successful validation.
TextFilterInputListener 25 A more detailed interface than TextFilterListener, this interface describes further events which may be generated by implementations of TeXtFilter. It defines the following methods:
inputAccepted is called when text entry has conformed to the constraints imposed by the text filter. inputRejected is called when text entry has not conformed to the constraints imposed by the text filter. 35 inputFormatted is called when text entry is successfully modified to conform to a format required by the text filter. inputExtended is called when the text entry is automatically added to by 40 the text filter.
TeXtFilterInputRvent Similar to TextFilterEvent, an instance of TextFilterInputEvent is delivered to each of the methods on TextFilterInputListener, and provides information about the event. it defines the following methods:
getTextFilter returns a reference to the instance of TextFilter that generated the event.
getID returns the unique identifier of the event.
getText returns the text which was placed into the associated TextEditable as the event was generated.
getPreviousText returns the text which was in the associated TextEditable before the event was generated. For inputFormatted and inputExtended events, the value returned will be different from that returned by getText. 20 getNumberofFailures returns the number of input rejections since the last input that was accepted. It will be seen that the invention could also be implemented with a single class combining TextFilterEvent and TextFilterInputEvent and/or 25 another class combining TextFilterListener and TextFilterInputListener, although it is felt that developers employing the invention may prefer the added granularity provided by separate classes. TitleTextFilter 30 This class is an example of a concrete implementation of the TextFilter interface, an example of which is shown on the right of Figure 5(c) while the main interface parameters are shown on the left of Figure 5(c). For more information on demonstrating user interface component operation in 35 this fashion see British Patent Application No. 9713616.2 or US Application No. 08/037,605 (Docket No. UK9-97-043). The purpose of TitleTextFilter is to ensure that text entered conforms to one of a set of defined capitalization schemes. In order to be able to 40 intercept events from the associated TextEditable, it also implements the TextEditableListener interface.
11 TitletextFilter provides the following additional methods:
getCaseoption and setCaseOption allow the capitalization scheme that is to be used by the filter to be queried and modified.
TitleTextFilter implements the methods of the TextFilter interface as follows:
setTextEditable adds the TitleTextFilter as a TextEditableListener to the supplied TextEditable, in order that the TextEditable events relevant to text capitalization are received. A reference to the supplied TextEditable is stored.
getTextEditable returns a reference to the currently associated TextEditable.
setLocale stores a reference to the supplied java.util.Locale. Locale services are used to perform text capitalization in the textValueChanged method described below.
getLocale returns a reference to the stored java.util.Locale.
getValue and setValue allow the current value to be queried and modified using the java.lang.String class. They are mapped directly to the getText and setText methods on the associated TextEditable.
getSample uses the currently selected capitalization scheme to return a sample of valid entry for this text filter.
setExitCharacters stores the supplied array of characters. These characters are used to trigger focus transfer in the keyPressed method described below. getExitCharacters returns the stored array of exit characters. 35 setRemoveSelectionOnFocusLost and setSelectAllOnFocusGained store the supplied Boolean values. These values are used in the focusLost and focusGained methods described below. 40 getRemoveSelectionOnFocusLost and getSelectAllOnFocusGained return the stored Boolean values.
12 addTextFilterListener and addTextFilterInputListener add the supplied listener to the appropriate collection of listeners. These collections of listeners are used to issue appropriate events in the textvalueChanged, textValueCompleted, and keyTyped methods described below.
removeTextFilterLiStener and removeTextFilterInputListener remove the supplied listener from the appropriate collection of listeners.
TitleTextFilter implements the methods of the TextEditableListener interface as follows:
keyPressed matches the key character that generated the event against the stored array of exit characters. If a match is found then focus is transferred by calling the transferFocus method on the associated TextEditable.
keyTyped uses the services provided by the stored java.util.Locale to determine whether the key character that generated the event is the first character of a new word. This information, in conjunction with the current capitalization scheme, is used to determine the correct case for the character. If the correct case differs from that of the key character, the key character is modified by using the setKeychar method on the supplied TextEditableEvent, and an inputFormatted event is issued to the collection of TextFilterInputListeners. Finally, an inputAccepted event is issued to the collection of TextFilterInputListeners.
textValueChanged uses the services provided by the stored java.util.Locale to correctly capitalise the current text in the associated TextEditable using the current capitalization scheme, and an inputFormatted event is issued to the collection of TextFilterInputListeners.
textValueCompleted issues a textValueValidated event to the collection of TextFilterListeners.
focusGained uses the stored Boolean value set by setSelectAllOnFocusGained to determine whether the text in the associated TextEditable is to be selected. If it is, the selectAll method on the associated TextEditable is called.
13 focusLost uses the stored Boolean value set by setRemoveSelectionOnFocusLost to determine whether the text in the associated TextEditable is to be de-selected. If it is, the select method on the associated TextEditable is called in such a way as to remove the current selection without moving the text insertion point.
keyReleased and caretPositionChanged have empty implementations, as these events are not relevant to text capitalization.
It will be seen that the implementation of TitleTextFilter does not affect the content of the information in the entry field, only its format. other implementations of TextEditableListener, AutoCompleteTextFilter and NumericTextFilter shown in Figures 5(a) and 5(b) respectively, define assistance for or a number of constraints on 15 the information entered. AutoCompleteTextFilter, uses a supplied list of expected values to provide prompted entry. It may also be provided with a list of separators to enable the user to insert a list of entries in a single entry field. 20 In the case of Figure 5(a), this separator list comprises a semi-colon and a slash. AutoCompleteTextFilter also provides the method setCompletionDelay, which allows other objects to specify the time in milliseconds after the last 25 user input before auto-completion is applied. NumericTextFilter on the other hand needs to be provided with minimum and maximum valid values, minimum and maximum lengths of numbers as well as formatting parameters. Unlike the other two filters NumericTextFilter 30 does not need to expose any additional methods to other objects to allow it to be configured appropriately. In an enhancement of the present embodiment, entry filters according to the invention work with a sub-set of an editable area managed by a 35 structured text filter, Figure 3. Each subset of an editable area is implemented by a TextElementEditable class, which is an implementation of TextEditable, adapted to work within a structured entry field. TextElementEditable 40
14 TextElementEditable is a concrete implementation of the TextEditable interface. It is used in conjunction with StructuredTextFilter, described below, and represents an editable element in a complex structure.
Instances of TextElementEditable are constructed by a factory method on StructuredTextFilter, and remain permanently linked to a single instance of StructuredTextFilter.
TextElementEditable stores the start position and the end position of the subset of the editable area which the element represents. It also stores the current text for this element, and a reference to the StructuredTextFilter to which it is linked.
This class implements the methods of the TextEditable interface as follows:
addTextEditableListener adds the supplied listener to the collection of listeners. This collection of listeners is used to issue appropriate events in the methods described below.
removeTextEditableListener removes the supplied listener from the collection of listeners.
setText updates the stored text for this element, and then calls an updateElement method on StructuredTextFilter, described below. It then issues a textValueChanged event to the collection of TextEditableListeners.
append, insert and replaceRange manipulate the current text in various ways, and call the setText method with the result.
getCaretPosition, setCaretPosition, getSelectionStart, setSelectionStart, getSelectionEnd, setSelectionEnd and select call the corresponding methods on the TextEditable associated with the StructuredTextFilter, adjusting the parameters and return values by using the stored start and end positions.
selectAll calls the corresponding method on the TextEditable associated with the StructuredTextFilter.
transferFocus attempts to locate the next element in the collection of active elements for the StructuredTextFilter and set the text insertion point position of the associated TextEditable to the start position of that element ' thus transferring focus to the next available element. If no next element is found, the transferFocus method on the associated TextEditable is called, thus transferring focus to the next available component.
StructuredTextFilter This class is a concrete implementation of the TextFilter interface. It is used in conjunction with one or more TextElementEditables to provide a structured text entry mechanism. In order to be able to intercept events from the associated TextEditable, it also implements the TextEditableListener interface.
This class provides the following additional methods:
createTextElementEditable returns a new instance of TextElementEditable linked to the StructuredTextFilter.
setStructure defines the structure of editable and non-editable elements that will be managed by the StructuredTextFilter. This method accepts a text string to be used as the non-editable prefix, an array of TextElementEditables that will act as the editable elements of the structure, and an array of text strings to be used as the non-editable 25 suffixes which follow each editable element. The length of the prefix, the length of the current text for each element, and the length of each suffix are used to compute the start position and the end position stored by eachelement. 30 elementAt returns the TextElementEditable which contains the supplied position. A TextElementEditable contains all positions from its own start position to the start position of the next TextElementEditable. setFocusElement sets the text insertion point position of the associated 35 TextEditable to the start position of the supplied TextElementEditable.
getFocusElement returns the TextElementEditable which contains the text insertion point of the associated TextEditable, or null if the text insertion point is within the non-editable prefix area.
16 updateElement uses the current text for the supplied TextElementEditable to re-compute the stored end position of the element, and also the stored start position and end position for each subsequent element. it then applies the element text to the associated TextEditable by calling its replaceRange method.
This class implements the methods of the TextFilter interface as follows:
setTextEditable adds the StructuredTextFilter as a TextEditableListener to the supplied TextEditable, in order that the TextEditable events relevant to structured text entry are received. A reference to the supplied TextEditable is stored.
getTextEditable returns a reference to the currently associated TextEditable.
setLocale stores a reference to the supplied java.util.Locale.
getLocale returns a reference to the stored java.util.LoCale.
getValue and setvalue allow the current value to be queried and modified using the java.lang.String class. They are mapped directly to the getText and setText methods on the associated TextEditable, in the expectation that this method will be overridden by classes extending StructuredTextFilter.
getSample returns an empty string in the expectation that this method will be overridden by classes extending StructuredTextFilter.
setExitCharacters stores the supplied array of characters. These characters are used to trigger focus transfer in the keyPressed method described below. getExitCharacters returns the stored array of exit characters. 35 setRemoveSelectionOnFocusLost and setSelectAllOnFocusGained store the supplied Boolean values. These values are used in the foCuSLOSt and focusGained methods described below. 40 getRemoveSelectionOnFocusLost and getSelectAllOnFocusGained return the stored Boolean values.
17 addTextFilterListener and addTextFilterInputListener add the supplied listener to the appropriate collection of listeners. These collections of listeners are used to issue appropriate events in the textValueChanged, textValueCompleted and keyTyped methods described below.
removeTextFilterListener and removeTextFilterInputListener remove the supplied listener from the appropriate collection of listeners.
StructuredTextFilter implements the methods of the TextEditableListener interface as follows:
keyPressed matches the key character that generated the event against the stored array of exit characters. If a match is found then focus is transferred by calling the transferFocus method on the associated TextEditable. If no match is found, this method causes the TextElementEditable returned by the getFocuSElement method to issue a keyPressed event to its collection of TextEditableListeners.
keyTyped and keyReleased cause the TextElementEditable returned by the getFocusElement method to issue a keyTyped or keyReleased event to its collection of TextEditableListeners.
textValueChanged parses the new text in the associated TextEditable, using the prefix and the suffixes as delimiters, to obtain the new text for each editable element. If the prefix or any of the suffixes does not appear fully in the new text, then they are restored by building the complete value using the prefix, the suffixes and the current text for each editable element, and applying that complete value to the associated TextEditable by calling the setText method. In this way the preservation of the non-editable prefix and all of the non-editable suffixes is ensured.
textValueCompleted issues a textValuevalidated event to the collection of TextFilterListeners.
focusGained uses the stored Boolean value set by setSelectAllOnFocusGained to determine whether the text in the associated TextEditable is to be selected. If it is, the selectAll method on the associated TextEditable is called. If not, the setFocusElement method is called supplying the first TextElementEditable as the parameter.
focusLost uses the stored Boolean value set by setRemoveSelectionOnFocusLost to determine whether the text in the associated TextEditable is to be de-selected. If it is, the select method on the associated TextEditable is called in such a way as to remove the current selection without moving the text insertion point. Finally, it causes the TextElementEditable returned by the getFocusElement method to issue a foCUSLOSt event to its collection of TextEditableListeners.
caretPositionChanged determines whether the new text insertion point is contained within a different TextElementEditable to that which contains the previous text insertion point. If so, it causes the TextElementEditable which contains the previous text insertion point to issue a focuSLost event to its collection of TextEditableListeners, and then causes the TextElementEditable which contains the new text insertion point to issue a focusGained event to its collection of TextEditableListeners.
DateFilter This class is an example of an extension to the StructuredTextFilter class. Its purpose is to create and maintain a structure of editable elements and non-editable areas which together form a filter for date and/or time entry. An example is shown in Figure 5(d). It generally makes use of two of the concrete implementations of TextFilter, namely 25 NumericTextFilter and AutoCompleteTextFilter described above. NumericTextFilter, as described above, restricts input to numeric characters, and can also be configured with range and formatting information. It is used for numeric elements of the date format, such as 30 day number, month number, year, hour, minute, and second. AutoCompleteTextFilter is used for elements of the date format which have a small set of possible values, such as day name, month name, era, and time zone. 35 The structure for a DateFilter is specified by a pattern composed of masking characters as defined by the java.text. SimpleDateFormat class. Alternatively, the structure can be specified using one of the predefined formats defined by the java.text.DateFormat class. In this case, Locale 40 services are used to convert each of these predefined formats to a pattern.
19 The pattern is used to determine the non-editable prefix, the requisite number of TextElementEditables, the appropriate text filters to be associated with each TextElementEditable, and the non-editable suffixes.
Each required TextElementEditable is created using the createTextElementEditable method, and the appropriate text filters are then created and associated with it. The prefix, the TextElementEditables and the suffixes are then supplied to the setStructure method of StructuredTextFilter.
DateFilter also provides the following additional methods:
setDateUsed and setTimeUsed store the supplied Boolean values. These values control the inclusion of the date and time parts of the structure.
getDateused and getTimeUsed return the stored Boolean values.
setDateFormatStyle and setTimeFormatStyle store the supplied predefined formats defined by java.text.DateFormat. These values control the format of the date and time parts of the structure. 20 getDateFormatStyle and getTimeFormatStyle return the stored predefined formats.
setPattern allows a custom pattern to be specified. 25 getPattern returns the current pattern.
DateFilter also overrides the following methods of StructuredTextFilter:
setLocale stores a reference to the supplied java.util.Locale. Locale services are used to apply the correct formatting characteristics. Changing the Locale may cause the entire structure to be recreated. getLocale returns a reference to the stored java.util.Locale. 35 setValue allows the current value to be modified using the java.util.Date class. Locale services are used to convert the supplied java.util.Date value to a java.lang.String value, which is passed to the setText method on the associated TextEditable. 40 getvalue uses Locale services to return a java.util.Date object representing the current value.
getSample uses Locale services to return a java.lang.String representing the current date formatted according to the current pattern.
Figure 6 shows an instantiation of structured text filter managing a date entry field. The date field in this case has a pattern comprising no prefix, two TextElementEditables each with a suffix of 11-11 and a third TextElementEditable without a suffix. The instance of the date entry field instantiates the TextElementEditables and passes the prefix, the
TextElementEditables and the suffixes to an instance of StructuredTextFilter which manages user interaction with an instance of TextComponentEditable which is used to display the entry field on a screen and to listen to user generated events relating to the entry field. The instance of date entry field also instantiates and adds the appropriate filters to the respective TextElementEditables. The first having an associated NumericTextFilter with a minimum of 1 and a maximum of 31, the second with an associated AutoCompleteTextFilter having a list of candidates being the names of the months, and the third having an associated NumericTextFilter with a minimum and maximum determining the first of last year accepted. Also shown is a TitleTextFilter associated with the second element. This enables different capitalisation from that of entries in the list of candidates to be enforced.
When an event, for example a key press, occurs, TextComponentEditable issues the keyPressed event to its listeners. One of these listeners is the instance of StructuredTextFilter. StructuredTextFilter then needs to determine which of the TextElementEditables has user input in focus which it does by calling getFocusElement.
StructuredTextFilter then causes the TextElementEditable returned to issue a keyPressed event to each of its listeners. This is made possible because each TextElementEditable also implements a list of methods corresponding to the events it can generate. For example, a method fireKeyPressed is available to enable objects to cause a TextElementEditable to call keyPressed on each of its listeners. These listeners are the appropriate text filters for the element.
Depending on the event and the text filter, the text filter may need to change the contents of the entry field. It does this by calling setText
21 on its associated TextEditable - this will be the appropriate TextElementEditable. The TextElementEditable in turn calls updateElement on the StructuredTextFilter. updateElement then calls replaceRange on the associated TextEditable to substitute the value for the old value. The associated TextEditable will be the TextComponentEditable with which the user is interacting. updateElement then adjusts the start and end positions for each TextElementEditable before finishing.
The instance of DateFilter can determine the value of the entire entry field at any time by calling getValue on StructuredTextFilter which is in turn mapped onto getText on the associated TextEditable, which is the TextComponentEditable.
A further example of a structured text filter, DecimalFilter, is shown in Figure 5(e). DecimalFilter extends the StructuredTextFilter class to provide currency, percentage, and number entry behaviour. DecimalFilter constrains the user's input to be a number which can admit negative values and/or a decimal point, currency symbols, percentage, etc.
All decimal number formats provided by Java Development Kit 1.1 internationalisation can be supported. The required format may be specified, or the locale default used.
Since these formats vary from country to country, the expected format is determined by the default or specified locale. DecimalFilter determines the type and order of the elements, as well as the appropriate element prefix and suffixes. Numeric text filters are applied to the elements representing the integer and fraction parts of the number. An appropriate prefix and/or suffixes are set to represent the currency symbol(s), percent symbol(s), decimal separator(s), and negative symbol(s). Locale services are used to obtain these symbols, and their correct positions.
For example, -123.45 as a currency in a US English locale would look like this: ($123.45). This can be considered as two elements: the first element uses a numeric text filter, and has a suffix set to the decimal separator for the locale; the second element uses a numeric text filter constrained to two digits, and has a suffix of 11)". The "(I' and the currency symbol are combined to form the prefix.
it will be seen that, while the present embodiment is described in terms of Java, the invention is applicable to components written in other 22 languages. For example, the components could also be written as a set of ActiveX Controls possibly for use by a visual builder such as Microsoft visual Studio 197. Such controls can be used to build an application in a similar manner to that for Java without departing from the scope of the invention.
It should also be noted that for simplicity and clarity, the description has in many places described the operation of the invention in terms of the names of classes themselves. It will of course be apparent to those skilled in the art, that it is in fact instances of these classes that operate at run-time, with those instances in general being given different names to those of the classes of which they are instances. This convention should not, however, detract from the overall level of the disclosure.
23
Claims (22)
1. An entry filter component cooperable with an entry field component, said entry field component being instantiable to receive user input, to store said user input, to change status responsive to said user input, to generate one or more events indicative of said status and being adapted to allow instances of the entry field component to add one or more entry filter components as listeners to the or each event, said entry filter component being instantiable to respond to one or more of the or each events, to read said user input and to modify said user input according to one or more conditions associated with said entry filter component.
2. An entry filter according to claim 1 wherein instances of said entry field are adapted to generate an event each time said user input changes and wherein said entry filter is a prompted entry filter, instances of said prompted entry filter having an associated list of candidates and being adapted to respond to said user input change event by comparing said user input to entries in their associated list of candidates and, responsive to said user input matching an entry in their associated list of candidates, modifying said user input to match said entry.
3. An entry filter according to claim 2 wherein instances of said prompted entry filter include a time delay characteristic determining a length of time after a last user input change event before said user input is compared to entries in their associated list of candidates.
4. An entry filter according to claim 1 wherein instances of said entry field are adapted to generate an event when said user input is complete and wherein said entry filter is a numeric filter, instances of said numeric filter having characteristics determining the range of said user input and being adapted to respond to said user input complete event by comparing said user input to said characteristics and, responsive to said user input lying outside the range determined by said characteristics, modifying said user input to lie within said range.
5. An entry filter according to claim I wherein instances of said entry field are adapted to generate an event when said user input is complete and wherein said entry filter is a title text filter, instances of said title text filter having a characteristic determining the character case of said user input and being adapted to respond to said 24 user input complete event by modifying said user input to conform to the character case.
6. An entry filter according to claim I wherein instances of said entry filter are adapted to receive an object from instances of said entry field when an event is generated by said instances of entry field, said instances of entry filter being adapted to use said object to control their response to said event.
7. An entry filter according to claim 6 wherein said instances of entry filter are adapted to use said object to generate a reference to the instance of the entry field that generated said event.
8. An entry filter according to claim 1 in which instances of said entry filter generate one or more events indicative of said status and are adapted to add one or more objects as listeners to the or each event.
9. An entry filter according to claim 8 in which said events include events indicative of the user input having being validated by said entry filter instance, the user input conforming to the conditions associated with the entry filter instance, the user input not conforming to the conditions associated with the entry filter instance, the user input having been successfully modified to conform to the conditions associated with the entry filter instance or the user input having been automatically added to by the entry filter instance.
10. An entry filter as claimed in claim I wherein said entry field is an entry field adaptor component instantiable to store a definition of a subset of an editable area within said instance of entry field and to store a value associated with said editable area, said entry field adaptor component being further adapted to allow instances of the entry field adaptor component to add one or more instances of entry filter components as listeners to the entry field adaptor component.
11. An entry filter as claimed in claim 1 wherein said entry filter is a structured entry filter manager component being instantiable to receive a pattern dividing an instance of an entry field into a plurality of editable areas, at least one of said editable areas having an associated instance of entry filter, instances of said filter manager being adapted to add themselves as isteners to events generated by an entry field instance and to relay said events to any entry filter instances associated with an editable area of said entry field to which said event applies.
12. A set of components for facilitating user input in an application, said set of components comprising:
an entry field component being instantiable to receive user input, to store said user input, to change status responsive to said user input, to generate one or more events indicative of said status and being adapted to allow instances of the entry field component to add one or more entry filter components as listeners to the or each event, and one or more entry filter components being instantiable to respond to one or more of the or each events, instances of the or each entry filter being adapted to read said user input and to modify said user input according to one or more conditions associated with the or each entry filter component.
13. A set of components as claimed in claim 11 comprising:
a structured entry filter manager component being instantiable to receive a pattern dividing an instance of an entry field into a plurality of editable areas, at least one of said editable areas having an associated instance of entry filter, instances of said filter manager being adapted to add themselves as listeners to events generated by an entry field instance and to relay said events to any entry filter instances associated with an editable area of said entry field to which said event applies; and a structured entry filter component being instantiable to determine said pattern according to a required format and to pass said pattern to said structured entry filter manager.
14. A set of components as claimed in claim 13 wherein said pattern includes a value to be used as the non-editable prefix and an array of values to be used as non-editable suffixes within each editable area of the entry field.
15. A set of components as claimed in claim 13 wherein said structured entry filter component is a Date Filter, said Date Filter being instantiable to store a date format pattern.
26
16. A set of components as claimed in claim 13 wherein said structured entry filter component is a Decimal Filter, said Decimal Filter being instantiable to store a decimal number format pattern.
17. A set of components as claimed in claim 13 further comprising:
an entry field adaptor component instantiable to store a definition of an editable area within an instance of entry field and to store a text value associated with said editable area, said entry field adaptor component being further adapted to allow instances of the entry field adaptor component to add one or more instances of entry filter components as listeners to the entry field adaptor component.
18. A set of components as claimed in claim 17 wherein instances of said structured entry filter component are adapted to add one or more instances of the or each entry filter to each entry field adaptor component instance according to said required format.
19. A set of components as claimed in claim 18 wherein instances of said structured entry filter relay said events to an instance of entry field adaptor component associated with an editable area of said entry field to which said event applies, said entry field adaptor component instance being adapted to relay said event to any instance of entry filter component listening to said entry field adaptor component.
20. A set of components as claimed in claim 18 wherein instances of said entry field adaptor component relay modifications of user input from instances of entry filters listening to said entry field adaptor component instances to an associated structured entry filter manager instance, said structured entry filter manager instance being adapted to modify the user input stored in said entry field instance.
21. A set of components as claimed in claim 18 wherein said structured entry filter manager component instance is adapted to return a reference to the instance of entry field adaptor component representing the subset of the editable area of said entry field instance which contains a specified position.
22. A computer program product stored on a computer readable storage medium for, when executed on a computer, facilitating user input in an 27 application, the product comprising an entry filter component as claimed in claim 1.
GB9816482A 1998-07-30 1998-07-30 Entry filters Expired - Fee Related GB2340264B (en)
Priority Applications (1)
Application Number Priority Date Filing Date Title
GB9816482A GB2340264B (en) 1998-07-30 1998-07-30 Entry filters
Applications Claiming Priority (1)
Application Number Priority Date Filing Date Title
GB9816482A GB2340264B (en) 1998-07-30 1998-07-30 Entry filters
Publications (3)
Publication Number Publication Date
GB9816482D0 GB9816482D0 (en) 1998-09-23
GB2340264A true GB2340264A (en) 2000-02-16
GB2340264B GB2340264B (en) 2003-03-12
Family
ID=10836351
Family Applications (1)
Application Number Title Priority Date Filing Date
GB9816482A Expired - Fee Related GB2340264B (en) 1998-07-30 1998-07-30 Entry filters
Country Status (1)
Country Link
GB (1) GB2340264B (en)
Cited By (2)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6907465B1 (en) * 2000-09-22 2005-06-14 Daniel E. Tsai Electronic commerce using personal preferences
WO2009013474A2 (en) * 2007-07-24 2009-01-29 Keycorp Limited Parsing of input fields in a graphical user interface
Citations (2)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0435478A2 (en) * 1989-11-30 1991-07-03 Emtek Health Care Systems Inc. Data processing system
US5101375A (en) * 1989-03-31 1992-03-31 Kurzweil Applied Intelligence, Inc. Method and apparatus for providing binding and capitalization in structured report generation
Patent Citations (2)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5101375A (en) * 1989-03-31 1992-03-31 Kurzweil Applied Intelligence, Inc. Method and apparatus for providing binding and capitalization in structured report generation
EP0435478A2 (en) * 1989-11-30 1991-07-03 Emtek Health Care Systems Inc. Data processing system
Non-Patent Citations (1)
* Cited by examiner, † Cited by third party
Title
Data Based Advisor Vol. 7, No. 7, July 1989, pages 104-108, and IAC Accession No. 07449594. *
Cited By (6)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6907465B1 (en) * 2000-09-22 2005-06-14 Daniel E. Tsai Electronic commerce using personal preferences
WO2009013474A2 (en) * 2007-07-24 2009-01-29 Keycorp Limited Parsing of input fields in a graphical user interface
WO2009013474A3 (en) * 2007-07-24 2009-07-30 Keycorp Ltd Parsing of input fields in a graphical user interface
GB2464060A (en) * 2007-07-24 2010-04-07 Keycorp Ltd Parsing of input fields in a graphical user interface
GB2464060B (en) * 2007-07-24 2012-10-10 Keycorp Ltd Graphic user interface parsing
US8793612B2 (en) 2007-07-24 2014-07-29 Keycorp Limited Parsing of input fields in a graphical user interface
Also Published As
Publication number Publication date
GB2340264B (en) 2003-03-12
GB9816482D0 (en) 1998-09-23
Similar Documents
Publication Publication Date Title
US7484200B2 (en) Automatically analyzing and modifying a graphical program
US7681119B2 (en) Method and apparatus for providing a graphical user interface for creating and editing a mapping of a first structural description to a second structural description
US6957228B1 (en) Object oriented apparatus and method for providing context-based class replacement in an object oriented system
US7607126B2 (en) System and method for external override of annotations
JP2522898B2 (en) Dynamic customized methods and graphics resources - vinegar editor
CA1169579A (en) Report preparation
JP2525546B2 (en) Graphics resource - vinegar editor
US5873094A (en) Method and apparatus for automated conformance and enforcement of behavior in application processing systems
US5557723A (en) Method and system for customizing forms in an electronic mail system
US6973625B1 (en) Method for creating browser-based user interface applications using a framework
CA2276240C (en) Client-server application development and deployment system and methods
USRE40633E1 (en) User centric product files distribution
JP4972254B2 (en) Integrated method for creating refreshable web queries
US6944622B1 (en) User interface for automated project management
EP1098244A2 (en) Graphical user interface
US20060168536A1 (en) Method and terminal for generating uniform device-independent graphical user interfaces
US5878262A (en) Program development support system
EP1489494A2 (en) Framework for creating modular web applications
US5051898A (en) Method for specifying and controlling the invocation of a computer program
US7043716B2 (en) System and method for multiple level architecture by use of abstract application notation
US6850950B1 (en) Method facilitating data stream parsing for use with electronic commerce
CA2079022C (en) Method and system for controlling the execution of an application program
CA1313418C (en) User interface with system and method for specifying same
US7269822B2 (en) Technique for enabling applications to use languages other than their built-in macro-languages without changing the applications
EP1077405B1 (en) Generating a graphical user interface from a command syntax for managing multiple computer systems as one computer system
Legal Events
Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee
Effective date: 20080730
|
__label__pos
| 0.684529 |
Take the 2-minute tour ×
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It's 100% free, no registration required.
I will elaborate this with an analogy, 15 toys are to be distributed amongst 3 children , such that any child can get any number of toys, so we have to find the number of ways in which we can do so if,
1. toys are distinct
2. toys are identical
We can apply multinomial in only the latter case , why so?
share|improve this question
add comment
1 Answer 1
up vote 3 down vote accepted
The answers in the two cases are clearly different, so a formula for one of the problems cannot possibly work, without change, to solve the other. A reasonable way to approach the question is to solve each of the problems, in the concrete case you mentioned. We use approaches that readily generalize.
Different toys: Call the toys $1$, $2$, and so on up to $15$ (numbers make great toys). There are $3$ ways to decide who gets toy $1$. For each of these ways, there are $3$ ways to decide who gets toy $2$, for a total so far of $3\times 3$. For each way of making these two decisions, there are $3$ ways to decide who gets toy $3$, and so on, for a total of $3^{15}$ ways.
We could approach the problem through multinomial coefficients. The number of ways to choose $t_A$ toys to give to kid $A$, and $t_B$ toys to give to kid $B$, and $t_C$ to give to kid $C$, where $t_A+t_B+t_B=15$, is the multinomial coefficient $\binom{15}{t_A,t_B,t_C}$. To get the total number of ways, sum over all $(t_A,t_B,t_C)$. We can use the multinomial theorem to conclude that the sum is $3^{15}$. But we already had a simpler argument for $3^{15}$.
Identical toys: A standard approach is to count the number of ways to distribute $18$ toys among the $3$ kids, at least one toy to each kid. Then we make each kid give back a toy. Line up the $18$ toys. They determine $17$ inter-toy gaps. Put a marker into $2$ of these gaps. Give kid $A$ all the toys up to the first marker, kid $B$ the toys from the first marker to the second, and kid $C$ the rest. There are $\binom{17}{2}$ ways to choose where the markers will go, and hence $\binom{17}{2}$ ways to distribute $15$ toys among $3$ kids, where some kid(s) may get nothing.
The argument perhaps is not *multi*nomial, since only the special binomial case of the multinomial is being used. But it certainly belongs to the same family of ideas. In both cases, "nomial" ideas can be used.
share|improve this answer
add comment
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.979084 |
Upio Upio - 7 months ago 28
Scala Question
Is there a way to ensure a type is Serializable at compile time
I work with Spark often, and it would save me a lot of time if the compiler could ensure that a type is serializable.
Perhaps with a type class?
def foo[T: IsSerializable](t: T) = {
// do stuff requiring T to be serializable
}
It's not enough to constrain T <: Serializable. It could still fail at runtime. Unit tests are a good substitute, but you can still forget them, especially when working with big teams.
I think this is probably impossible to do at compile time without the types being sealed.
Answer
Yes, it is possible, but not in the way that you're hoping. Your type class IsSerializable could provide a mechanism to convert your T to a value of a type which is guaranteed to be Serializable and back again,
trait IsSerializable[T] {
def toSerializable(t: T): String
def fromSerializable(s: String): Option[T]
}
But, of course, this is just an alternative type class based serialization mechanism in it's own right, making the use of JVM serialization redundant.
Your best course of action would be to lobby Spark to support type class based serialization directly.
|
__label__pos
| 0.963248 |
Returning Inf of specific type
question
#1
I would like for a function to return Inf in some cases. However, the type of Inf should depend on the floating type of the function arguments. That is, I would like the following
function f(x::R) where {R <: Real}
if # some condition
return Inf
end
end
to return Inf16 if R == Float16, Inf32 if R == Float32, Inf64 if R == Float64. However, there seem not to be a way of constructing Inf of a given type R <: Real, unlike e.g. zero. Is there a way to do this?
#2
You can write R(Inf) or convert(R, Inf).
#3
R(Inf) doesn’t work for integers. Consider restricting R to AbstractFloat
function f(x::R) where {R <: AbstractFloat}
if # some condition
return R(Inf)
end
end
#4
Right, depending on what you want you may want to restrict the method or call typemax which will be a typed infinity for float types.
#5
You can also use
function f(x)
if stuff
return oftype(x, Inf)
end
...
end
which should lead to equivalent code, but is more compact if you don’t need R.
#6
Wow, thanks a lot, all viable solutions. I marked Stefan’s as solution just because it is closest to what I was trying to do. But it’s all stuff to consider. Thanks again.
|
__label__pos
| 0.661096 |
Golang
关注公众号 jb51net
关闭
首页 > 脚本专栏 > Golang > Go目录结构依赖注入wire
如何组织Go代码目录结构依赖注入wire使用解析
作者:仁扬
这篇文章主要为大家介绍了如何组织Go代码目录结构依赖注入wire使用解析,有需要的朋友可以借鉴参考下,希望能够有所帮助,祝大家多多进步,早日升职加薪
背景
对于大多数 Gopher 来说,编写 Go 程序会直接在目录建立 main.go,xxx.go,yyy.go……
不是说不好,对于小型工程来说,简单反而简洁明了,我也提倡小工程没必要整一些花里胡哨的东西。
毕竟 Go 语言作为现代微服务的开发新宠,各个方面都比较自由,没有很多约束。我想,这也是它充满活力的原因。
对于大型工程而言,或者团队协作中,没有明确的规范,只会使得项目越来越凌乱……
因为每个人的心中对代码的管理、组织,对业务的理解不完全是一致的。
我参考了 非官网社区的规范 以及公司的规范,谈谈平时是怎么组织的,希望我的理解,对大家有所帮助。
目录结构示例
.
├── api 路由与服务挂接
├── cmd 程序入口,可以有多个程序
│ └── server
│ ├── inject 自动生成依赖注入代码
│ └── main.go
├── config 配置相关文件夹
├── internal 程序内部逻辑
│ ├── database
│ │ ├── redis.go
│ │ └── mysql.go
│ ├── dao 数据库操作接口/实现
│ │ ├── dao_impls
│ │ │ └── user_impls.go
│ │ └── user.go 用户 DAO 接口
│ ├── svc_impls 服务接口实现
│ │ ├── svc_auth
│ │ └── svc_user
│ └── sdks 外部 SDK 依赖
└── service 服务接口定义
├── auth.go 认证服务定义
└── user.go 用户服务定义
面向接口编程
正如你所看到的,我的目录结构将接口和实现分开存放了。
根据依赖倒置原则(Dependence Inversion Principle),对象应依赖接口,而不是依赖实现。
依赖接口带来的好处有很多(当然缺点就是你要多写些代码):
比如我有个 Deployment 常驻进程管理服务,我是这样定义的:
type Service struct {
DB isql.GormSQL
DaoGroup dao.Group
DaoDeployment dao.Deployment
DaoDeploymentStates dao.DeploymentState
ProcessManager sdks.ProcessManager
ServerManager sdks.ServerManager
ServerSelector sdks.ServerSelector
}
该 struct 的成员都是接口。
目前 dao.* 都是在 MySQL 里面,但不排除哪天,我会把 dao.DeploymentState 放到 Redis 存储,此时只需重新实现 CURD 四个借口即可。
因为进程的状态是频繁更新的,数据量大的时候,放 MySQL 不太合适。
我们再看看 ProcessManager,它也是一个 interface:
type ProcessManager interface {
StartProcess(ctx context.Context, serverIP string, params ProcessCmdArgs) (code, pid int, err error)
CheckProcess(ctx context.Context, serverIP string, pid int) (err error)
InfoProcess(ctx context.Context, serverIP string, pid int) (info jobExecutor.ProcessInfoResponse, err error)
KillProcess(ctx context.Context, serverIP string, pid int) (err error)
IsProcessNotRunningError(err error) bool
}
我编码的过程中,只要先想好每个模块的入参和出参,ProcessManager 到底要长什么样,我到时候再写!
本地测试时,我也可以写个 mock 版的 ProcessManager,生产的时候是另一个实现,如:
func NewProcessManager(config sdks.ProcessManagerConfig) sdks.ProcessManager {
config.Default()
if config.IsDevelopment() {
return &ProcessManagerMock{config: config}
}
return &ProcessManager{config: config}
}
确实是要多写点代码,但是你习惯了之后,你肯定会喜欢上这种方式。
如果你眼尖,你会发现 NewProcessManager 也是依赖倒置的!它依赖 sdks.ProcessManagerConfig 配置:
func GetProcessManagerConfig() sdks.ProcessManagerConfig {
return GetAcmConfig().ProcessManagerConfig
}
而 GetProcessManagerConfig 又依赖 AcmConfig 配置:
func GetAcmConfig() AcmConfig {
once.Do(func() {
err := cfgLoader.Load(&acmCfg, ...)
if err != nil {
panic(err)
}
})
return acmCfg
}
也就是说,程序启动时候,可以初始化一个应用配置,有了应用配置,就有了进程管理器,有了进程管理器,就有了常驻进程管理服务……
这个时候你会发现,自己去组织这颗依赖树是非常痛苦的,此时我们可以借助 Google 的 wire 依赖注入代码生成器,帮我们把这些琐事做好。
wire
我以前写 PHP 的时候,主要是使用 Laravel 框架。
wire 和这类框架不同,它的定位是代码生成,也就是说在编译的时候,就已经把程序的依赖处理好了。
Laravel 的依赖注入,在 Go 的世界里对应的是 Uber 的 dig 和 Facebook 的 inject,都是使用 反射 机制实现依赖注入的。
在我看来,我更喜欢 wire,因为很多东西到了运行时,你都不知道具体是啥依赖……
基于代码生成的 wire 对 IDE 十分友好,容易调试。
要想使用 wire,得先理解 Provider 和 Injector:
Provider: a function that can produce a value. These functions are ordinary Go code.
Injector: a function that calls providers in dependency order. With Wire, you write the injector’s signature, then Wire generates the function’s body.
Provider 是一个可以产生值的函数——也就是我们常说的构造函数,上面的 NewProcessManager 就是 Provider。
Injector 可以理解为,当很多个 Provider 组装在一起的时候,可以得到一个管理对象,这个是我们定义的。
比如我有个 func NewApplicaion() *Applicaion 函数,
它依赖了 A、B、C,
而 C 又依赖了我的 Service,
Service 依赖了 DAO、SDK,
wire 就会自动把 *Applicaion 需要 New 的对象都列举出来,
先 NewDao,
然后 NewSDK,
再 NewService,
再 NewC,
最后得到 *Applicaion 返回给我们。
此时,NewApplicaion 就是 Injector,不知道这样描述能不能听懂!
实在没明白的,可以看下代码,这些不是手打的,而是 wire 自动生成的哦~
func InitializeApplication() (*app.Application, func(), error) {
extend := app.Extend{}
engine := app.InitGinServer()
wrsqlConfig := config.GetMysqlConfig()
gormSQL, cleanup, err := database.InitSql(wrsqlConfig)
if err != nil {
return nil, nil, err
}
daoImpl := &dao_group.DaoImpl{}
cmdbConfig := config.GetCmdbConfig()
rawClient, cleanup2 := http_raw_client_impls.NewHttpRawClient()
cmdbClient, err := cmdb_client_impls.NewCmdbCli(cmdbConfig, rawClient)
if err != nil {
cleanup2()
cleanup()
return nil, nil, err
}
serverManagerConfig := config.GetServerManagerConfig()
jobExecutorClientFactoryServer := job_executor_client_factory_server_impls.NewJobExecutorClientFactoryServer(serverManagerConfig)
serverManager := server_manager_impls.NewServerManager(gormSQL, daoImpl, cmdbClient, serverManagerConfig, jobExecutorClientFactoryServer)
service := &svc_cmdb.Service{
ServerManager: serverManager,
}
svc_groupService := &svc_group.Service{
DB: gormSQL,
DaoGroup: daoImpl,
ServerManager: serverManager,
}
dao_deploymentDaoImpl := &dao_deployment.DaoImpl{}
dao_deployment_stateDaoImpl := &dao_deployment_state.DaoImpl{}
processManagerConfig := config.GetProcessManagerConfig()
jobExecutorClientFactoryProcess := job_executor_client_factory_process_impls.NewJobExecutorClientFactoryProcess(serverManagerConfig)
jobExecutorClientFactoryJob := job_executor_client_factory_job_impls.NewJobExecutorClientFactoryJob(serverManagerConfig)
processManager := process_manager_impls.NewProcessManager(processManagerConfig, jobExecutorClientFactoryProcess, jobExecutorClientFactoryJob)
serverSelector := server_selector_impls.NewMultiZonesSelector()
svc_deploymentService := &svc_deployment.Service{
DB: gormSQL,
DaoGroup: daoImpl,
DaoDeployment: dao_deploymentDaoImpl,
DaoDeploymentStates: dao_deployment_stateDaoImpl,
ProcessManager: processManager,
ServerManager: serverManager,
ServerSelector: serverSelector,
}
svc_deployment_stateService := &svc_deployment_state.Service{
DB: gormSQL,
ProcessManager: processManager,
DaoDeployment: dao_deploymentDaoImpl,
DaoDeploymentState: dao_deployment_stateDaoImpl,
JobExecutorClientFactoryProcess: jobExecutorClientFactoryProcess,
}
authAdminClientConfig := config.GetAuthAdminConfig()
authAdminClient := auth_admin_client_impls.NewAuthAdminClient(authAdminClientConfig, rawClient)
redisConfig := config.GetRedisConfig()
redis, cleanup3, err := database.InitRedis(redisConfig)
if err != nil {
cleanup2()
cleanup()
return nil, nil, err
}
svc_authService := &svc_auth.Service{
AuthAdminClient: authAdminClient,
Redis: redis,
}
dao_managersDaoImpl := &dao_managers.DaoImpl{}
kserverConfig := config.GetServerConfig()
svc_heartbeatService := &svc_heartbeat.Service{
DB: gormSQL,
DaoManagers: dao_managersDaoImpl,
ServerConfig: kserverConfig,
JobExecutorClientFactoryServer: jobExecutorClientFactoryServer,
}
portalClientConfig := config.GetPortalClientConfig()
portalClient := portal_client_impls.NewPortalClient(portalClientConfig, rawClient)
authConfig := config.GetAuthConfig()
svc_portalService := &svc_portal.Service{
PortalClient: portalClient,
AuthConfig: authConfig,
Auth: svc_authService,
}
apiService := &api.Service{
CMDB: service,
Group: svc_groupService,
Deployment: svc_deploymentService,
DeploymentState: svc_deployment_stateService,
Auth: svc_authService,
Heartbeat: svc_heartbeatService,
Portal: svc_portalService,
}
ginSvcHandler := app.InitSvcHandler()
grpcReportTracerConfig := config.GetTracerConfig()
configuration := config.GetJaegerTracerConfig()
tracer, cleanup4, err := pkgs.InitTracer(grpcReportTracerConfig, configuration)
if err != nil {
cleanup3()
cleanup2()
cleanup()
return nil, nil, err
}
gatewayConfig := config.GetMetricsGatewayConfig()
gatewayDaemon, cleanup5 := pkgs.InitGateway(gatewayConfig)
application := app.NewApplication(extend, engine, apiService, ginSvcHandler, kserverConfig, tracer, gatewayDaemon)
return application, func() {
cleanup5()
cleanup4()
cleanup3()
cleanup2()
cleanup()
}, nil
}
wire 怎么用倒是不难,推荐大家使用 Provider Set 组合你的依赖。
可以看下面的例子,新建一个 wire.gen.go 文件,注意开启 wireinject 标签(wire 会识别该标签并组装依赖):
//go:build wireinject
// +build wireinject
package inject
import (
"github.com/google/wire"
)
func InitializeApplication() (*app.Application, func(), error) {
panic(wire.Build(Sets))
}
func InitializeWorker() (*worker.Worker, func(), error) {
panic(wire.Build(Sets))
}
InitializeApplication:这个就是 Injector 了,表示我最终想要 *app.Application,并且需要一个 func(),用于程序退出的时候释放资源,如果中间出现了问题,那就返回 error 给我。
wire.Build(Sets) :Sets 是一个依赖的集合,Sets 里面可以套 Sets:
var Sets = wire.NewSet(
ConfigSet,
DaoSet,
SdksSet,
ServiceSet,
)
var ServiceSet = wire.NewSet(
// ...
wire.Struct(new(svc_deployment.Service), "*"),
wire.Bind(new(service.Deployment), new(*svc_deployment.Service)),
wire.Struct(new(svc_group.Service), "*"),
wire.Bind(new(service.Group), new(*svc_group.Service)),
)
注:wire.Struct 和 wire.Bind 的用法看文档就可以了,有点像 Laravel 的接口绑定实现。
此时我们再执行 wire 就会生成一个 wire_gen.go 文件,它包含 !wireinject 标签,表示会被 wire 忽略,因为是 wire 生产出来的!
//go:build !wireinject
// +build !wireinject
package inject
func InitializeApplication() (*app.Application, func(), error) {
// 内容就是我上面贴的代码!
}
感谢公司的大神带飞,好记性不如烂笔头,学到了知识赶紧记下来!
以上就是如何组织Go代码目录结构依赖注入wire使用解析的详细内容,更多关于Go目录结构依赖注入wire的资料请关注脚本之家其它相关文章!
您可能感兴趣的文章:
阅读全文
|
__label__pos
| 0.904048 |
kmizuの日記
プログラミングや形式言語に関係のあることを書いたり書かなかったり。
Nemerleでコンパイル時に迷路を解いてみた
*1Nemerleのマクロは非常に強力で、コンパイル時でも実行時に行えるあらゆる事(計算はもちろんのこと、入出力、ネットワークIO、GUIなど)を行えるので、それを利用してコンパイル時に例の迷路を解くマクロを書いてみた。このマクロは、入力として迷路を表現してテキストを受け取り、迷路を解くのに成功した場合は、経路を書きこんだ迷路の文字列を結果として返し、解けない迷路の場合は、This maze cannot be solvedと表示し、コンパイルエラーにする。
using System;
using System.Console;
using System.IO.File;
using Nemerle.Collections;
using Nemerle.Imperative;
using Nemerle.Compiler;
macro SolveMaze(inputFile : string){
def maze = ReadAllLines(inputFile);
def h = maze.Length;
def w = maze[0].Length;
def q = Queue();
def v = array(h);
def d = [(1, 0), (-1, 0), (0, 1), (0, -1)];
for(mutable i = 0; i < h; i = i + 1) {
v[i] = array(w);
}
mutable s = None();
for(mutable y = 0; y < h; y = y + 1) {
for(mutable x = 0; x < w; x = x + 1) {
when(maze[y][x] == 'S') s = Some(x, y);
}
}
match(s) {
| None => Message.Error("S is not found"); <[ ("" : string) ]>
| Some((sx, sy)) =>
q.Add([(sx, sy)]);
mutable path = null;
def Loop() {
when(q.IsEmpty) Message.Error("This maze cannot be solved");
path = q.Take();
def (x, y) = path.Head;
when(maze[y][x] == 'G') return;
when((!v[y][x]) && maze[y][x] != '*') {
v[y][x] = true;
d.Iter(fun(_){
| (dx, dy) => q.Add((x + dx, y + dy)::path);
});
}
Loop();
}
Loop();
def result = System.Text.StringBuilder();
for(mutable y = 0; y < h; y = y + 1) {
for(mutable x = 0; x < w; x = x + 1) {
def e = maze[y][x];
if(e == 'S' || e == 'G' || !path.Contains((x, y))) {
Write(e);
_ = result.Append(e);
}else {
Write("$");
_ = result.Append("$");
}
}
WriteLine("");
_ = result.Append("\n");
}
WriteLine("compiled successfully");
<[ $(result.ToString(): string) ]>
}
}
解法は単なるBFS+αであり、特筆すべき点は無いが、ローカル変数の型が一切宣言されていない点に注意して欲しい。ローカル変数の宣言の際に、型宣言を省略できる事自体は、元々型推論が強力な静的型付けの関数型言語(ML,OCaml,Haskellなど)にとどまらず、静的型付けOOP言語(C#,Scala,Nice,Fantomなど)でもよく見られるようになって来ているが、Nemerleの場合、それに加えて、
def v = array(h);
のように、宣言時にはvの具体的な型がわからない(配列であることはわかるが、何の配列であるかは宣言時にはわからない)場合でも、多くの場合型宣言が必要無い*2
このマクロをコンパイルするには、以下のようにコマンドラインから入力する(maze.nはこのマクロを含むファイルのファイル名)。
ncc -r:Nemerle.Macros.dll -t:dll maze.n -o maze.dll
このマクロを使うには、たとえば、次のようなプログラムを書き、
using System.Console;
WriteLine(SolveMaze("input.txt"));
以下のようにコマンドラインから入力する。
>ncc -r:maze.dll use_maze.n -o solve_maze.exe
すると、迷路が解けた場合には、以下のように表示され、コンパイルが成功する。
**************************
*S* * $$$$ *
*$* *$$* $************* *
*$* $$* $$************ *
*$$$$* $$$$$ *
**************$***********
* $$$$$$$$$$$$$ *
**$***********************
* $$$$$* $$$$$$$$$$$$$G *
* * $$$$*********** * *
* * ******* * *
* * *
**************************
compiled successfully
さらに、できたsolve_maze.exeを実行してみると、先ほどの迷路の解がそのまま表示される。
**************************
*S* * $$$$ *
*$* *$$* $************* *
*$* $$* $$************ *
*$$$$* $$$$$ *
**************$***********
* $$$$$$$$$$$$$ *
**$***********************
* $$$$$* $$$$$$$$$$$$$G *
* * $$$$*********** * *
* * ******* * *
* * *
**************************
input.txtの中身が、解けない迷路、たとえば以下のような迷路の場合、
**************************
*S* * *
*** * * ************* *
* * * ************ *
* * *
************** ***********
* *
** ***********************
* * G *
* * *********** * *
* * ******* * *
* * *
**************************
以下のように表示され、コンパイルは失敗する。
use_maze.n:2:11:2:20: e[01;31merrore[0m: This maze cannot be solved
confused by earlier errors bailing out
*1:一応、今でもダウンロードできるが、もはやアップデートされることは無いだろうという意味で少し早計だったようだ。今は別の人がメンテしてるみたいで、ひょっとしたらまだこれから更新されるのかもしれない。
*2:変数の宣言時にはわからない型情報は、変数が実際に使われている箇所の情報を基に推論される
|
__label__pos
| 0.965977 |
Share
Explore BrainMass
Information security and cyber-victimization
Q1: Define Information Security, list the four types of security that contribute to Information Security, and provide an example of each. Do you have any personal experience with any of the four types of security? Which, if any, are the most important in your opinion?
Q2: Describe cyber-victimization and why it is growing. What characteristics of the Internet make finding victims easier than in the physical world? How may you protect your organization from cyber-victimization? What steps may you take to protect yourself?
Solution Preview
Q1: Define Information Security, http://www.businessdictionary.com/definition/information-security.html
Safe-guarding an organization's data from unauthorized access or modification to ensure its availability, confidentiality, and integrity.
list the four types of security that contribute to Information Security, and provide an example of each.
I am not finding this information on any resources which spell out exactly four kinds - this may be from the author of your textbook or your professor. Essentially, what the resources I am reading are saying is that information security involves preventing unauthorized disclosure of data, preventing misuse of data, preventing modification of data, and preventing loss of access to data (maybe by natural disaster, too?)
Do you have any personal experience with any of the four types of security? I have on numerous occasions been informed (by bank, credit card company ...
Solution Summary
What is information security, and what is cyber victimization, how does it happen, and how to prevent loss of information security and prevent becoming a victim online.
$2.19
|
__label__pos
| 0.935701 |
Zimbra CLI Commands: User Accounts Management | Zimbra
In Zimbra, you can manage accounts by creating them and adding or changing features both from the administration console and from the Command Line Tools (CLI).
In this article, in particular, we’re going to see how to manage user accounts through CLI commands, using zmprov. For every command, we will show you the extended and short-form, the syntax to be used and an example to better help you understand how it works.
All commands are intended to be executed logging in as a Zimbra user, with the command su - zimbra
Account Provisioning Commands
Create Account
To create an account the command to use is CreateAccount or ca. Below is the syntax:
zmprov CreateAccount {user@yourdomain} {passwd} [attribute1 value1 etc]
Example:
zmprov CreateAccount [email protected] pwd123 displayName JBrown
Delete Account
To delete an account, you have to use the following command: DeleteAccount (da). Below is the syntax:
zmprov DeleteAccount {user@yourdomain|id|adminName}
Example:
zmprov DeleteAccount [email protected]
Get Account Membership
In order to get account membership, we are going to use this command: GetAccountMembership (gam). Below is the syntax:
zmprov GetAccountMembership {user@yourdomain|id}
Get Account
To get an account, you have to use the following command: GetAccount or ga. Below is the syntax:
zmprov GetAccount {user@yourdomain|id|adminName}
Example:
zmprov GetAccount [email protected]
Get All Accounts
To get all accounts, you have to use the following command: GetAllAccounts or gaa. Below is the syntax:
zmprov GetAllAccounts [-v] [{yourdomain}]
Example:
zmprov GetAllAccounts -v yourdomain.com
The -v attribute, stands for verbose. Verbose mode dumps full exception stack trace
Get All Admin Accounts
To get all admin accounts, you have to use a similar command to the above one: GetAllAdminAccounts (gaaa). Below is the syntax:
zmprov GetAllAdminAccounts
Modify Account
If you want to modify an account the command you need to use is the following one: ModifyAccount (ma). Below is the syntax:
zmprov ModifyAccount {user@yourdomain|id|adminName} [attribute1 value1 etc]
Example:
zmprov ModifyAccount [email protected] zimbraAccountStatus maintenance
Rename Account
To rename an account the command to be used is RenameAccount or ra. Below is the syntax:
zmprov RenameAccount {user@yourdomain|id} {newusername@yourdomain}
Example:
zmprov RenameAccount [email protected] [email protected]
Set Password
If you want to set a password to your account, you have to use the following command: SetPassword (sp). Below is the syntax:
zmprov SetPassword {user@yourdomain|id|adminName} {passwd}
Example:
zmprov SetPassword [email protected] pwd123
Account Alias
You can add or remove an alias account.
To add an account alias, the command to be used is AddAccountAlias or aaa, and the syntax is:
zmprov AddAccountAlias {user@yourdomain|id|adminName} {aliasname@yourdomain}
Example:
zmprov AddAccountAlias [email protected] [email protected]
To remove an account alias, the command to be used is RemoveAccountAlias or raa, and the syntax is:
zmprov RemoveAccountAlias {user@yourdomain|id|adminName} {aliasname@yourdomain}
Example:
zmprov RemoveAccountAlias [email protected] [email protected]
Search Accounts
You can search accounts using the following command: SearchAccounts or sa. Below is the syntax:
zmprov SearchAccounts [-v] {ldap-query} [limit] [offset] [sortBy {attribute}]
Set Account Class of Service
To set an account COS, all you have to do is to use the command SetAccountCOS or sac with the following syntax:
zmprov SetAccountCOS {user@yourdomain|id|adminName} {cos-name|cos-id}
Example:
zmprov SetAccountCOS [email protected] SampleRole
Other Commands Related to Account Management
There are some other commands you can use when you manage accounts on your server. Here is a shortlist of the main categories of them. Click on the name of each one for a complete guide on specific commands:
Download Zextras Suite for Zimbra OSE
Post your comment
This site uses Akismet to reduce spam. Learn how your comment data is processed.
Zextras Suite 3.1.9 | Blog
Zimbra CLI Commands: Distribution List | Zimbra
|
__label__pos
| 0.979498 |
Unleash the Power of Node.js and Fastify: Build a Rest API and Connect it to Open AI Chat GPT
Unlock the potential of Node.js and Fastify as we delve into building a high-performance REST API. But that's not all – we're taking it a step further by seamlessly integrating OpenAI's Chat GPT.
Unleash the Power of Node.js and Fastify: Build a Rest API and Connect it to Open AI Chat GPT
Photo by Mojahid Mottakin / Unsplash
In the ever-evolving landscape of web development, Node.js and Fastify stand as formidable pillars of efficiency and performance. Node.js, known for its speed and versatility, and Fastify, recognized for its lightning-fast web framework, together form a dynamic duo for building robust REST APIs. But what if we told you that there's a way to take your API to the next level, infusing it with the capabilities of OpenAI's Chat GPT?
In this comprehensive guide, we embark on a journey through the realms of Node.js, Fastify, and OpenAI's Chat GPT, unlocking their full potential to create an API experience like no other.
Node.js: Fueling the Backend Revolution
Node.js has been a game-changer in backend development, enabling developers to use JavaScript on the server-side. Its non-blocking, event-driven architecture makes it a natural choice for building highly scalable applications. We'll dive deep into how Node.js empowers our project with speed, efficiency, and an extensive library ecosystem.
Fastify: The Swift and Secure Web Framework
Fastify, a web framework for Node.js, is designed for blazing-fast performance. It excels in handling high loads while maintaining a minimalistic and developer-friendly API. We'll explore how Fastify's features and plugins streamline the creation of our RESTful API, ensuring it's both performant and secure.
OpenAI Chat GPT: The AI Language Model Revolution
OpenAI's Chat GPT represents a breakthrough in natural language understanding. With its ability to generate human-like text, it opens up exciting possibilities for conversational interfaces and content generation. We'll integrate Chat GPT seamlessly into our API, creating a dynamic and responsive user experience.
Creating the project
To start the project, we need to create a new Node.js project and install Fastify to our project dependencies, create a new folder, and:
npm init -y
#or
yarn init -y
Now install the Fastify dependency:
npm install fastify
#or
yarn add fastify
Add dotenv too:
npm install dotenv
#or
yarn add dotenv
Open the project in your code editor:
Project in ZED code editor.
It's time to create the folder structure and some files to get our project running, I like this structure:
Folders and files
In your package.json create an entry for "type": "module", this will allow the project to use import/export syntax. In main entry, change to server.js.
Let's create our server module, open app.js and write a code that imports the routes file and setup the Fastify:
import Fastify from 'fastify';
import Routes from './routes.js';
const fastify = Fastify({
logger: true,
});
Routes({ fastify });
export default fastify;
In the Fastify options we will enable only the logger.
In the routes.js temporarily we can register a new route and redirect to the controller:
import * as GPTController from './controllers/GPTController.js';
const gptRoutes = (fastify, _, done) => {
fastify.get('/', GPTController.search);
done();
}
export default async function Routes({ fastify }) {
fastify.register(gptRoutes, { prefix: 'v1/gpt' });
}
This code imports the GPTController (we will create it) and use the search method on fastify.get, this line means that when the user hits the .../gpt using the GET HTTP verb we will call the search function inside the controller.
We got the Routes function too, this function just registers the routes on Fastify and permits the creation of route prefixes (and other settings), this is very useful for grouping the resources by context or creating versions for each route group.
Open the server.js file and let's see the code:
import 'dotenv/config'
import fastify from './src/app.js';
try {
const port = parseInt(process.env.PORT, 10) || 7777;
await fastify.listen({ port });
} catch (err) {
throw err
}
Here we just import dotenv and execute the config setup, import the app.js file, and start the Fastify server with listen a function passing the port as a parameter. We get the port number from the Env.
Create the controller file inside controllers folder with name GPTController.js and put a small "Hello World":
export const search = (request) => {
return {message: "Hello World"}
}
Just a small function that returns an object with message key and as a value; "Hello World".
Fastify automatically serializes the return value of the controller to a JSON and we can speed up this process (yes, serializing is slow) using the response key on the schema option for each route, check it here. It is not necessary but is good to know.
Time to run the server and see all the changes that we made:
node server.js
Fastify server running
Some logs will be printed on the terminal, this is because we enable the logger, and every request will output some logs too.
Open the server on an HTTP client (I use HTTPie) with the new route and we can see the "Hello World" message:
Server response for GPT route
Okay, the main application is done! Go to check the OpenAI GPT part.
|
__label__pos
| 0.954961 |
Thursday, September 28, 2023
HomeIoTConstruct your pool water temperature monitoring answer with AWS
Construct your pool water temperature monitoring answer with AWS
[ad_1]
I live in Toulouse, in the south of France, where the climate is classified as humid subtropical climate (Cfa in the Köppen climate classification). This is why swimming pools are so common here! My household is no exception. But as a geek, I also wanted to monitor the temperature of my swimming pool, consult real-time indicators, and view history.
Let’s have a deep dive (pun intended) together: in this blog post, I demonstrate how to put AWS services together to cost-effectively build a water temperature monitoring solution. By following this demo, you will learn useful tools not just for building your own water temperature monitoring solution, but other creative monitoring solutions as well.
Prerequisites
I had a M5StickC with an NCIR hat, and an AWS account with AWS IoT Core, Amazon Timestream and Amazon Managed Service for Grafana (Preview), which covered everything I needed to get started!
Components overview
M5StickC is a mini M5Stack, powered by ESP32. It is a portable, easy-to-use, open source, IoT development board. M5stickC is one of the core devices in the M5Stack product series. It is built in a continuously growing hardware and software ecosystem. It has many compatible modules and units, as well as the open source and engineering communities that will help maximize your benefits at every step of the development process.
NCIR hat is an M5StickC-compatible infrared sensor. This HAT module integrates MLX90614 which can be used to measure the surface temperature of a human body or other object. Since this sensor measures infrared light bouncing off of remote objects, it senses temperature without the need for physical contact.
AWS IoT Core lets you connect IoT devices to AWS without the need to provision or manage servers. AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT Core, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected.
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases.
Amazon Managed Service for Grafana (AMG) is a fully managed service that is developed together with Grafana Labs and based on open source Grafana. Enhanced with enterprise capabilities, AMG makes it easy for you to visualize and analyze your operational data at scale. Grafana is a popular open source analytics platform that enables you to query, visualize, alert on and understand your metrics no matter where they are stored.
High-level architecture
The following diagram shows the flow of information, starting from the M5stickC, through AWS IoT Core and then Timestream, to the end users viewing the dashboard in AMG.
Architecture diagram showing data flow from M5Stick, IoT Core, Timestream, AMG and end users.
AWS IoT Core setup
We will start with the following steps:
• Policy creation
• Thing creation, including:
• Certificate creation
• Policy attachment
To create a policy
An AWS IoT Core policy allows you to control access to AWS IoT Core operations that allow you to connect to the AWS IoT Core message bus as well as send and receive MQTT messages.
1. Open the AWS Management Console of your AWS account.
2. Navigate to the AWS IoT Core service, then open Secure > Policies section.
3. Select Create.
4. Enter the following values:
• Name: TempCheckerPolicy
• Statements > Advanced
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot:Publish",
"Resource": "arn:aws:iot:<region>:<account-id>:topic/TempCheckerTopic"
},
{
"Effect": "Allow",
"Action": "iot:Subscribe",
"Resource": "arn:aws:iot:<region>:<account-id>:topicfilter/TempCheckerTopic"
},
{
"Effect": "Allow",
"Action": "iot:Connect",
"Resource": "*"
}
]
}
Screenshot of the action selection.
5. Select Create.
To create a thing
1. In the AWS Management Console, open AWS IoT Core.
2. In the Manage > Things section, select Create.
3. Select Create a single thing.
Screenshot of the AWS IoT thing creation start.
4. Create a thing type with the following information:
• Name: M5Stick
• Description: An M5StickC with NCIR hat.
Screenshot of the thing type creation form.
5. Select Create thing type.
6. On the next page, fill in the thing creation form with the following:
• Name: TempChecker
• Thing type: select M5Stick
7. Select Next.
Screenshot of the form to add your device to the thing registry.
To add a certificate for your thing and attach a policy
1. In the “One-click certificate creation (recommended)” panel, select Create certificate.
Screenshot of the certification creation start.
The certificate is immediately created.
2. Download the certificate, along with public and private keys.
3. Select Attach a policy.
Screenshot of the certificate created.
4. Select the TempCheckerPolicy policy, then select Register Thing.
Screenshot of the thing registration completion.
M5Stick setup
Now that AWS IoT Core is ready to receive IoT (MQTT) messages, let’s take care of the thing itself.
The M5Stick supports multiple development platforms: UIFlowArduino, and FreeRTOS. In this use case, I used UIFlow visual programming capabilities (using Blockly+Python) along with its AWS IoT built-in library to easily build and deploy my business logic.
Note: You can find more information here about how to install UIFlow IDE and how to “burn” UIFlow firmware on the M5StickC.
We need to build and deploy a program on the M5Stick that will run continuously. It contains all the necessary instructions to take the temperature sensor data and send it to AWS IoT Core. The algorithm is simple:
• Initiate the communication with AWS IoT Core.
• Initialize the M5StickC internal clock with NTP.
• Start a loop that repeats every second with the following:
• Get the temperature from the NCIR hat.
• Publish a JSON-formatted message containing the temperature and the current timestamp.
I added a visual indication of the temperature on the LCD screen, as well as LED signals with publishing MQTT messages.
As Werner Vogels, AWS CTO, says, “everything fails all the time”, so to reduce errors, I added try-catch components to debug and recover from errors.
In the AWS IoT block, use the private key and certificate files you just downloaded to set the keyFile and certFile values.
Screenshot of the UIFlow algorithm.
UIFlow translates the blocks into micropython.
from m5stack import *
from m5ui import *
from uiflow import *
from IoTcloud.AWS import AWS
import ntptime
import hat
import json
import time
import hat
setScreenColor(0x111111)
hat_ncir5 = hat.get(hat.NCIR)
iterator = None
temperature = None
label0 = M5TextBox(5, 72, "1", lcd.FONT_Default, 0xFFFFFF, rotate=0)
from numbers import Number
try :
aws = AWS(things_name="TempChecker", host="<endpoint>.iot.eu-west-1.amazonaws.com", port=8883, keepalive=300, cert_file_path="/flash/res/c47c10a25d-certificate.pem", private_key_path="")
aws.start()
try :
ntp = ntptime.client(host="pool.ntp.org", timezone=2)
iterator = 1
while True:
temperature = hat_ncir5.temperature
M5Led.on()
try :
aws.publish(str('TempCheckerTopic'),str((json.dumps(({'Temperature':temperature,'Iterator':iterator,'Timestamp':(ntp.getTimestamp())})))))
iterator = (iterator if isinstance(iterator, Number) else 0) + 1
label0.setText(str(temperature))
pass
except:
label0.setColor(0xff0000)
label0.setText('IoT error')
M5Led.off()
wait(1)
pass
except:
label0.setColor(0xff0000)
label0.setText('NTP error')
pass
except:
label0.setColor(0xff0000)
label0.setText('AWS error')
Amazon Timestream setup
Now we configure the storage for our temperature data. Amazon offers the broadest selection of purpose-built databases to support different use cases. In this case, the right tool for the right job is Timestream.
Our use case is clearly related to time series data. Timestream is the dedicated service to manipulate this type of data using SQL, with built-in time series functions for smoothing, approximation, and interpolation. Amazon Timestream also supports advanced aggregates, window functions, and complex data types such as arrays and rows. And Amazon Timestream is serverless – there are no servers to manage and no capacity to provision. More information in the documentation.
To create a database in Timestream
1. Open Timestream in the AWS Management Console.
2. Select Create database.
3. Enter the following information:
• Configuration: Standard database
• Name: TempCheckerDatabase
• Encryption: aws/timestream
4. Confirm by selecting Create database.
Screenshot of the database creation form.
To create a table
On the next page, we create our table.
1. Select your newly created database, open the Tables tab, and select Create table.
2. Set the following values:
• Table name: Temperature
• Data retention:
• Memory: 1 day
• Magnetic: 1 year
Screenshot of the table creation form.
AWS IoT Core destination setup
Our storage is ready to receive data from AWS IoT Core.
Let’s configure the IoT rule that triggers when our M5StickC sends data. This will run the action to insert data into Timestream.
1. Open AWS IoT Core in the AWS Management Console.
2. In the Act > Rules section, select Create, and then enter the following:
• Name: TempCheckerRule
• Description: Rule to handle temperature messages
• Rule query statement: SELECT Temperature FROM 'TempCheckerTopic'
Screenshot of the rule creation form.
3. In the “Set one or more actions” panel, select Add action. Select Write a message into a Timestream table, then Configure action.
Screenshot of the action selection.
4. Select the Timestream database and table we just created. Add the following dimension:
• Dimension Name: Device
• Dimension Value: M5stick
5. Next, we need to create an AWS IAM role to allow the service to access the database. Select Create role, and then enter the following:
• Name: TempCheckerDatabaseRole
Screenshot of the database role creation form.
6. Review your selections, and then confirm the action by selecting Add action.
Screenshot of the action creation form.
7. In the “Error action” panel, select Add action. Select Send message data to CloudWatch logs, select Configure action.
Screenshot of the error action selection.
8. Select Create a new resource to be redirected to Cloudwatch.
9. Create a log group named TempCheckerRuleErrors.
10. In the action configuration wizard, refresh the resources list and select the newly created log group.
11. We need to create an AWS IAM role to allow the service to access Cloudwatch. Select Create role, then enter the following name:
• Name: TempCheckerCloudwatchRole
Screenshot of the Cloudwatch role creation form.
12. Confirm the action by selecting Add action.
Screenshot of the action creation completion.
13. Confirm the rule creation by selecting Create rule.
Screenshot of the rule creation completion.
We now have a valid rule that feeds the Timestream database with the temperature data sent by the M5StickC.
Amazon Managed Service for Grafana setup
Next, let’s visualize this data.
Note: At the time of writing, AMG is still in preview.
1. Open the AMG console, then choose Create workspace and enter the following:
• Workspace name: TempCheckerWorkspace
2. Choose Next.
Screenshot of the workspace creation form.
You are prompted to enable AWS Single Sign-On (SSO) before you can begin managing it. If you have already performed these actions in your AWS account, you can skip this step.
Screenshot of the AWS SSO enablement form.
AMG integrates with AWS SSO so that you can easily assign users and groups from your existing user directory such as Active Directory, LDAP, or Okta within the Grafana workspace and single sign-on using your existing user ID and password. You can find more information in this blog post.
3. Select Create user.
4. Enter your email address, along with your first and last name, then confirm by selecting Create user.
Screenshot of the user creation form.
The AWS Organization and AWS SSO should be enabled in seconds. You will receive a few emails in parallel: one for AWS Organization setup validation, and one for for AWS SSO setup validation. Remember to check them out and complete the validation steps.
5. For permission type, use the default Service managed permissions, allowing AWS to manage IAM roles. This ensures that evolutions in AMG that require updates in IAM will be automatically propagated and service will not be interrupted. Select Next.
Screenshot of the authentication configuration form.
6. Because I built this for a personal project, I can use “Current account” to manage the authorizations in this AWS account. Complex organizations will want to leverage their AWS Organization Units.
7. To allow our workspace to access Timestream data source, select Amazon TimeStream, then select Next.
Screenshot of the managed permissions and data sources configuration form.
A warning panel should appear, stating “you must assign user(s) or user group(s) before they can access Grafana console.” To assign users, use the following steps:
8. Select Assign user.
9. Select the user you just created and confirm by selecting Assign user.
Screenshot of the users configuration form.
Once the Grafana workspace is created, the link will be provided on this page.
To configure a Grafana dashboard
1. Log in with AWS SSO.
2. On the welcome page, navigate to Create > Dashboard.
Screenshot of the Grafana welcome page with Create menu unfolded.
In Grafana, each dashboard contains one or more panels. The panel is the basic visualization building block. With the exception of a few special purpose panels, a panel is a visual representation of data over time. This can range from temperature fluctuations to the current server status to a list of logs or alerts. There are a wide variety of style and formatting options for each panel. Panels can be moved, rearranged, and resized.
We start by adding a panel for the real-time temperature display. For this, a gauge will give us a quick and colorful overview.
To configure a temperature gauge panel
1. Add a new panel and select Amazon Timestream as data source.
2. Enter the following query to retrieve the latest temperature value inserted in the database:
SELECT measure_value::double AS temperature
FROM "TempCheckerDatabase".Temperature
ORDER BY time DESC
LIMIT 1
Screenshot of the query configuration for the temperature gauge panel.
3. On the right, in Panel configuration, set the panel name to “Real time monitoring” and select the Gauge visualization.
Screenshot of the visualization type selection.
4. In the Field configuration, set the Min and Max options, as well as the relevant thresholds. I chose a progressive rainbow from blue to red as the temperature is expected to increase.
Screenshot of fields configuration form.
Screenshot of the thresholds configuration.
5. Select Save, and you will see your gauge.
Screenshot of the temperature gauge panel completely configured.
To configure a temperature history panel
The second panel is a temperature history panel which will retrieve the historical data, filtered by the period selected by the user in Grafana.
1. Add a panel and select Amazon Timestream as data source.
2. Enter the following statement in order to query the database with the relevant filters:
SELECT ROUND(AVG(measure_value::double), 2) AS avg_temperature, BIN(time, $__interval_ms) AS binned_timestamp
FROM "TempCheckerDatabase".Temperature
WHERE $__timeFilter
AND measure_value::double < 100
AND measure_value::double > 20
GROUP BY BIN(time, $__interval_ms)
ORDER BY BIN(time, $__interval_ms)
Screenshot of the query configuration for the temperature history panel.
3. On the right, in Panel configuration, set the Panel title to Trend and select the Graph visualization.
Screenshot of the visualization type selection.
4. Select Apply to finalize the second panel.
Screenshot of the temperature history panel completely configured.
Now we have created two panels in our Grafana dashboard: one that displays current temperature on a gauge and one that shows historical temperature on a graph.
Pricing
To understand the cost of this project using AWS’s pay-as-you-go services, we will use the following assumptions:
• The M5StickC is connected permanently to send 1 message per second.
• Every IoT message is written to the time series database, for around 40 bytes each.
• 1 Grafana Editor license active, as I am the only user of my personal dashboard.
Cost breakdown
(Note: Please check following pricing pages to ensure the current pricing)
Total cost
The total monthly cost for our complete solution is $3.42 + $1.64 + $9.00 = $14.06/month
Cleaning up
If you followed along with this solution, complete the following steps to avoid incurring unwanted charges to your AWS account.
AWS IoT Core
• In the Manage section, delete the Thing and Thing type.
• In the Secure section, remove the Policy and Certificate.
• In the Act section, clean up the Rule.
Amazon Timestream
• Delete the database, this will also delete the table.
Amazon Managed Service for Grafana
• Delete the whole workspace.
AWS IAM
• Delete the roles created along the way.
Amazon CloudWatch
• Delete the relevant Log groups.
Conclusion
In this post, I demonstrated how to set up a simple end-to-end solution to monitor the temperature of your swimming pool. The solution required minimal coding: only 2 SQL queries for Grafana. If you want to try it, you can mock the M5stickC by creating an AWS Lambda function in your AWS account with the following code. Give it an IAM role with access rights to AWS IoT Core.
import boto3
import datetime
import time
import math
import json
iot_client=boto3.client('iot-data')
topic = "TempCheckerTopic"
def lambda_handler(event, context):
i = 0
while i < 800:
i += 1
# Calculating seconds from midnight
now = datetime.datetime.now()
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
seconds = (now - midnight).seconds
# Calculating fake temperature value
temperature = round(math.sin(seconds/600) * 5 + 25, 2)
# Preparing payload
payload = json.dumps({"Iterator": i,
"Temperature": temperature,
"Timestamp": now.timestamp()})
# Publishing to IoT Core
iot_client.publish(topic=topic, payload=payload)
# Waiting for 1 second before next value
time.sleep(1)
return {
'statusCode': 200
}
Thanks for reading this blog post where I demonstrated how to use AWS IoT Core, Timestream, and AMG together to build a smart water temperature monitoring solution for my pool. I hope the demonstration and lessons learned inspire you to bring your innovative ideas to life! Learn more about Connected Home Solutions with AWS IoT.
About the author
Jérôme Gras is a Solutions Architect at AWS. He is passionate about helping industry customers build scalable, secure, and cost-effective applications to achieve their business goals. Outside of work, Jérôme enjoys DIY stuff, video games, and board games with his family.
[ad_2]
RELATED ARTICLES
LEAVE A REPLY
Please enter your comment!
Please enter your name here
Most Popular
Recent Comments
|
__label__pos
| 0.698538 |
Addition of Coordinates
If we have two points, what will happen if we add their coordinates? If point A has coordinates (1,2) and point B has coordinates (4,1), then A + B = (5,3). It seems that there is nothing special about these points, but let us look at their position in the Cartesian coordinates.
Notice that if we add another point on the origin, the four points form a parallelogram. Now the question is, is this always the case? Try plotting two points and add their coordinates several times and see if the four points form a parallelogram. What do you observe?
To test further our observation above, we use GeoGebra to plot the two points and automatically produce the ‘sum point.’
Instructions
1. Open GeoGebra
2. To plot the points, type the following equations in the input bar and then press the Enter/Return key on the keyboard after each equation: A = (1,2), B = (4,1) , C = A + B.
3. To create the fourth point D, click the Intersect Two Object tools, click the x-axis, and then click the y-axis.
4. To draw the quadrilateral, click the Polygon tool, and then click points D, A, C, B, D in that order.
5. Now move point A or B. Is the quadrilateral always a parallelogram? What conjecture can be made from your observations?
We now prove our conjecture above.
Theorem: If you add the coordinates of two points A and B, and construct another point C whose coordinates are the sum of the coordinates of points A and B, then three points form a parallelogram containing the origin as its fourth vertex.
Proof: Let the coordinates of A and B be (p,q) and (r,s) respectively. Then the coordinates of C = (p+r, q+s). Let D be the point at the origin. We have to show that ACBD is a parallelogram.
There are several ways to prove this, but it is sufficient to show that the opposite sides of the ACBD are parallel; that is, AD is parallel to BC and AC is parallel BD.
We now compute the slopes m of each side of the parallelogram.
m_{AD} = \displaystyle\frac{q-0}{p-0} = \frac{q}{p}
m_{BC} = \displaystyle\frac{(q+s)-s}{(p+r)-r} = \frac{q}{p}
m_{BD} = \displaystyle\frac{s-0}{r-0} = \frac{s}{r}
m_{AC} = \displaystyle\frac{(q+s)-q}{(p+r)-p} = \frac{s}{r}
As we can see, the slopes of AD and BC are equal, which means that they are parallel. Also, the slopes of BD and AC are equal, which means that they are also parallel. Therefore, ADBC is a parallelogram.
Note that the parallelogram method above can also represent addition of vectors.
Related Posts Plugin for WordPress, Blogger...
Leave a Reply
|
__label__pos
| 0.999773 |
Table of Contents
Search
1. Preface
2. Connectors and connections
3. Connection configuration
4. Connection properties
5. Swagger file generation for REST V2 connections
Connections
Connections
JWT bearer token authentication
JWT bearer token authentication
When you set up a REST V2 connection, you must configure the connection properties.
The following table describes the REST V2 connection properties when you use JWT bearer token authentication:
Connection property
Description
JWT Header
JWT header in JSON format.
Sample:
{
"alg":"RS256",
"kid":"xxyyzz"
}
You can configure
HS256
and
RS256
algorithms.
JWT Payload
JWT payload in JSON format.
Sample:
{
"iss":"abc",
"sub":"678",
"aud":"https://api.box.com/oauth2/token",
"box_sub_type":"enterprise",
"exp":"120"
,
"jti":"3ee9364e"
}
The expiry time represented as
exp
is the relative time in seconds. The expiry time is calculated in the UTC format from the token issuer time (
iat
).
When
iat
is defined in the payload and the expiry time is reached, mappings and Generate Access Token will fail. To generate a new access token, you must provide a valid
iat
in the payload.
If
iat
is not defined in the payload, the expiry time is calculated from the current timestamp.
To pass the expiry time as a string value, enclose the value with double quotes. For example:
"exp":"120"
,
To pass the expiry time as an integer value, do not enclose the value with double quotes. For example:
"exp":120
,
Authorization Server
Access token URL configured in your application.
Authorization Advanced Properties
Additional parameters to use with the access token URL. Parameters must be defined in the JSON format. For example:
[\{"Name":"client_id","Value":"abc"},\{"Name":"client_secret","Value":"abc"}]
TrustStore File Path
The absolute path of the truststore file that contains the TLS certificate to establish a one-way or two-way secure connection with the REST API. Specify a directory path that is available on each Secure Agent machine in the runtime environment.
You can also configure the truststore file name and password as a JVM option or import the certificate to the following directory:
<Secure Agent installation directory\jre\lib\security\cacerts
.
For the serverless runtime environment, specify the truststore file path in the serverless agent directory.
For example: /home/cldagnt/SystemAgent/serverless/configurations/ssl_store/<cert_name>.jks
TrustStore Password
The password for the truststore file that contains the SSL certificate.
You can also configure the truststore password as a JVM option.
KeyStore File Path
Mandatory. The absolute path of the keystore file that contains the keys and certificates required to establish a two-way secure communication with the REST API. Specify a directory path that is available on each Secure Agent machine in the runtime environment.
You can also configure the keystore file name and location as a JVM option or import the certificate to any directory.
For the serverless runtime environment, specify the keystore file path in the serverless agent directory.
For example: /home/cldagnt/SystemAgent/serverless/configurations/ssl_store/<cert_name>.jks
KeyStore Password
Mandatory. The password for the keystore file required for secure communication.
You can also configure the keystore password as a JVM option.
Private Key Alias
Mandatory. Alias name of the private key used to sign the JWT payload.
Private Key Password
Mandatory. The password for the keystore file required for secure communication. The private key password must be same as the keystore password.
Access Token
Enter the access token value or click
Generate Access Token
to populate the access token value.
To pass the generate access token call through a proxy server, you must configure an unauthenticated proxy server at the Secure Agent level. The REST V2 connection-level proxy configuration does not apply to the generate access token call.
Swagger File Path
The absolute path along with the file name or the hosted URL of the swagger specification file. The hosted URL must return the content of the file without prompting for further authentication and redirection.
If you provide the absolute path of the swagger specification file, the swagger specification file must be located on the machine that hosts the Secure Agent. The user must have the read permission for the folder and the specification file. Example:
C:\swagger\sampleSwagger.json
In a
streaming ingestion
task, use only a hosted URL of the swagger specification file as the swagger file path.
Proxy Type
Type of proxy. You can select one of the following options:
• No Proxy: Bypasses the proxy server configured at the agent or the connection level.
• Platform Proxy: Proxy configured at the agent level is considered.
• Custom Proxy: Proxy configured at the connection level is considered.
Proxy Configuration
The proxy configuration format:
<host>:<port>
You cannot configure an authenticated proxy server.
Advanced Fields
Enter the arguments that the Secure Agent uses when connecting to a REST endpoint. You can specify the following arguments, each separated by a semicolon (
;
):
ConnectionTimeout
: The wait time in milliseconds to get a response from a REST endpoint. The connection ends after the connection timeout is over. Default is the timeout defined in the endpoint API.
If you define both the REST V2 connection timeout and the endpoint API timeout, the connection ends at the shortest defined timeout.
connectiondelaytime
: The delay time in milliseconds to send a request to a REST endpoint. Default is 10000.
retryattempts
: Number of times the connection is attempted when 400 and 500 series error codes are returned in the response. Default is 3. Specify 0 to disable the retry attempts.
qualifiedSchema
: Specifies if the schema selected is qualified or unqualified. Default is false.
Example:
connectiondelaytime:10000;retryattempts:5
In a
streaming ingestion
task, only
ConnectionTimeout
and
retryattempts
are applicable.
The
HS256
algorithm support in
JWT Header
is available for preview. Preview functionality is supported for evaluation purposes but is unwarranted and is not production-ready. Informatica recommends that you use in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support. To use the functionality, your organization must have the appropriate licenses.
0 COMMENTS
We’d like to hear from you!
|
__label__pos
| 0.524123 |
Jump to Content
Class VerifyDomainDkimCommandProtected
Returns a set of DKIM tokens for a domain identity.
When you execute the VerifyDomainDkim operation, the domain that you specify is added to the list of identities that are associated with your account. This is true even if you haven't already associated the domain with your account by using the VerifyDomainIdentity operation. However, you can't send email from the domain until you either successfully verify it or you successfully set up DKIM for it.
You use the tokens that are generated by this operation to create CNAME records. When Amazon SES detects that you've added these records to the DNS configuration for a domain, you can start sending email from that domain. You can start sending email even if you haven't added the TXT record provided by the VerifyDomainIdentity operation to the DNS configuration for your domain. All email that you send from the domain is authenticated using DKIM.
To create the CNAME records for DKIM authentication, use the following values:
• Name: token._domainkey.example.com
• Type: CNAME
• Value: token.dkim.amazonses.com
In the preceding example, replace token with one of the tokens that are generated when you execute this operation. Replace example.com with your domain. Repeat this process for each token that's generated by this operation.
You can execute this operation no more than once per second.
Example
Use a bare-bones client and the command you need to make an API call.
import { SESClient, VerifyDomainDkimCommand } from "@aws-sdk/client-ses"; // ES Modules import
// const { SESClient, VerifyDomainDkimCommand } = require("@aws-sdk/client-ses"); // CommonJS import
const client = new SESClient(config);
const command = new VerifyDomainDkimCommand(input);
const response = await client.send(command);
See
Example
VerifyDomainDkim
// The following example generates DKIM tokens for a domain that has been verified with Amazon SES:
const input = {
"Domain": "example.com"
};
const command = new VerifyDomainDkimCommand(input);
const response = await client.send(command);
/* response ==
{
"DkimTokens": [
"EXAMPLEq76owjnks3lnluwg65scbemvw",
"EXAMPLEi3dnsj67hstzaj673klariwx2",
"EXAMPLEwfbtcukvimehexktmdtaz6naj"
]
}
*/
// example id: verifydomaindkim-1469049503083
Hierarchy
Constructors
Properties
Methods
|
__label__pos
| 0.995778 |
How to use excludeMethodsFromCoverage method of runner class
Best Atoum code snippet using runner.excludeMethodsFromCoverage
runner.php
Source:runner.php Github
copy
Full Screen
...342 {343 $this->runner->getCoverage()->resetExcludedClasses();344 return $this;345 }346 public function excludeMethodsFromCoverage(array $methods)347 {348 $coverage = $this->runner->getCoverage();349 foreach ($methods as $method) {350 $coverage->excludeMethod($method);351 }352 return $this;353 }354 public function resetExcludedMethodsFromCoverage()355 {356 $this->runner->getCoverage()->resetExcludedMethods();357 return $this;358 }359 public function addTest($testPath)360 {361 $this->runner->addTest($testPath);362 return $this;363 }364 public function addTests(array $testPaths)365 {366 foreach ($testPaths as $testPath) {367 $this->addTest($testPath);368 }369 return $this;370 }371 public function addTestsFromDirectory($directory)372 {373 $this->runner->addTestsFromDirectory($directory);374 return $this;375 }376 public function addTestsFromDirectories(array $directories)377 {378 foreach ($directories as $directory) {379 $this->addTestsFromDirectory($directory);380 }381 return $this;382 }383 public function addTestsFromPattern($pattern)384 {385 $this->runner->addTestsFromPattern($pattern);386 return $this;387 }388 public function addTestsFromPatterns(array $patterns)389 {390 foreach ($patterns as $pattern) {391 $this->addTestsFromPattern($pattern);392 }393 return $this;394 }395 public function acceptTestFileExtensions(array $testFileExtensions)396 {397 $this->runner->acceptTestFileExtensions($testFileExtensions);398 return $this;399 }400 public function setBootstrapFile($bootstrapFile)401 {402 $this->runner->setBootstrapFile($bootstrapFile);403 return $this;404 }405 public function setAutoloaderFile($autoloaderFile)406 {407 $this->runner->setAutoloaderFile($autoloaderFile);408 return $this;409 }410 public function enableDebugMode()411 {412 $this->runner->enableDebugMode();413 return $this;414 }415 public function setXdebugConfig($xdebugConfig)416 {417 $this->runner->setXdebugConfig($xdebugConfig);418 return $this;419 }420 public function doNotfailIfVoidMethods()421 {422 $this->runner->doNotfailIfVoidMethods();423 return $this;424 }425 public function failIfVoidMethods()426 {427 $this->runner->failIfVoidMethods();428 return $this;429 }430 public function shouldFailIfVoidMethods()431 {432 return $this->runner->shouldFailIfVoidMethods();433 }434 public function doNotfailIfSkippedMethods()435 {436 $this->runner->doNotfailIfSkippedMethods();437 return $this;438 }439 public function failIfSkippedMethods()440 {441 $this->runner->failIfSkippedMethods();442 return $this;443 }444 public function shouldFailIfSkippedMethods()445 {446 return $this->runner->shouldFailIfSkippedMethods();447 }448 public function init($directory = null)449 {450 $resourceDirectory = static::getResourcesDirectory();451 $currentDirectory = $this->getDirectory();452 if ($directory !== null) {453 $currentDirectory = rtrim($directory, DIRECTORY_SEPARATOR) . DIRECTORY_SEPARATOR;454 }455 $defaultConfigFile = $currentDirectory . static::defaultConfigFile;456 if ($this->adapter->file_exists($defaultConfigFile) === false || $this->prompt($this->locale->_('Default configuration file \'' . static::defaultConfigFile . '\' already exists in ' . $currentDirectory . ', type \'Y\' to overwrite it...')) === 'Y') {457 $this458 ->copy($resourceDirectory . '/configurations/runner/atoum.php.dist', $defaultConfigFile)459 ->writeInfo($this->locale->_('Default configuration file \'' . static::defaultConfigFile . '\' was successfully created in ' . $currentDirectory))460 ;461 }462 $bootstrapFile = $currentDirectory . static::defaultBootstrapFile;463 if ($this->adapter->file_exists($bootstrapFile) == false || $this->prompt($this->locale->_('Default bootstrap file \'' . static::defaultBootstrapFile . '\' already exists in ' . $currentDirectory . ', type \'Y\' to overwrite it...')) === 'Y') {464 $this465 ->copy($resourceDirectory . '/configurations/runner/bootstrap.php.dist', $bootstrapFile)466 ->writeInfo($this->locale->_('Default bootstrap file \'' . static::defaultBootstrapFile . '\' was successfully created in ' . $currentDirectory))467 ;468 }469 return $this->stopRun();470 }471 public function setDefaultBootstrapFiles($startDirectory = null)472 {473 foreach (self::getSubDirectoryPath($startDirectory ?: $this->getDirectory()) as $directory) {474 $defaultBootstrapFile = $directory . static::defaultBootstrapFile;475 if ($this->adapter->is_file($defaultBootstrapFile) === true) {476 $this->setBootstrapFile($defaultBootstrapFile);477 break;478 }479 }480 return $this;481 }482 public function setDefaultAutoloaderFiles($startDirectory = null)483 {484 foreach (self::getSubDirectoryPath($startDirectory ?: $this->getDirectory()) as $directory) {485 $defaultAutoloaderFile = $directory . static::defaultAutoloaderFile;486 if ($this->adapter->is_file($defaultAutoloaderFile) === true) {487 $this->setAutoloaderFile($defaultAutoloaderFile);488 return $this;489 }490 }491 foreach (self::getSubDirectoryPath($startDirectory ?: $this->getDirectory()) as $directory) {492 $composerAutoloaderFile = $directory . static::defaultComposerAutoloaderFile;493 if ($this->adapter->is_file($composerAutoloaderFile) === true) {494 $this->setAutoloaderFile($composerAutoloaderFile);495 break;496 }497 }498 return $this;499 }500 public static function autorunMustBeEnabled()501 {502 return (static::$autorunner === true);503 }504 public static function enableAutorun($name)505 {506 static $autorunIsRegistered = false;507 if (static::$autorunner instanceof static) {508 throw new exceptions\runtime('Unable to autorun \'' . $name . '\' because \'' . static::$autorunner->getName() . '\' is already set as autorunner');509 }510 if ($autorunIsRegistered === false) {511 $autorunner = & static::$autorunner;512 $calledClass = get_called_class();513 register_shutdown_function(514 function () use (& $autorunner, $calledClass) {515 if ($autorunner instanceof $calledClass) {516 set_error_handler(function ($error, $message, $file, $line) use ($autorunner) {517 $errorReporting = error_reporting();518 if ($errorReporting & $error) {519 $autorunner->writeError($message . ' in ' . $file . ' at line ' . $line, $error);520 exit(3);521 }522 });523 try {524 $score = $autorunner->run()->getRunner()->getScore();525 $isSuccess = $score->getFailNumber() <= 0 && $score->getErrorNumber() <= 0 && $score->getExceptionNumber() <= 0 && $score->getUncompletedMethodNumber() <= 0;526 if ($autorunner->shouldFailIfVoidMethods() && $score->getVoidMethodNumber() > 0) {527 $isSuccess = false;528 }529 if ($autorunner->shouldFailIfSkippedMethods() && $score->getSkippedMethodNumber() > 0) {530 $isSuccess = false;531 }532 exit($isSuccess ? 0 : 1);533 } catch (\exception $exception) {534 $autorunner->writeError($exception->getMessage());535 exit(2);536 }537 }538 }539 );540 $autorunIsRegistered = true;541 }542 static::$autorunner = new static($name);543 foreach (static::$configurationCallables as $callable) {544 try {545 static::$autorunner->useConfigurationCallable($callable);546 } catch (\exception $exception) {547 static::$autorunner->writeError($exception->getMessage());548 static::$autorunner = null;549 exit($exception->getCode());550 }551 }552 return static::$autorunner;553 }554 public static function disableAutorun()555 {556 static::$autorunner = false;557 }558 public static function addConfigurationCallable(callable $callable)559 {560 static::$configurationCallables[] = $callable;561 }562 protected function setArgumentHandlers()563 {564 parent::setArgumentHandlers()565 ->addArgumentHandler(566 function ($script, $argument, $values) {567 if (count($values) !== 0) {568 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));569 }570 $script->version();571 },572 ['-v', '--version'],573 null,574 $this->locale->_('Display version')575 )576 ->addArgumentHandler(577 function ($script, $argument, $values) {578 if (count($values) !== 0) {579 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));580 }581 $script->resetVerbosityLevel();582 $verbosityLevel = substr_count($argument, '+');583 while ($verbosityLevel--) {584 $script->increaseVerbosityLevel();585 }586 },587 ['+verbose', '++verbose'],588 null,589 $this->locale->_('Enable verbose mode')590 )591 ->addArgumentHandler(592 function ($script, $argument, $values) {593 if (count($values) === 0) {594 $values = [getcwd()];595 }596 $script->init(current($values));597 },598 ['--init'],599 '<path/to/directory>',600 $this->locale->_('Create configuration and bootstrap files in <path/to/directory> (Optional, default: %s)', $this->getDirectory())601 )602 ->addArgumentHandler(603 function ($script, $argument, $path) {604 if (count($path) != 1) {605 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));606 }607 $script->setPhpPath(reset($path));608 },609 ['-p', '--php'],610 '<path/to/php/binary>',611 $this->locale->_('Path to PHP binary which must be used to run tests')612 )613 ->addArgumentHandler(614 function ($script, $argument, $defaultReportTitle) {615 if (count($defaultReportTitle) != 1) {616 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));617 }618 $script->setDefaultReportTitle(reset($defaultReportTitle));619 },620 ['-drt', '--default-report-title'],621 '<string>',622 $this->locale->_('Define default report title with <string>')623 )624 ->addArgumentHandler(625 function ($script, $argument, $file) {626 if (count($file) != 1) {627 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));628 }629 $script->setScoreFile(reset($file));630 },631 ['-sf', '--score-file'],632 '<file>',633 $this->locale->_('Save score in file <file>')634 )635 ->addArgumentHandler(636 function ($script, $argument, $maxChildrenNumber) {637 if (count($maxChildrenNumber) != 1) {638 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));639 }640 $script->setMaxChildrenNumber(reset($maxChildrenNumber));641 },642 ['-mcn', '--max-children-number'],643 '<integer>',644 $this->locale->_('Maximum number of sub-processes which will be run simultaneously')645 )646 ->addArgumentHandler(647 function ($script, $argument, $empty) {648 if ($empty) {649 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));650 }651 $script->disableCodeCoverage();652 },653 ['-ncc', '--no-code-coverage'],654 null,655 $this->locale->_('Disable code coverage')656 )657 ->addArgumentHandler(658 function ($script, $argument, $directories) {659 if (count($directories) <= 0) {660 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));661 }662 $script663 ->resetExcludedDirectoriesFromCoverage()664 ->excludeDirectoriesFromCoverage($directories)665 ;666 },667 ['-nccid', '--no-code-coverage-in-directories'],668 '<directory>...',669 $this->locale->_('Disable code coverage in directories <directory>')670 )671 ->addArgumentHandler(672 function ($script, $argument, $namespaces) {673 if (count($namespaces) <= 0) {674 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));675 }676 $script677 ->resetExcludedNamespacesFromCoverage()678 ->excludeNamespacesFromCoverage($namespaces)679 ;680 },681 ['-nccfns', '--no-code-coverage-for-namespaces'],682 '<namespace>...',683 $this->locale->_('Disable code coverage for namespaces <namespace>')684 )685 ->addArgumentHandler(686 function ($script, $argument, $classes) {687 if (count($classes) <= 0) {688 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));689 }690 $script691 ->resetExcludedClassesFromCoverage()692 ->excludeClassesFromCoverage($classes)693 ;694 },695 ['-nccfc', '--no-code-coverage-for-classes'],696 '<class>...',697 $this->locale->_('Disable code coverage for classes <class>')698 )699 ->addArgumentHandler(700 function ($script, $argument, $classes) {701 if (count($classes) <= 0) {702 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));703 }704 $script705 ->resetExcludedMethodsFromCoverage()706 ->excludeMethodsFromCoverage($classes)707 ;708 },709 ['-nccfm', '--no-code-coverage-for-methods'],710 '<method>...',711 $this->locale->_('Disable code coverage for methods <method>')712 )713 ->addArgumentHandler(714 function ($script, $argument, $empty) {715 if ($empty) {716 throw new exceptions\logic\invalidArgument(sprintf($script->getLocale()->_('Bad usage of %s, do php %s --help for more information'), $argument, $script->getName()));717 }718 $script->enableBranchesAndPathsCoverage();719 },720 ['-ebpc', '--enable-branch-and-path-coverage'],...
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1require_once 'PHPUnit/TextUI/TestRunner.php';2require_once 'PHPUnit/Util/Filter.php';3{4 public function __construct()5 {6 $this->excludeMethodsFromCoverage(array('testMethod1','testMethod2','testMethod3'));7 }8}9$runner = new MyRunner();10$runner->run($suite);
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1$runner->excludeMethodsFromCoverage($class, $method);2$runner->excludeMethodsFromCoverage($class, $method);3$runner->excludeMethodsFromCoverage($class, $method);4$runner->excludeMethodsFromCoverage($class, $method);5$runner->excludeMethodsFromCoverage($class, $method);6$runner->excludeMethodsFromCoverage($class, $method);7$runner->excludeMethodsFromCoverage($class, $method);8$runner->excludeMethodsFromCoverage($class, $method);9$runner->excludeMethodsFromCoverage($class, $method);10$runner->excludeMethodsFromCoverage($class, $method);11$runner->excludeMethodsFromCoverage($class, $method);12$runner->excludeMethodsFromCoverage($class, $method);13$runner->excludeMethodsFromCoverage($class, $method);14$runner->excludeMethodsFromCoverage($class, $method);15$runner->excludeMethodsFromCoverage($class, $method);
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1$runner = new PHPUnit\TextUI\TestRunner();2$runner->excludeMethodsFromCoverage('MyClass', 'method1');3$runner->excludeMethodsFromCoverage('MyClass', 'method2');4$runner->excludeMethodsFromCoverage('MyClass', 'method3');5$runner->run($suite);6$runner = new PHPUnit\TextUI\TestRunner();7$runner->excludeMethodsFromCoverage('MyClass', 'method1');8$runner->excludeMethodsFromCoverage('MyClass', 'method2');9$runner->excludeMethodsFromCoverage('MyClass', 'method3');10$runner->run($suite);11$runner = new PHPUnit\TextUI\TestRunner();12$runner->excludeMethodsFromCoverage('MyClass', 'method1');13$runner->excludeMethodsFromCoverage('MyClass', 'method2');14$runner->excludeMethodsFromCoverage('MyClass', 'method3');15$runner->run($suite);16$runner = new PHPUnit\TextUI\TestRunner();17$runner->excludeMethodsFromCoverage('MyClass', 'method1');18$runner->excludeMethodsFromCoverage('MyClass', 'method2');19$runner->excludeMethodsFromCoverage('MyClass', 'method3');20$runner->run($suite);
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1$runner = new PHPUnit_TextUI_TestRunner();2$runner->excludeMethodsFromCoverage('testMethod1');3$runner->excludeMethodsFromCoverage('testMethod2');4PHPUnit_TextUI_Command::main();5$runner = new PHPUnit_TextUI_TestRunner();6$runner->excludeMethodsFromCoverage('testMethod3');7$runner->excludeMethodsFromCoverage('testMethod4');8PHPUnit_TextUI_Command::main();9$runner = new PHPUnit_TextUI_TestRunner();10$runner->excludeMethodsFromCoverage('testMethod5');11$runner->excludeMethodsFromCoverage('testMethod6');12PHPUnit_TextUI_Command::main();
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1$runner = new Runner();2$runner->excludeMethodsFromCoverage('TestClass', 'testMethod');3$runner->addTestsFromDirectory('tests');4$runner->run();5$coverage = new CodeCoverage();6$coverage->excludeMethodsFromCodeCoverage('TestClass', 'testMethod');7$coverage->filter()->addDirectoryToWhitelist('src');8$coverage->start('TestClass');9$coverage->stop();10$coverage->append(new PHP_CodeCoverage_Report_HTML());11$coverage = new CodeCoverage();12$coverage->filter()->addDirectoryToWhitelist('src');13$coverage->start('TestClass');14$coverage->stop();15$coverage->excludeMethodsFromCodeCoverage('TestClass', 'testMethod');16$coverage->append(new PHP_CodeCoverage_Report_HTML());
Full Screen
Full Screen
excludeMethodsFromCoverage
Using AI Code Generation
copy
Full Screen
1require_once 'PHPUnit/TextUI/TestRunner.php';2require_once 'PHPUnit/Util/Filter.php';3require_once 'PHPUnit/Util/Log/JSON.php';4require_once 'PHPUnit/Util/Log/JUnit.php';5PHPUnit_Util_Filter::addFileToFilter(__FILE__, 'PHPUNIT');6{7 public function excludeMethodsFromCoverage($methods)8 {9 $this->excludeMethodsFromCoverage = $methods;10 }11}12{13 public function testOne()14 {15 }16 public function testTwo()17 {18 }19}20$methods = array('testOne');21$runner = new MyTestRunner();22$runner->excludeMethodsFromCoverage($methods);23$runner->doRun(new PHPUnit_Framework_TestSuite('MyTest'), array());24require_once 'PHPUnit/TextUI/TestRunner.php';25require_once 'PHPUnit/Util/Filter.php';26require_once 'PHPUnit/Util/Log/JSON.php';27require_once 'PHPUnit/Util/Log/JUnit.php';28PHPUnit_Util_Filter::addFileToFilter(__FILE__, 'PHPUNIT');29{30 public function excludeMethodsFromCoverage($methods)31 {32 $this->excludeMethodsFromCoverage = $methods;33 }34}35{36 public function testOne()37 {38 }39 public function testTwo()40 {41 }42}43$methods = array('testTwo');44$runner = new MyTestRunner();45$runner->excludeMethodsFromCoverage($methods);46$runner->doRun(new PHPUnit_Framework_TestSuite('MyTest'), array());
Full Screen
Full Screen
Automation Testing Tutorials
Learn to execute automation testing from scratch with LambdaTest Learning Hub. Right from setting up the prerequisites to run your first automation test, to following best practices and diving deeper into advanced test scenarios. LambdaTest Learning Hubs compile a list of step-by-step guides to help you be proficient with different test automation frameworks i.e. Selenium, Cypress, TestNG etc.
LambdaTest Learning Hubs:
YouTube
You could also refer to video tutorials over LambdaTest YouTube channel to get step by step demonstration from industry experts.
Run Atoum automation tests on LambdaTest cloud grid
Perform automation testing on 3000+ real desktop and mobile devices online.
Most used method in runner
Trigger excludeMethodsFromCoverage code on LambdaTest Cloud Grid
Execute automation tests with excludeMethodsFromCoverage on a cloud-based Grid of 3000+ real browsers and operating systems for both web and mobile applications.
Test now for Free
Try LambdaTest Now !!
Get 100 minutes of automation test minutes FREE!!
Next-Gen App & Browser Testing Cloud
Was this article helpful?
Helpful
NotHelpful
|
__label__pos
| 0.870662 |
Choose the dot property accessor when the property name is known ahead of time. The keys in this array are the names of the object's properties. Most of the time, in TypeScript, objects have narrowly defined interfaces. Subscribe to my newsletter to get them right into your inbox. In TypeScript, the class keyword provides a more familiar syntax for generating constructor functions and performing simple inheritance. Moreover, you can extract the dynamic property names (determined at runtime): { [propertName]: variable } = object. The name variable is inferred to be a string and age - a number. const { [property]: name } = hero is an object destructuring that dynamically, at runtime, determines what property to extract. JavaScript provides a bunch of good ways to access object properties. The variable pets_2 is an object. It will check if performing any operation on a variable is possible given its type. An “indexed” array is one where the index must be an integer, and you access its elements using its index as a reference. There’s just two golden rules about objects and the dot notation — in order for the “.” notation to work, the key must NOT start with a number. In this article, I’ll discuss arrays and objects. Meaning, the properties and methods available on the objects are known at transpile time. When the property name is dynamic or is not a valid identifier, a better alternative is square brackets property accessor: object[propertyName]. If the accessed property doesn’t exist, all 3 accessor syntaxes evalute to undefined: The property name doesn’t exist in the object hero. By default, the index will always start at “0”. We can also use th… It doesn't make much sense to forbid property access (o.unknown) syntax on a type with a string index signature, but allow element access syntax (o['unknown']).Note that this should only apply to types with an explicit string index signature — element access is currently allowed on any type if you don't pass --noImplicitAny, but this should not be true for property access. Choose depending on your particular situation. Pretty cool, huh? Thus the dot property accessor hero.name, square brackets property accessor hero['name'] and the variable name after destructuring evaluate to undefined. TypeScript 3.0 introduced a new unknown type which is the type-safe counterpart of the any type.. In this case, we don't need to define a new interface to describe ProductDisplay 's props because we only pass the product title. Every now and then, you might want to statically type a global variable in TypeScript. pets_4[“1”];pets_4[“2abc”];pets_4[“3y3”]; With this in mind, now let’s look at an Array of Objects: let pets_5 = [ { prop1 : “cat”, prop2 : “dog”, prop3 : “mouse” } ]; Now, pets_5 is an Array of Objects. let pets_4 = { 1 : “cat”, “2abc” : “dog”, “3y3” : “mouse”}; Golden Rule #2: Any key that starts with a number cannot be chained using the dot notation. In this simple example, Keys is a hard-coded list of property names and the property type is always boolean, so this mapped type is equivalent to writing: Note, the type definition {email: string, firstName: string, lastName: string} is not the value and it is TypeScript’s syntax for defining the type to make sure that user object may have only this type. Because TypeScript files are compiled, there is an intermediate step between writing and running your code. When a file changes under --watchmode, TypeScript is able to use your project’s previously-constructed dependency graph to determine which files could potentially have been affected and need to be re-checked and potentially re-e… See the similarities? The Window variable, is an object, therefore to declare a new property in the Window object with Javascript we would just simply use the previous snippet and everything will work like a charm. Each time through the loop, it saves the next property name in the loop variable. name, address, street, createdBy. There’s no specific type for integers. One way to do that is by running TypeScript in --watch mode. The dot property accessor syntax object.property works nicely when you know the variable ahead of time. But, occasionally, this is not possible. Window in a web page serves a dual purpose. After the destructuring, the variable aliasIdentifier contains the property value. In Typescript, an interface can be used to describe an Object's required properties along with their types. Inside each pair of { } is a key:value pair called “ property ”. Try nesting another array of objects in the object like this: var pets_6 = [ { prop1 : “cat”, prop2 : “dog”, prop3 : “mouse” } ,{ prop1 : “apple”, prop2 : “banana”, prop3 : “cherry” } ,{ prop1 : [{ name : “Henry”, age : 2, breed : “Poodle”}] }]; My book “Developing Business Applications for the Web: With HTML, CSS, JSP, PHP, ASP.NET, and JavaScript” is available on Amazon and MC Press. TypeScript and JavaScript are similar in many ways. Note that you can extract as many properties as you’d like: If you’d like to access the property, but create a variable name different than the property name, you could use aliasing. Object.entries(hero) returns the entries of hero: [['name', 'Batman'], ['city', 'Gotham']]. One can think of an object as an associative array (a.k.a. Here's a Typescript-friendly way to verify an attribute exists in an object, and then access that attribute. There are also three symbolic values: Infinity, -Infinity, and NaN. Instead, we use an object type annotation with a title string property. Indexable types have an index signature that describes the types we can use to index into the object, along with the corresponding return types when indexing. The static members of a class are accessed using the class name and dot notation, without creating an object e.g. The empty type {} refers to an object that has no property on its own. User can use any keyword to declare the datatype at the time of variable declaration. So the above indexed array can be rewritten into an object as follows: let pets_2 = { 0 : “cat”, 1 : “dog”, 2 : “mouse” }; Notice the curly braces — that’s the main distinction between an array and an object. Getting started with TypeScript classes TypeScript includes the keywords public, protected, and private to control access to the members of a class i.e. For example, userDetail is a property which represents user object and we define type using JavaScript object within curly braces as shown below. You can use the dot property accessor in a chain to access deeper properties: object.prop1.prop2. Class members marked public… it implements the Window interface representing the web page main view, but also acts as an alias to the global namespace. Check out my author page at http://amazon.com/author/christianhur, The DOM in JavaScript, jQuery, AngularJS, and React, Building JSF Web Applications with Java EE 7, https://www.linkedin.com/in/christianhur/, Early Returns/Guard Clauses in JavaScript (and React), The (Redux) Saga Continues — Implementing your own redux-saga like middleware, How to create a webworkers driven multithreading App — Part 1, React basic 2 — JSX, the syntax extension to JavaScript that looks like HTML, Closures, Currying, and Cool Abstractions. Now, what most people don’t realize is that, unlike private members in JavaScript, where the members aren’t accessible, in TypeScript, the resulting JavaScript has the variables just as public as the public members. The reason why this works is because the “keys” are numeric, thus it’s identical to the indexed array. After the destructuring, the variable aliasIdentifier contains the property value. The destructuring defines a variable name with the value of property name. Here’s an example of an indexed array: When declaring an indexed array, you don’t have to concern about the index. TypeScript - Static . In fact, the only thing that makes a member private in Typ… Fortunately, TypeScript allows you to specify that members of an object are readonly. In general, when you write a unit test, you only want to access public fields for both the purposes of setting up your tests and for evaluating the success or failure of the tests. let pets_4 = { 1 : “cat”, 2abc : “dog”, “3y3” : “mouse”}; // syntax error — 2abc is invalid. Our example has three properties named 0, 1, & 2 (not meaningful yet but just for illustration purposes). Properties can also be marked as readonly for TypeScript. For example, let’s access the property name of the object hero: hero.name is a dot property accessor that reads the property name of the object hero. hero['name'] and hero[property] both read the property name by using the square brackets syntax. Choose the square brackets property accessor when the property name is dynamic, i.e. A common way to access the property of an object is the dot property accessor syntax: expression should evaluate to an object, and identifier is the name of the property you’d like to access. determined at runtime. There are two number types in JavaScript, which are number and BigInt. identifier is the name of the property to access, aliasIdentifier is the variable name, and expression should evaluate to an object. You can access the properties of an object in JavaScript in 3 ways: Let’s see how each syntax to access the properties work. However, when you use the currentLocation to access object properties, TypeScript also won’t carry any check: console.log (currentLocation.x); The TypeScript object type represents any value that is not a primitive value. Please write your answer in a comment below! Golden Rule #1: Any key that starts with a number must be a string. 3.1 Entries in practice: find the property having 0 value. In an object destructuring pattern, shape: Shape means “grab the property shape and redefine it locally as a variable named Shape.Likewise xPos: number creates a variable named number whose value is based on the parameter’s xPos.. readonly Properties. The string literal union Keys, which contains the names of properties to iterate over. But sometimes properties are not valid identifiers: Because prop-3 and 3 are invalid identifiers, the dot property accessor doesn’t work: Why does the expression weirdObject.prop-3 evaluate to NaN? The destucturing defines a new variable heroName (instead of name as in previous example), and assigns to heroName the value hero.name. This is not a problem, because usually, the property names are valid identifiers: e.g. // implicitly typed object const myObj = { Hello: "world" }; const myObjKey = " You annotate a React functional component's props the same way as any other function in TypeScript. 1.1 Dot property accessor requires identifiers, An Easy Guide to Object Rest/Spread Properties in JavaScript, 3 Ways to Check If an Object Has a Property in JavaScript, A Simple Explanation of JavaScript Closures, Gentle Explanation of "this" in JavaScript, 5 Differences Between Arrow and Regular Functions, A Simple Explanation of React.useEffect(), 5 Best Practices to Write Quality JavaScript Variables, 4 Best Practices to Write Quality JavaScript Modules, 5 Best Practices to Write Quality Arrow Functions, Important JavaScript concepts explained in simple words, Software design and good coding practices, 1 hour, one-to-one, video or chat coaching sessions, JavaScript, TypeScript, React, Next teaching, workshops, or interview preparation (you choose! In TypeScript, when a variable or object property is defined as optional and another primitive type, you may get an error when you try to use it. The Object type, however, describes functionality that available on all objects. I know how cumbersome are closures, scopes, prototypes, inheritance, async functions, this concepts in JavaScript. This time, due to a mistake, one of the books has been assigned with the price 0.. Let’s find the book with the price 0 and log its name to console. Here are 5 methods to solve this issue check out the pic.twitter.com/M9chovpMjv — ʀᴜʙᴇɴ (@rleija_) September 26, 2020 I like to tweet about TypeScript and post helpful code snippets. Most built-in properties aren't enumerable, but the properties you add to an object … But, some objects cannot conform to such constraints - some objects are dynamic and driven by things like Router state. properties or methods. While you can use the super keyword to access a public method from a derived class, you can’t access a property in the base class using super (though you can override the property). The type variable K, which gets bound to each property in turn. And understand when it’s reasonable, depending on the situation, to use one way or another. To access the array’s elements, you use the index as follows: pets_1[0]; //catpets_1[1]; //dogpets_1[2]; //mouse. identifier is the name of the property to access, aliasIdentifier is the variable name, and expression should evaluate to an object. Object is similar to the indexed array, it’s often referred to as an Associative Array. ). The square brackets property accessor has the following syntax: The first expression should evaluate to an object and the second expression should evaluate to a string denoting the property name. TypeScript’s Compiler is your Guardian Angel It turns out when you try to access an object’s property via a string, TypeScript’s compiler is still looking out for you. Unable to access extended properties in constructor; It's typical when speaking of an object's properties to make a distinction between properties and methods. Why program in TypeScript 2. pets_4.1; // Errorpets_4.2abc; // Errorpets_4.3y3; //Error. Adding generic signatures reduces type-safety though. The second expression should evaluate to the object you’d like to destructure. TypeScript only allows two types for indexes (the keys): string and number. Now, let’s see how pets_1 looks like as an object. My daily routine consists of (but not limited to) drinking coffee, coding, writing, coaching, overcoming boredom . However, the property/method distinction is little more than a convention. If the compiler can’t determine what’s inside of your string variable, it will refuse to compile your program. Describing Access to Any Property in a Given Object. In this example, the currentLocation variable is assigned to an object returned by the JSON.parse () function. Suppose we created an interface 'I' with properties x and y. An identifier in JavaScript contains Unicode letters, $, _, and digits 0..9, but cannot start with a digit. Consider the following example of a class with static property. TypeScript has gained popularity and surfaced rather quickly thanks to popular frameworks like Angular 2 and Vue.js. I'm excited to start my coaching program to help you advance your JavaScript knowledge. Most notably, it allows for non-method properties, similar to this Stage 3 proposal. If you look at how TypeScript defines property types within functions you may think that the TypeScript equivalent of a functional object destructure might just … TypeScript lets you augment an interface by simply declaring an interface with an identical name and new members. In TypeScript, Object is the type of all instances of class Object. You can have direct access to me through: Software developer, tech writer and coach. All numbers are floating-point numbers. The variable pets_2 is an object. The main difference between unknown and any is that unknown is much less permissive than any: we have to do some form of checking before performing most operations on values of type unknown, … Structural vs nominal typing 3. It won’t run until the property is accessed after the object has been constructed, so the timing issue is avoided. We have to use a different syntax. Any arbitrary object's instance 'o' can be declared with type 'I' if 'o' has same properties x and y; this feature is known as "Duck Typing". Learn TypeScript: Types Cheatsheet | Codecademy ... Cheatsheet A for-in statement loops through all the defined properties of an object that are enumerable. To access these properties of the pets_2 object, you can reference exactly the same way as the indexed array: pets_2[0]; //catpets_2[1]; //dogpets_2[2]; //mouse. All global variables are accessible on the window object at run-time; this applies to builtin JS declarations like Array, Math, JSON, Intl as well as global DOM declarations like … Implementation class use setters and getters to access the member variable. To access the properties with these special names, use the square brackets property accessor (which is described in the next section): The square brackets syntax accesses without problems the properties that have special names: weirdObject['prop-3'] and weirdObject[3]. It has roughly the same syntax as the ES2015 class syntax, but with a few key distinctions. Indexed Access Types. The basic object destructuring syntax is pretty simple: identifier is the name of the property to access and expression should evaluate to an object. But objects are pretty cool because you don’t have to use numbers for these indexes (or keys). Choose the object destructuring when you’d like to create a variable having the property value. The unknown Type in TypeScript May 15, 2019. Suppose our function resturns a value which depends on condition & we are assigning this value to a variable; then we can define that the variable has any type. The number type is a double-precision 64-bit number that can have values between -2 to the 53rd power minus 1 and 2 to the 53rd power minus 1. For example, in some of my web applications, I need to pass a few properties from my markup rendered on the server to my JavaScript code running in the browser. While it won’t change any behavior at runtime, a property … User can assign any datatype value to the variable, which is intitialized later. Setting Subclass Properties in TypeScript. The static members can be defined by using the keyword static. Index types tell the compiler that the given property or variable is a key representing a publicly accessible property name of a given type. In fact, declaration of each instance method or property that will be used by the class is mandatory, as this will be used to build up a type for the value of thiswithin the class… Now, let’s add a second element to the array: var pets_5 = [ { prop1 : “cat”, prop2 : “dog”, prop3 : “mouse” } ,{ prop1 : “apple”, prop2 : “banana”, prop3 : “cherry” } ]; To access these data using the dot notation: pets[0].prop1; //catpets[0].prop2; //dogpets[0].prop3; //mouse, pets[1].prop1; //applepets[1].prop2; //bananapets[1].prop3; //cherry. You can use strings or words like this: let pets_3 = { prop1 : “cat”, prop2 : “dog”, “prop3” : “mouse” }; Now I can access them the same way as with the indexed array: pets_3[“prop1”]; //catpets_3[“prop2”]; //dogpets_3[“prop3”]; //mouse. See it now? The resulting type of the property. Typescript does not support setter/getters methods in interfaces directly. You can also reference its index as a string (double quotes around the index) like this: pets_2[“0”]; //catpets_2[“1”]; //dogpets_2[“2”]; //mouse. To get these values, you access them like this: pets_5[0][“prop1”]; //catpets_5[0][“prop2”]; //dogpets_5[0][“prop3”]; //mouse. There is no doubt that TypeScript has enjoyed a huge adoption in the JavaScript community, and one of the great benefits it provides is the type checking of all the variables inside our code. With the keyof keyword we can cast a given value to an Index type or set a variable to the property name an object. The dot property accessor works correctly when the property name is a valid identifier. The chapter on Interfaces has the details. Unless you take specific measures to avoid it, the internal state of a const variable is still modifiable. This blog is a part of my TypeScript series, and the previous ones are: 1. const { name } = hero is an object destructuring. Inside each pair of { } is a key:value pair called “property”. The second bracket is the property of the object which you can access like an array index. I also added a private _string3 property for improved performance, but of course, that is optional. ES6 includes static members and so does TypeScript. What makes the object destructuring even more useful is that you could extract to variables properties with the dynamic value: The first expression should evaluate to a property name, and the identifier should indicate the variable name created after the destructuring. That is because TypeScript is a newer computer language — a superset of JavaScript — developed and maintained by Microsoft in just the last couple of years. You can access it using the dot notation as follows: pets_5[0].prop1; //catpets_5[0].prop2; //dogpets_5[0].prop3; //mouse. There are no good or bad ways to access properties. After the destructuring, the variable identifier contains the property value. The array has only 1 element and the element is the object (denoted by the curly braces “{ }”) containing three properties. I am using index “0” because there’s only one element in the array. Again, let’s use the books object that holds the prices of some books. map, dictionary, hash, lookup table). Typescript is obscurely particular with accessing attribute keys on objects that lack a generic signature. const { name: heroName } = hero is an object destructuring. Declaring Global Variables in TypeScript April 14, 2020. Because they’re properties of an object, you can also use the “.” (dot) notation to access them like this: pets_3.prop1; //catpets_3.prop2; //dogpets_3.prop3; //mouse. When you get used to object destructuring, you will find that its syntax is a great way to extract the properties into variables. Interface ObjectConstructor defines the properties of class Object (i.e., the object pointed to by that global variable). The largest and smallest available values for a number are Infinity and -Infinity, respectively. There are two major types of arrays: indexed array and associative array. The object destructuring extracts the property directly into a variable: { property } = object. The interface has to provide only member variables and did not provide an implementation for these properties. One of our goals is to minimize build time given any change to your program. It is defined by two interfaces: Interface Object defines the properties of Object.prototype. In TypeScript you can reuse the type of a property of another type. ..
typescript access object property with variable 2021
|
__label__pos
| 0.949368 |
Linux and UNIX Man Pages
Linux & Unix Commands - Search Man Pages
BSD 2.11 - man page for syserrlst (bsd section 5)
SYSERRLST(5) File Formats Manual SYSERRLST(5)
NAME
syserrlst - error message file format
DESCRIPTION
mkerrlst(1), creates error message files in the format described below. An ``error message file'' consists of a header, an array of structures specifying the offset and length of each message, and the array of message strings separated by newlines. The message strings are separated by newlines but the newline characters are not included in the size of the message. These newline char- acters serve only to make the file editable or printable (after stripping off the header). The file format is: /* * Definitions used by the 'mkerrlst' program which creates error message * files. * * The format of the file created is: * * struct ERRLSTHDR ehdr; * struct ERRLST emsg[num_of_messages]; * struct { * char msg[] = "error message string"; * char lf = '0; * } [num_of_messages]; * * Note: the newlines are NOT included in the message lengths, the newlines * are present to make it easy to 'cat' or 'vi' the file. */ struct ERRLSTHDR { short magic; short maxmsgnum; short maxmsglen; short pad[5]; /* Reserved */ }; struct ERRLST { off_t offmsg; short lenmsg; }; #define ERRMAGIC 012345
SEE ALSO
mkerrlst(1), syserrlst(3)
BUGS
Format of the file isn't necessarily portable between machines.
HISTORY
This file format is new with 2.11BSD. 3rd Berkeley Distribution March 7, 1996 SYSERRLST(5)
|
__label__pos
| 0.974564 |
本文永久链接 – https://tonybai.com/2022/10/27/when-encountering-slice-during-function-design
切片(slice)是Go语言中的一种重要的也是最常用的同构数据类型。在Go语言编码过程中,我们多数情况下会使用slice替代数组,一来是因为其动态可扩展,二来在多数场合传递slice的开销要比传递数组要小(这里有一些例外)。
切片算是“半个”零值可用的类型,为什么这么说呢?
当我们声明一个切片类型实例但在未显式初始化的情况下,我们不能直接对其做下标操作,比如:
var sl []int
sl[0] = 5 // 错误:引发panic
但是我们可以通过Go内置的append函数对其进行追加操作,即便sl目前的值为nil:
var sl []int
sl = append(sl, 5) // ok
到这里,我要提出本文要讨论的topic了:为什么append函数要通过返回值返回切片结果呢?再泛化一点:当你在函数设计环节遇到要传入传出切片类型时,你会如何设计函数的参数与返回值呢?下面我们就来探讨一下。
我们在$GOROOT/src/builtin/builtin.go中找到了append预置函数的原型:
func append(slice []Type, elems ...Type) []Type
显然参照“append”函数的设计,通过参数传入slice,通过返回值传出更新过的切片肯定是一个正确的方案,比如下面的第一版MyAppend函数:
func myAppend1(sl []int, elems ...int) []int {
return append(sl, elems...)
}
func main() {
var in = []int{1, 2, 3}
fmt.Println("in slice:", in) // 输出:in slice: [1 2 3]
fmt.Println("out slice:", myAppend1(in, 4, 5, 6)) // 输出:out slice: [1 2 3 4 5 6]
}
到这里,有些初学者会提出:切片不是动态数组吗?是不是可以既作为输入参数,又兼作输出参数呢?我理解提出这个问题的小伙伴们希望设计出像下面这样的函数原型:
func myAppend2(sl []int, elems ...int)
这里sl作为输入参数传入myAppend2,然后在myAppend2对其进行update后,myAppend2函数的调用者将得到更新后的sl。但实际情况是这样的吗?我们来看一下:
func myAppend2(sl []int, elems ...int) {
sl = append(sl, elems...)
}
func main() {
var inOut = []int{1, 2, 3}
fmt.Println("in slice:", inOut)
myAppend2(inOut, 4, 5, 6)
fmt.Println("out slice:", inOut)
}
运行这段程序,我们得到如下结果:
in slice: [1 2 3]
out slice: [1 2 3]
我们看到myAppend2并未如我们预期的那样工作,传入的切片并未在myAppend2中得到预期的更新,这是为什么呢?首先这是与切片在运行时的表示有关的。在我的专栏《Go语言精进之路》一书中都有对切片在运行时表示的细致讲解,这里简单说说:
切片在运行时由三个字段构成,reflect包中有切片在类型系统中表示的对应的定义:
// $GOROOT/src/reflect/value.go
type SliceHeader struct {
Data uintptr // 指向底层数组的指针
Len int // 切片长度
Cap int // 切片容量
}
此外,Go函数采用“值拷贝”的参数传递方式,这意味着myAppend2传递的切片sl实质上仅仅传递的是切片“描述符” – SliceHeader。myAppend2函数体内改变的是形参sl的各个字段的值,但myAppend2的实参并未受到任何影响,即执行完myAppend2后,inOut的len和cap依旧保持不变,而其底层数组是否改变了呢?在这个例子中肯定是“改变”了,但改变的是inOut长度(len)范围之外,cap之内的元素,通过对inOut的常规访问是无法获取到这些元素的。
那么我们该如何让slice作为in/out参数呢?答案是使用指向切片的指针,我们来看下面例子:
func myAppend3(sl *[]int, elems ...int) {
(*sl) = append(*sl, elems...)
}
func main() {
var inOut = []int{1, 2, 3}
fmt.Println("in slice:", inOut) // in slice: [1 2 3]
myAppend3(&inOut, 4, 5, 6)
fmt.Println("out slice:", inOut) // out slice: [1 2 3 4 5 6]
}
我们看到myAppend3函数使用*[]int类型的形参的确解决了切片参数作为输入输出参数的问题:myAppend3对切片的更改操作都反映到inOut变量所代表的这个slice上了,即便在myAppend3内切片进行了动态扩容,inOut也能“捕捉”到这点。
不过我在Go标准库中查找了一下,使用指向切片的指针作为参数的函数“少得可怜”:
$grep "*\[\]" */*go|grep func
grep: cmd/cgo: Is a directory
grep: cmd/go: Is a directory
grep: runtime/cgo: Is a directory
log/log.go:func itoa(buf *[]byte, i int, wid int) {
log/log.go:func (l *Logger) formatHeader(buf *[]byte, t time.Time, file string, line int) {
regexp/onepass.go:func mergeRuneSets(leftRunes, rightRunes *[]rune, leftPC, rightPC uint32) ([]rune, []uint32) {
regexp/onepass.go: extend := func(newLow *int, newArray *[]rune, pc uint32) bool {
runtime/mstats.go:func readGCStats(pauses *[]uint64) {
runtime/mstats.go:func readGCStats_m(pauses *[]uint64) {
runtime/proc.go:func saveAncestors(callergp *g) *[]ancestorInfo {
综上,当我们在函数设计时遇到切片类型数据时,如果要对切片做更新操作,优先还是要参考append函数的设计方案,即通过切片作为输入参数和返回值的方式实现该操作逻辑,必要时也可以使用指向切片的指针的方式传递切片,就像myAppend3那样。
“Gopher部落”知识星球旨在打造一个精品Go学习和进阶社群!高品质首发Go技术文章,“三天”首发阅读权,每年两期Go语言发展现状分析,每天提前1小时阅读到新鲜的Gopher日报,网课、技术专栏、图书内容前瞻,六小时内必答保证等满足你关于Go语言生态的所有需求!2022年,Gopher部落全面改版,将持续分享Go语言与Go应用领域的知识、技巧与实践,并增加诸多互动形式。欢迎大家加入!
img{512x368}
img{512x368}
img{512x368}
img{512x368}
我爱发短信:企业级短信平台定制开发专家 https://51smspush.com/。smspush : 可部署在企业内部的定制化短信平台,三网覆盖,不惧大并发接入,可定制扩展; 短信内容你来定,不再受约束, 接口丰富,支持长短信,签名可选。2020年4月8日,中国三大电信运营商联合发布《5G消息白皮书》,51短信平台也会全新升级到“51商用消息平台”,全面支持5G RCS消息。
著名云主机服务厂商DigitalOcean发布最新的主机计划,入门级Droplet配置升级为:1 core CPU、1G内存、25G高速SSD,价格5$/月。有使用DigitalOcean需求的朋友,可以打开这个链接地址:https://m.do.co/c/bff6eed92687 开启你的DO主机之路。
Gopher Daily(Gopher每日新闻)归档仓库 – https://github.com/bigwhite/gopherdaily
我的联系方式:
• 微博:https://weibo.com/bigwhite20xx
• 博客:tonybai.com
• github: https://github.com/bigwhite
商务合作方式:撰稿、出书、培训、在线课程、合伙创业、咨询、广告合作。
© 2022, bigwhite. 版权所有.
Related posts:
1. 一文告诉你神奇的Go内建函数源码在哪里
2. Go 1.17新特性详解:支持将切片转换为数组指针
3. Go GC如何检测内存对象中是否包含指针
4. Go 1.17中值得关注的几个变化
5. Go语言的“黑暗角落”:盘点学习Go语言时遇到的那些陷阱[译](第一部分)
|
__label__pos
| 0.889172 |
NatBI NatBI - 8 months ago 17
SQL Question
How do I find the max in each group when the calculation is done with count(*)?
I've been trying to find the max of each member of a group using advices from other posts, but seems this is a different problem as the counter is based on a count(*) not on an specific column.
My table has several columns; the ones I need: date and branch. Each record of the table represents a transaction in that branch. I would need to know for each date which is the branch with more transactions and how many they were done.
I started with:
Select date, branch, count(*) as total
from table
group by date, branch
I attempted a max(total) but this would just give me a row instead of one per group.
I tried joining with itself, something like this, but it doesn´t work because maxim is not recognized in the having clause:
Select date, branch, count(*) as maxim
(Select date, branch, count(*) as total
from table
group by date, branch) a
having maxim=max(total)
group by date, branch
Any idea?, thanks!
Answer
Try this:
select t1.date, t2.branch, t1.max_total
from (
select date, max(total) as max_total
from (
select date, branch, count(*) as total
from mytable
group by date, branch) as x
group by date
) as t1
join (
select date, branch, count(*) as total
from mytable
group by date, branch
) as t2 on t1.date = t2.date and t1.max_total = t2.total
The idea is to use the query you started with twice as a derived table:
• The first time you use it in order to get the maximum number per date
• The second time you use it in order to extract the branch value having a total count equal to the maximum number. There may be more than one branches in case of ties.
Demo here
If DB2 supports window functions you may use the following, which, if applicable, is more efficient:
select date, branch, total
from (
select date, branch, count(*) as total,
rank() over (partition by date order by count(*) desc) as rn
from mytable
group by date, branch) as t
where t.rn = 1
|
__label__pos
| 0.887542 |
Firmware numbering
I just realized that the firmware numbers of the hub and the tempest are getting pretty close together. Which might cause some confusing. Perhaps it is a good idea to add a letter to the firmware, like 169H or H169 for the hub and 153T / T153 for the tempest.
2 Likes
It is better than when the hub was on v143 and the Tempest was on v134.
1 Like
Disagree with @sunny’s suggestion. It’s just a value. What their versioning scheme is doesn’t matter as long as the devices report a version and WF knows what is in that version.
2 Likes
Haha I remember when David was poking fun at himself for getting the firmware numbers mixed up between the two devices.
3 Likes
it still would be just a value, but one preceded of followed by a letter. The devices still would report their firmware value like H169 and WF would still know what is in that version. I don’t see any problem with that. But more then once people mention the current firmware but for the wrong device. Adding the extra identification might make it more clear.
4 Likes
|
__label__pos
| 0.527241 |
Getting Started with Laminas
Database and models
The database
Now that we have the Album module set up with controller action methods and view scripts, it is time to look at the model section of our application. Remember that the model is the part that deals with the application's core purpose (the so-called “business rules”) and, in our case, deals with the database. We will make use of laminas-db's Laminas\Db\TableGateway\TableGateway to find, insert, update, and delete rows from a database table.
We are going to use Sqlite, via PHP's PDO driver. Create a text file data/schema.sql with the following contents:
CREATE TABLE album (id INTEGER PRIMARY KEY AUTOINCREMENT, artist varchar(100) NOT NULL, title varchar(100) NOT NULL);
INSERT INTO album (artist, title) VALUES ('The Military Wives', 'In My Dreams');
INSERT INTO album (artist, title) VALUES ('Adele', '21');
INSERT INTO album (artist, title) VALUES ('Bruce Springsteen', 'Wrecking Ball (Deluxe)');
INSERT INTO album (artist, title) VALUES ('Lana Del Rey', 'Born To Die');
INSERT INTO album (artist, title) VALUES ('Gotye', 'Making Mirrors');
(The test data chosen happens to be the Bestsellers on Amazon UK at the time of writing!)
Now create the database using the following:
$ sqlite data/laminastutorial.db < data/schema.sql
Some systems, including Ubuntu, use the command sqlite3; check to see which one to use on your system.
Using PHP to create the database
If you do not have Sqlite installed on your system, you can use PHP to load the database using the same SQL schema file created earlier. Create the file data/load_db.php with the following contents:
<?php
$db = new PDO('sqlite:' . realpath(__DIR__) . '/laminastutorial.db');
$fh = fopen(__DIR__ . '/schema.sql', 'r');
while ($line = fread($fh, 4096)) {
$db->exec($line);
}
fclose($fh);
Once created, execute it:
$ php data/load_db.php
We now have some data in a database and can write a very simple model for it.
The model files
Laminas does not provide a laminas-model component because the model is your business logic, and it's up to you to decide how you want it to work. There are many components that you can use for this depending on your needs. One approach is to have model classes represent each entity in your application and then use mapper objects that load and save entities to the database. Another is to use an Object-Relational Mapping (ORM) technology, such as Doctrine or Propel.
For this tutorial, we are going to create a model by creating an AlbumTable class that consumes a Laminas\Db\TableGateway\TableGateway, and in which each album will be represented as an Album object (known as an entity). This is an implementation of the Table Data Gateway design pattern to allow for interfacing with data in a database table. Be aware, though, that the Table Data Gateway pattern can become limiting in larger systems. There is also a temptation to put database access code into controller action methods as these are exposed by Laminas\Db\TableGateway\AbstractTableGateway. Don't do this!
Let's start by creating a file called Album.php under module/Album/src/Model:
namespace Album\Model;
class Album
{
public $id;
public $artist;
public $title;
public function exchangeArray(array $data)
{
$this->id = !empty($data['id']) ? $data['id'] : null;
$this->artist = !empty($data['artist']) ? $data['artist'] : null;
$this->title = !empty($data['title']) ? $data['title'] : null;
}
}
Our Album entity object is a PHP class. In order to work with laminas-db's TableGateway class, we need to implement the exchangeArray() method; this method copies the data from the provided array to our entity's properties. We will add an input filter later to ensure the values injected are valid.
Next, we create our AlbumTable.php file in module/Album/src/Model directory like this:
namespace Album\Model;
use RuntimeException;
use Laminas\Db\TableGateway\TableGatewayInterface;
class AlbumTable
{
private $tableGateway;
public function __construct(TableGatewayInterface $tableGateway)
{
$this->tableGateway = $tableGateway;
}
public function fetchAll()
{
return $this->tableGateway->select();
}
public function getAlbum($id)
{
$id = (int) $id;
$rowset = $this->tableGateway->select(['id' => $id]);
$row = $rowset->current();
if (! $row) {
throw new RuntimeException(sprintf(
'Could not find row with identifier %d',
$id
));
}
return $row;
}
public function saveAlbum(Album $album)
{
$data = [
'artist' => $album->artist,
'title' => $album->title,
];
$id = (int) $album->id;
if ($id === 0) {
$this->tableGateway->insert($data);
return;
}
try {
$this->getAlbum($id);
} catch (RuntimeException $e) {
throw new RuntimeException(sprintf(
'Cannot update album with identifier %d; does not exist',
$id
));
}
$this->tableGateway->update($data, ['id' => $id]);
}
public function deleteAlbum($id)
{
$this->tableGateway->delete(['id' => (int) $id]);
}
}
There's a lot going on here. Firstly, we set the protected property $tableGateway to the TableGateway instance passed in the constructor, hinting against the TableGatewayInterface (which allows us to provide alternate implementations easily, including mock instances during testing). We will use this to perform operations on the database table for our albums.
We then create some helper methods that our application will use to interface with the table gateway. fetchAll() retrieves all albums rows from the database as a ResultSet, getAlbum() retrieves a single row as an Album object, saveAlbum() either creates a new row in the database or updates a row that already exists, and deleteAlbum() removes the row completely. The code for each of these methods is, hopefully, self-explanatory.
Using ServiceManager to configure the table gateway and inject into the AlbumTable
In order to always use the same instance of our AlbumTable, we will use the ServiceManager to define how to create one. This is most easily done in the Module class where we create a method called getServiceConfig() which is automatically called by the ModuleManager and applied to the ServiceManager. We'll then be able to retrieve when we need it.
To configure the ServiceManager, we can either supply the name of the class to be instantiated or a factory (closure, callback, or class name of a factory class) that instantiates the object when the ServiceManager needs it. We start by implementing getServiceConfig() to provide a factory that creates an AlbumTable. Add this method to the bottom of the module/Album/src/Module.php file:
namespace Album;
// Add these import statements:
use Laminas\Db\Adapter\AdapterInterface;
use Laminas\Db\ResultSet\ResultSet;
use Laminas\Db\TableGateway\TableGateway;
use Laminas\ModuleManager\Feature\ConfigProviderInterface;
class Module implements ConfigProviderInterface
{
// getConfig() method is here
// Add this method:
public function getServiceConfig()
{
return [
'factories' => [
Model\AlbumTable::class => function($container) {
$tableGateway = $container->get(Model\AlbumTableGateway::class);
return new Model\AlbumTable($tableGateway);
},
Model\AlbumTableGateway::class => function ($container) {
$dbAdapter = $container->get(AdapterInterface::class);
$resultSetPrototype = new ResultSet();
$resultSetPrototype->setArrayObjectPrototype(new Model\Album());
return new TableGateway('album', $dbAdapter, null, $resultSetPrototype);
},
],
];
}
}
This method returns an array of factories that are all merged together by the ModuleManager before passing them to the ServiceManager. The factory for Album\Model\AlbumTable uses the ServiceManager to create an Album\Model\AlbumTableGateway service representing a TableGateway to pass to its constructor. We also tell the ServiceManager that the AlbumTableGateway service is created by fetching a Laminas\Db\Adapter\AdapterInterface implementation (also from the ServiceManager) and using it to create a TableGateway object. The TableGateway is told to use an Album object whenever it creates a new result row. The TableGateway classes use the prototype pattern for creation of result sets and entities. This means that instead of instantiating when required, the system clones a previously instantiated object. See PHP Constructor Best Practices and the Prototype Pattern for more details.
Factories
The above demonstrates building factories as closures within your module class. Another option is to build the factory as a class, and then map the class in your module configuration. This approach has a number of benefits:
• The code is not parsed or executed unless the factory is invoked.
• You can easily unit test the factory to ensure it does what it should.
• You can extend the factory if desired.
• You can re-use the factory across multiple instances that have related construction.
Creating factories is covered in the laminas-servicemanager documentation.
The Laminas\Db\Adapter\AdapterInterface service is registered by the laminas-db component. You may have noticed earlier that config/modules.config.php contains the following entries:
return [
'Laminas\Form',
'Laminas\Db',
'Laminas\Router',
'Laminas\Validator',
/* ... */
],
All Laminas components that provide laminas-servicemanager configuration are also exposed as modules themselves; the prompts as to where to register the components during our initial installation occurred to ensure that the above entries are created for you.
The end result is that we can already rely on having a factory for the Laminas\Db\Adapter\AdapterInterface service; now we need to provide configuration so it can create an adapter for us.
Laminas's ModuleManager merges all the configuration from each module's module.config.php file, and then merges in the files in config/autoload/ (first *.global.php files, and then *.local.php files). We'll add our database configuration information to global.php, which you should commit to your version control system. You can use local.php (outside of the VCS) to store the credentials for your database if you want to. Modify config/autoload/global.php (in the project root, not inside the Album module) with following code:
return [
'db' => [
'driver' => 'Pdo',
'dsn' => sprintf('sqlite:%s/data/laminastutorial.db', realpath(getcwd())),
],
];
If you were configuring a database that required credentials, you would put the general configuration in your config/autoload/global.php, and then the configuration for the current environment, including the DSN and credentials, in the config/autoload/local.php file. These get merged when the application runs, ensuring you have a full definition, but allows you to keep files with credentials outside of version control.
Back to the controller
Now that we have a model, we need to inject it into our controller so we can use it.
Firstly, we'll add a constructor to our controller. Open the file module/Album/src/Controller/AlbumController.php and add the following property and constructor:
namespace Album\Controller;
// Add the following import:
use Album\Model\AlbumTable;
use Laminas\Mvc\Controller\AbstractActionController;
use Laminas\View\Model\ViewModel;
class AlbumController extends AbstractActionController
{
// Add this property:
private $table;
// Add this constructor:
public function __construct(AlbumTable $table)
{
$this->table = $table;
}
/* ... */
}
Our controller now depends on AlbumTable, so we will need to create a factory for the controller. Similar to how we created factories for the model, we'll create it in our Module class, only this time, under a new method, Album\Module::getControllerConfig():
namespace Album;
use Laminas\Db\Adapter\AdapterInterface;
use Laminas\Db\ResultSet\ResultSet;
use Laminas\Db\TableGateway\TableGateway;
use Laminas\ModuleManager\Feature\ConfigProviderInterface;
class Module implements ConfigProviderInterface
{
// getConfig() and getServiceConfig() methods are here
// Add this method:
public function getControllerConfig()
{
return [
'factories' => [
Controller\AlbumController::class => function($container) {
return new Controller\AlbumController(
$container->get(Model\AlbumTable::class)
);
},
],
];
}
}
Because we're now defining our own factory, we can modify our module.config.php to remove the definition. Open module/Album/config/module.config.php and remove the following lines:
<?php
namespace Album;
// Remove this:
use Laminas\ServiceManager\Factory\InvokableFactory;
return [
// And remove the entire "controllers" section here:
'controllers' => [
'factories' => [
Controller\AlbumController::class => InvokableFactory::class,
],
],
/* ... */
];
We can now access the property $table from within our controller whenever we need to interact with our model.
Listing albums
In order to list the albums, we need to retrieve them from the model and pass them to the view. To do this, we fill in indexAction() within AlbumController. Update the AlbumController::indexAction() as follows:
// module/Album/src/Controller/AlbumController.php:
// ...
public function indexAction()
{
return new ViewModel([
'albums' => $this->table->fetchAll(),
]);
}
// ...
With Laminas, in order to set variables in the view, we return a ViewModel instance where the first parameter of the constructor is an array containing data we wish to represent. These are then automatically passed to the view script. The ViewModel object also allows us to change the view script that is used, but the default is to use {module name}/{controller name}/{action name}. We can now fill in the index.phtml view script:
<?php
// module/Album/view/album/album/index.phtml:
$title = 'My albums';
$this->headTitle($title);
?>
<h1><?= $this->escapeHtml($title) ?></h1>
<p>
<a href="<?= $this->url('album', ['action' => 'add']) ?>">Add new album</a>
</p>
<table class="table">
<tr>
<th>Title</th>
<th>Artist</th>
<th> </th>
</tr>
<?php foreach ($albums as $album) : ?>
<tr>
<td><?= $this->escapeHtml($album->title) ?></td>
<td><?= $this->escapeHtml($album->artist) ?></td>
<td>
<a href="<?= $this->url('album', ['action' => 'edit', 'id' => $album->id]) ?>">Edit</a>
<a href="<?= $this->url('album', ['action' => 'delete', 'id' => $album->id]) ?>">Delete</a>
</td>
</tr>
<?php endforeach; ?>
</table>
The first thing we do is to set the title for the page (used in the layout) and also set the title for the <head> section using the headTitle() view helper which will display in the browser's title bar. We then create a link to add a new album.
The url() view helper is provided by laminas-mvc and laminas-view, and is used to create the links we need. The first parameter to url() is the route name we wish to use for construction of the URL, and the second parameter is an array of variables to substitute into route placeholders. In this case we use our album route which is set up to accept two placeholder variables: action and id.
We iterate over the $albums that we assigned from the controller action. laminas-view automatically ensures that these variables are extracted into the scope of the view script; you may also access them using $this->{variable name} in order to differentiate between variables provided to the view script and those created inside it.
We then create a table to display each album's title and artist, and provide links to allow for editing and deleting the record. A standard foreach: loop is used to iterate over the list of albums, and we use the alternate form using a colon and endforeach; as it is easier to scan than to try and match up braces. Again, the url() view helper is used to create the edit and delete links.
Escaping
We always use the escapeHtml() view helper to help protect ourselves from Cross Site Scripting (XSS) vulnerabilities.
If you open http://localhost:8080/album (or http://laminas-mvc-tutorial.localhost/album if you are using self-hosted Apache) you should see this:
Initial album listing
Found a mistake or want to contribute to the documentation? Edit this page on GitHub!
|
__label__pos
| 0.900688 |
Convert px to in (Pixel to Inch)
Pixel into Inch
numbers in scientific notation
Direct link to this calculator:
https://www.convert-measurement-units.com/convert+Pixel+to+in.php
How many Inch make 1 Pixel?
1 Pixel [px] = 0.010 416 666 666 667 Inch [in] - Measurement calculator that can be used to convert Pixel to Inch, among others.
Convert Pixel to Inch (px to in):
1. Choose the right category from the selection list, in this case 'Font size (CSS)'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), brackets and π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Pixel [px]'.
4. Finally choose the unit you want the value to be converted to, in this case 'Inch [in]'.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '765 Pixel'. In so doing, either the full name of the unit or its abbreviation can be usedas an example, either 'Pixel' or 'px'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Font size (CSS)'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Alternatively, the value to be converted can be entered as follows: '46 px to in' or '46 px into in' or '10 Pixel -> Inch' or '29 px = in' or '62 Pixel to in' or '6 px to Inch' or '90 Pixel into Inch'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second.
Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(78 * 72) px'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '765 Pixel + 2295 Inch' or '69mm x 48cm x 2dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 7.117 038 206 839 9×1030. For this form of presentation, the number will be segmented into an exponent, here 30, and the actual number, here 7.117 038 206 839 9. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 7.117 038 206 839 9E+30. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 7 117 038 206 839 900 000 000 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.
|
__label__pos
| 0.96631 |
Repair a pandoc-generated LaTeX table with R
Pandoc is a great piece of software but it is not always kind to HTML tables when converting to LaTeX. Especially tables containing <tr> elements with rowspan or <td> elements with colspan attributes end up as sequences of lines of text, not embedded in a table environment like longtableand devoid of both line endings (\\) […]
Text Mining: Detect Strings: Very Fast Word Lookup in a Large Dictionary in R with data.table and matrixStats
Looking up words in dictionaries is the alpha and omega of text mining. I am, for instance interested to know whether a given word from a large dictionary (>100k words) occurs in a sentence or not, for a list of over 1M sentences. The best take at this task is using the Julia language, but […]
Premiers pas avec R et RStudio
Cet exercice a pour préalable d’avoir installé R et RStudio. Se familiariser avec l’interface Ouvrez RStudio. Vous devriez voir l’interface comme à l’image ci-dessous, pour l’heure sans la partie A. La partie C est en principe vide: Les fonctions de ces différentes parties sont les suivantes: A : Fenêtre d’édition des fichiers sources. Ici vous […]
Installer R et RStudio comme logiciels indépendants
R est un langage de programmation. Pour que les programmes écrits en R fonctionnent, il est nécessaire d’installer au préalable un environnement d’exécution pour ce langage. RStudio est un environnement de développement (en anglais: IDE: Integrated development environment) pour R. Installer l’environnement d’exécution du langage de programmation R Vous pouvez installer R et RStudio comme […]
Installer R, RStudio et Orange Data Mining avec Miniconda
Conda est un gestionnaire de logiciels d’analyse de données et de visualisation extrêmement répandu dans les milieux scientifiques. Sa version minimale, Miniconda, permet d’installer et de tenir à jour plusieurs logiciels dont R, RStudio et Orange Data Mining. Installez-le en suivant les instructions ci-dessous. Dans tous les cas, si un choix est proposé choisissez la […]
Create a subgraph from the neighborhood of specific vertices in igraph
Many user of igraph for R expect the functions ego() and make_ego_graph() , that take a list of vertices as input, to generate a new graph composed of the neighbors of these vertices. Unfortunately, these functions do no such thing. They generate a list of igraph.vs objects, which cannot be further treated as an igraph […]
Visualiser des données avec R (2): réductions dimensionnelles, clustering, composantes principales
Cet exercice fait suite à l’exercice Visualiser des données avec R (1). Il présuppose que vous avez chargé les données et créé les variables de cet exercice précédent, faute de quoi les scripts R ci-dessous ne fonctionneront pas. De 1 dimension à 0 dimensions: le nombre unique qui résume les données Réduisons d’abord nos données […]
Unknown column? Force encoding of an entire table from “unknown” to “UTF-8” in R on Windows
A common knitr issue on Windows Running R scripts on a Windows machine is equivalent to a dive into enconding hell. In effect, your non-English data most likely contains characters like Ä, ü, è or š, or even 语言. In all cases, the only serious way of dealing with these, in fact with any data […]
Cleaning up PDFs of pre-1990s scanned texts for text mining in R with Quanteda
Text sources are often PDF’s. If optical character recognition (OCR) has been applied, the pdftools R package allows you to extract text from all PDFs to text files stored in a folder. The readtext package converts the set of text files into something useful for Quanteda. Nevertheless, some cleaning is necessary before transforming your text […]
|
__label__pos
| 0.541681 |
Dis Max Query
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
This is useful when searching for a word in multiple fields with different boost factors (so that the fields cannot be combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost, not the sum of the field scores (as Boolean Query would give). If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching another gets a higher score than "albino" matching both fields. To get this result, use both Boolean Query and DisjunctionMax Query: for each term a DisjunctionMaxQuery searches for it in each field, while the set of these DisjunctionMaxQuery’s is combined into a BooleanQuery.
The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that include this term in only the best of those multiple fields, without confusing this with the better case of two different terms in the multiple fields.The default tie_breaker is 0.0.
This query maps to Lucene DisjunctionMaxQuery.
GET /_search
{
"query": {
"dis_max" : {
"tie_breaker" : 0.7,
"boost" : 1.2,
"queries" : [
{
"term" : { "age" : 34 }
},
{
"term" : { "age" : 35 }
}
]
}
}
}
|
__label__pos
| 0.999488 |
6.12.3 3D Scatter Plot with Line Projections of Core Drilling Locations
Summary
This tutorial will demonstrate how to create a 3D scatter plot and show the projections of the plots.
3D Scatter FinalGraph.png
Minimum Origin Version Required: Origin 2015 SR0
Steps
This tutorial is associated with the Tutorial Data.opj file under <Origin EXE Folder>\Samples\.
1. Open the Tutorial Data.opj file and browse to the folder 3D Scatter with Line Projections, activate the workbook 3DScatterPlot.
2. Activate the The_First_Curve_of_3D_Scatter worksheet. The column designation for the three columns is already set as XYZ so you could directly create a 3D scatter plot. Highlight column C and select Plot: 3D Symbol/Bar/Vector: 3D Scatter to create the plot.
3. Activate The Second Curve of 3D Scatter worksheet. Highlight column C. Move the mouse cursor to the right edge of the selection area until the shape of the cursor changes to DragAndDrop.png. Hold down the left mouse button and drag the highlighted data into the newly created graph window. The resulting graph should appear as shown below:
3D Scatter FirstGraph.png
4. Double-click on the Z axis to open the Axis dialog, go to the Scale tab with Z icon on left panel selected and reverse the Z axis by exchanging From and To values.
3D Scatter Z.png
5. Click OK to close dialog. Select Format: Layer Properties from the main menu to open the Plot Details dialog. Alternatively you can double-click on the plot to bring up the dialog. If the left panel is not visible, use the Plot Details Expansion.png button on the bottom left of the dialog box to open it up. Expand the Layer1 node. Choose the first plot and select All Together from the Edit Dependencies page. This applies the same settings to the original data and all the projections.
3D Scatter FirstPlot.png
6. Expand the first plot node and select Original and XY/XY/YZ Projection check box.
7. Set the dialog options to those shown in the screenshot below. To set the color, click on the Color button and choose the desired option from the Individual Color drop down. Since the All Together radio button was selected in the previous step, these settings automatically apply to the projections as well.
3D Scatter FirstPlot Line.png
8. Select the Symbol tab and change the Shape to Cube:
3D Scatter FirstPlot Symbol.png
9. In a similar fashion, select the second plot and set the dialog options to match those in the screenshots below:
3D Scatter 2Plot.png
3D Scatter 2Plot Line.png
3D Scatter 2Plot Symbol.png
10. The final graph should appear as shown below:
3D Scatter FinalGraph.png
|
__label__pos
| 0.993428 |
• "Bwoah." - Generic Kimi Quotes.
1. This site uses cookies. By continuing to use this site, you are agreeing to our use of cookies. Learn More.
2. If you have any questions, please don't hesitate to ask. There's no such thing as a stupid question.
Tech Question..
Discussion in 'rFactor' started by Dave Cerio, Nov 27, 2015.
1. Dave Cerio
Dave Cerio
Messages:
151
Ratings:
+55
I just built a new computer.
Can I simply copy my rFactor folder from my old computer to a thumb drive, and move it to the new computer and play? Or do I need to download the game again?
2. Quas
Quas
Premium
Messages:
1,678
Ratings:
+822
You can install rflite on your new pc, then just overwrite with your old rfactor folder from the thumb drive, i don't think it will work if you just move the folder.
3. Dave Cerio
Dave Cerio
Messages:
151
Ratings:
+55
|
__label__pos
| 0.850394 |
Please create an account to participate in the Slashdot moderation system
Forgot your password?
typodupeerror
Check out the new SourceForge HTML5 internet speed test! No Flash necessary and runs on all devices. ×
Comment Re:I get the impression that (Score 1) 180
I used to do "big data" and "cloud" computing when it was called clusters.
Did you run one process with multiple threads across all of those machines, or was threading less of an issue once you started thinking about distributed computing?
I can say this with a certainty: Anything other than a compiled language with low level facilities is a pure waste of time and money.
Isn't that what Numba does? Compiling Python code using LLVM and being able to understand numpy data structures? I'm still not sure I understand what threading has to do with this. The OP said threading was an issue, but threading doesn't
While with Java you at least get some safety for big projects
Safety? Job security?
Comment Re:I get the impression that (Score 1) 180
Both languages suffer from the global interpreter lock defect and will require a rewrite in the next 5-10 years if the languages have any chance of surviving in the servers.
You don't really understand big data, if you think it needs to run on ONE computer.
This is only a problem if you think threading is the solution to scaling CPU computations across hundreds of computers. If you generalized your code to run on hundreds of computers, there is no reason you can't run a process per core for your multicore machines.
Comment Re:Great. Just Great (Score 1) 180
You have no idea what he's talking about? It was pretty clear: factions within the US government wants these tools to datamine all the ISP data they have been snarfing up so they can spy on everyone in the world. Saying that you believe otherwise is a pretty extreme view
He has no idea why there is ranting about open source code that everyone in the world can use for any purposes. Did you rant about git being open source? I'm betting the gov't can use that to manage code related to data mining. Do you rant about postgres or any of the databases used by the US gov't? Would postgres suddenly become evil because the gov't threw some money their way?
Comment Re:Great. Just Great (Score 1) 180
Yeah the govt needs better systems to manage the huge databases and dossiers they are building on everybody with their warrentless wiretaps and reading everybody's emails. Anybody who helps with this project is pretty damn naive if they don't think it will also be used for this.
Isn't this true of all useful open source projects?
Comment Re:Photographer should say "Go ahead" (Score 1) 667
The DMCA doesn't restrict speech. Your contract with your ISP that you enter in gives the ISP the right to take it down. You've waived your right by signing your contract (in the same that NDAs work). The gov't is not forcing the ISP to take it down, so there is no free speech issue here.
Comment Re:Photographer should say "Go ahead" (Score 1) 667
The rest of the DMCA is absolute rubbish, but not the take down provision.
The takedown doesn't provide for any punishment whatsoever. An ISP is free to ignore the takedown notice, but they lose their safe haven against liability if they ignore it and it turns out to be illegal. You are free to choose an ISP that won't preemptively drop your site. She happened to choose an ISP that handles take downs in the manner illustrated in the article.
Candice Schwager stupidly choose an ISP that shuts down sites and then she stupidly took the fight to the internet. epic fail.
Slashdot Top Deals
Remember: use logout to logout.
Working...
|
__label__pos
| 0.530272 |
For questions about the history of programming and computing.
learn more… | top users | synonyms
30
votes
2answers
2k views
Why did the Haskell committee choose monads to represent I/O?
The Clean language uses uniqueness types to handle I/O in a purely functional setting. Why did the Haskell committee go with monads instead? Were there other proposals for handling state that the ...
8
votes
4answers
341 views
Origin of structures and classes
What design and implementation issues did programmers have to solve when they decided first to use structures and classes? When did this happened and who were the pioneers behind these ideas? Note, ...
9
votes
2answers
3k views
Is the 14th line of The Zen of Python a reference to Dijkstra?
Python's Zen states on line 14 that: Although that way may not be obvious at first unless you're Dutch. Is this a reference to the famous Dutch computer scientist Edsger W. Dijkstra?
2
votes
2answers
2k views
Why is 24 lines a common default terminal height?
80x24 characters seems to be a very common default for terminal windows. This answer provides a very good historical reason as to why the width is 80 characters. But why is the height commonly 24 ...
310
votes
9answers
78k views
Why is 80 characters the 'standard' limit for code width?
Why is 80 characters the "standard" limit for code width? Why 80 and not 79, 81 or 100? What is the origin of this particular value?
38
votes
3answers
9k views
What were the “core” API packages of Java 1.0?
Reading about the Google v Oracle case, I came across these questions (apparently from the presiding Judge) ... Is it agreed that the following is true, at least as of 1996? The following ...
264
votes
4answers
98k views
What software programming languages were used by the Soviet Union's space program?
I got interested in the Soviet space program and was interested to discover that the software on the Buran spacecraft circa 1988 was written in Prolog. Does anyone know what languages might have ...
2
votes
4answers
1k views
Why is a database represented as a cylinder in architecture drawings? [closed]
I'm a young upstart programmer and I've never actually seen a database. But computers in large part come in boxes, not in cylinders so I was wondering why they are always represented as a cylinder ...
4
votes
3answers
3k views
What was the most used programming language before C was created? [closed]
C is a language written between '69 and '73 according to WIkipedia. I imagine it made programming a whole lot easier and opened the gate for other programming languages. My question, however, is what ...
16
votes
1answer
4k views
What did they call Object-Oriented Programming before Alan Kay invented the term?
Alan Kay claims that "I made up the term "object-oriented", and I can tell you I did not have C++ in mind." What he had in mind, of course, was Smalltalk. But he did not make up object-oriented ...
5
votes
5answers
3k views
False friends? Keyword “static” in C compared to C++, C# and Java
To me, the use of the keyword static in C and languages like C# and Java are "false friends" like "to become" in English and "bekommen" in German (= "to get" in English), because they mean different ...
7
votes
1answer
192 views
Strategy for restoring state via URL in web apps
This is a question about modern web apps, where a single page is loaded, and all subsequent navigation is done by XHR calls and modifying the DOM. We can use libraries that manipulate the hash ...
7
votes
7answers
12k views
What was the first programming language written for computers? [closed]
Looking at so many programming languages we have today, each one being unique in it's own way, I've tried to figure out what the first programming language written for computers is. Looking at the ...
2
votes
4answers
740 views
Which applications have driven the mass spread of floating point units? [closed]
Floating point units are standard on CPUs today, and even desktops might use them today (3D effects). However, I wonder which applications have initially driven the development and mass adoption of ...
19
votes
5answers
3k views
Why are several popular programming languages influenced by C? [closed]
The Top 10 programming languages, according to the TIOBE index seem to be heavily influenced by C: 1. Java The language derives much of its syntax from C and C++ but has a simpler object model ...
3
votes
2answers
345 views
What is the origin of sessions, by whom and when were they created?
I know sessions are in PHP, ASP, ASP.NET and probably in many other languages. In most cases session solve a problem which the language itself fails to solve. I am wondering Who created sessions and ...
3
votes
1answer
374 views
How to explain the history of programming to non-programmers? [closed]
Sorry if this question is not appropriate for this stack exchange site, I've never used this one before. I am doing my senior project on computer programming. I'm going to be presenting the project ...
11
votes
3answers
947 views
Why didn't the C++ Standard adopt expression templates?
It's my understanding that expression templates as a technique were discovered significantly prior to the original C++ Standard in 1998. Why weren't they used to improve the performance of several ...
3
votes
2answers
488 views
Google Closure Compiler - what does the name mean?
I am curious about the Google Closure Compiler. Why did they name it that? Does it have anything to do with lexical closures? EDIT: I tried researching it in the FAQ and documentation, as well as ...
54
votes
6answers
3k views
Why the Select is before the From in a SQL Query? [closed]
This is something that bothered me a lot at school. 5 years ago, when I learned SQL, I always wondered why we specify first the fields we want and then where we want them from. According to my idea, ...
6
votes
2answers
550 views
Who decided on the terminology downcasting and upcasting?
As far as I know, the terminology comes from how the inheritance hierarchy is traditionally displayed, with the extending types at the bottom, and the parent types at the top. This is a bit ...
6
votes
2answers
1k views
In C++, were SFINAE and metaprogramming intentional or just a byproduct of templates?
SFINAE and template metaprogramming can do wonderful things and many libraries also use them considerably. Historically both of these "magic concepts" were intentionally introduced/supported in C++ ? ...
13
votes
5answers
5k views
Why pointer symbol and multiplication sign are same in C/C++?
I am writing a limited C/C++ code parser. Now, multiplication and pointer signs give me really a tough time, as both are same. For example, int main () { int foo(X * p); // forward declaration ...
9
votes
1answer
985 views
Who first created or popularized the original XMLHTTPRequest / MSXML?
I'm trying to understand the origins of AJAX, and think MSXML and XMLHTTPRequest were the objects that started it all. Which came first, and/or became the defacto way to create dynamic pages?
28
votes
14answers
4k views
Why isn't rich code formatting more common?
I was reading Code Complete and in the chapter on layout and style, he was predicting that code editors would use some sort of rich text formatting. That means, instead of code looking like this ...
35
votes
6answers
15k views
What is the history of why bytes are eight bits?
What where the historical forces at work, the tradeoffs to make, in deciding to use groups of eight bits as the fundamental unit ? There were machines, once upon a time, using other word sizes, ...
3
votes
3answers
557 views
strcpy memcpy reason for parameter order
Answering a question about order of parameters it struck me that strcpy (and family) are the wrong way round. Copy should be src -> destination. Is there a historical or architectural reason for the ...
5
votes
2answers
3k views
Do any languages use =/= for the inequality operator?
Wikipedia says: Not equal The symbol used to denote inequation — when items are not equal — is a slashed equals sign "≠" (Unicode 2260). Most programming languages, limiting themselves ...
15
votes
5answers
805 views
Why is $ in identifier names for so many languages?
A lot of scripting languages like Perl, Awk, Bash, PHP, etc. use a $ sign before identifier names. I've tried to look up why but never had a satisfactory answer.
15
votes
2answers
2k views
What happened to Concurrent C? [closed]
I recently checked out a fantastic book from my college library. It is a bit old at 1989, but the language it describes sounds rather nice. And even while I may not be using it soon, I wanted to ...
1
vote
2answers
79 views
Where can I find an authoritative source documenting the relationships between standards bodies? [closed]
There seems to be a number of different so-called "standards" bodies interacting and/or competing with each other. For example, RFC 3629 references ISO/IEC 10646 and appears to be governed by the ...
36
votes
1answer
2k views
Where does the term “Red/Black Tree” come from?
A Red/Black Tree is one way to implement a balanced binary search tree. The principles behind how it works make sense to me, but the chosen colors don't. Why red and black, as opposed to any other ...
2
votes
2answers
242 views
Reasons for the build-technological fork between Java and UNIX/C/Fortran
When Java was developed, it's designers chose to discard an unusual amount of the conventional wisdom established in the UNIX and C oriented toolchains. For one (and in my view) the most major ...
54
votes
10answers
20k views
Why has C prevailed over Pascal? [closed]
My understanding is that in the 1980s, and perhaps in the 1990s too, Pascal and C were pretty much head-to-head as production languages. Is the ultimate demise of Pascal only due to Borland's neglect ...
16
votes
9answers
2k views
What were the reasons why Windows never had a decent shell? [closed]
I was reading a topic on SO: Why are scripting languages (e.g. ...) not suitable as shell languages?. Especially I liked the answer by Jörg W Mittag, from which I learned interesting things about ...
7
votes
3answers
724 views
Did concept of ViewModel exist before MVVM?
Today I was having a discussion with a colleague that ViewModel is a general concept and existed before MVVM pattern. I believe ViewModel term is used anytime you create a class with Model like ...
10
votes
5answers
1k views
Who invented pointers?
Pretty simple question, but something I haven't been able to find out. Who was the first person to describe the idea of a pointer? The abstract concept itself?
62
votes
1answer
3k views
What task did Dijkstra give volunteers, which was mentioned in his paper “The Humble Programmer”?
In Dijkstra's paper "Humble Programmer", he mentions that he gave some volunteers a problem to solve: “I have run a little programming experiment with really experienced volunteers, but something ...
1
vote
3answers
8k views
Sequel vs S-Q-L [duplicate]
Possible Duplicate: What's the history of the non-official pronunciation of SQL? I hear it every so often, "In sequel server...", and for some reason I cringe every time. Maybe it's ...
5
votes
1answer
432 views
Why is the dollar sign used to abbreviate the description of a cache? [closed]
What's the historical significance of abbreviating say, an L1 cache as L1$?
27
votes
3answers
3k views
What is the origin and meaning of the phrase “Lambda the ultimate?”
I've been messing around with functional programming languages for a few years, and I keep encountering this phrase. For example, it is a chapter of "The Little Schemer, which certainly predates the ...
20
votes
11answers
1k views
Is there a reason for initial overconfidence of scientists and engineers working on artificial intelligence in the 1960s?
I just started an AI & Data Mining class, and the book. AI Application Programming, starts off with an overview of the history of AI. The first chapter deals with the history of AI from the 1940s ...
34
votes
3answers
11k views
How could the first C++ compiler be written in C++?
Stroustrup claims that Cfront, the first C++ compiler, was written in C++ (Stroustrup FAQ). However, how is it even possible that the first C++ compiler be written in C++? The code that makes up the ...
16
votes
4answers
2k views
What was the first hierarchical file system?
"Directories containing directories and files" seems to have been around forever, but there must have been a first.
18
votes
5answers
1k views
What good reasons are there to capitalise SQL keywords?
There seem to be a lot of developers who write their SQL by capitalising the keywords: SELECT column FROM table INNER JOIN table ON condition WHERE condition GROUP BY clause HAVING ...
6
votes
5answers
551 views
Java without implementation inheritance
In a recent video on Java, Joshua Bloch states at 4 minutes 20 seconds into the video: And then there's inheritance, and that was a marketing necessity. You know, we can argue whether you really ...
9
votes
3answers
392 views
Where can I read the original C# introduction paper by Microsoft? [closed]
When Microsoft presented .NET Framework and C# language in 2002, what was the first article to introduce C#? I'm looking for some paper published on MSDN or Microsoft website that would explain the ...
4
votes
2answers
324 views
What methods were used for online payments before API's and Paypal, etc
What methods (in programming/web dev terms) were used to take payments online before such things as Paypal, Google Checkout and various gateways and API's. How were such transactions carried out?
38
votes
12answers
4k views
Why does the assignment operator assign to the left-hand side?
I began teaching a friend programming just recently (we're using Python), and when we began discussing variable creation and the assignment operator, she asked why the value on the right is assigned ...
3
votes
1answer
391 views
Who is the father of CIL?
...formerly known as MSIL, simple question, it is widely known that Anders Hejlsberg is the father of C#, but is there a "father of CIL"?
|
__label__pos
| 0.981105 |
Dotty Documentation
0.13.0-bin-SNAPSHOT
object Checking
extends Object
[-] Constructors
[-] Members
[+] class CheckNonCyclicMap
A type map which checks that the only cycles in a type are F-bounds and that protects all F-bounded references by LazyRefs.
[+] def checkAppliedType ( tree: AppliedTypeTree , boundsCheck: Boolean ) ( implicit ctx: Context ) : Unit
Check applied type trees for well-formedness. This means - all arguments are within their corresponding bounds - if type is a higher-kinded application with wildcard arguments, check that it or one of its supertypes can be reduced to a normal application. Unreducible applications correspond to general existentials, and we cannot handle those.
[+] def checkBounds ( args: List [ Tree ] , boundss: List [ TypeBounds ] , instantiate: (Type, List [ Type ]) => Type ) ( implicit ctx: Context ) : Unit
A general checkBounds method that can be used for TypeApply nodes as well as for AppliedTypeTree nodes. Also checks that type arguments to *-type parameters are fully applied.
[+] def checkBounds ( args: List [ Tree ] , tl: TypeLambda ) ( implicit ctx: Context ) : Unit
Check that type arguments args conform to corresponding bounds in tl Note: This does not check the bounds of AppliedTypeTrees. These are handled by method checkBounds in FirstTransform
[+] def checkDerivedValueClass ( clazz: Symbol , stats: List [ Tree ] ) ( implicit ctx: Context ) : Unit
Verify classes extending AnyVal meet the requirements
[+] def checkInstantiable ( tp: Type , posd: Positioned ) ( implicit ctx: Context ) : Unit
Check that tp refers to a nonAbstract class and that the instance conforms to the self type of the created class.
[+] def checkNoPrivateLeaks ( sym: Symbol , pos: SourcePosition ) ( implicit ctx: Context ) : Type
Check the type signature of the symbol M defined by tree does not refer to a private type or value which is invisible at a point where M is still visible.
As an exception, we allow references to type aliases if the underlying type of the alias is not a leak, and if sym is not a type. The rationale for this is that the inferred type of a term symbol might contain leaky aliases which should be removed (see leak-inferred.scala for an example), but a type symbol definition will not contain leaky aliases unless the user wrote them, so we can ask the user to change his definition. The more practical reason for not transforming types is that checkNoPrivateLeaks can force a lot of denotations, and this restriction means that we never need to run TypeAssigner#avoidPrivateLeaks on type symbols when unpickling, which avoids some issues related to forcing order.
See i997.scala for negative tests, and i1130.scala for a case where it matters that we transform leaky aliases away.
[+] def checkNonCyclic ( sym: Symbol , info: Type , reportErrors: Boolean ) ( implicit ctx: Context ) : Type
Check that info of symbol sym is not cyclic.
[+] def checkNonCyclicInherited ( joint: Type , parents: List [ Type ] , decls: Scope , posd: Positioned ) ( implicit ctx: Context ) : Unit
Check type members inherited from different parents of joint type for cycles, unless a type with the same name aleadry appears in decls.
[+] def checkRealizable ( tp: Type , posd: Positioned , what: String ) ( implicit ctx: Context ) : Unit
Check that type tp is realizable.
[+] def checkRealizable$default$3 : String
Check that type tp is realizable.
[+] def checkRefinementNonCyclic ( refinement: Tree , refineCls: ClassSymbol , seen: Set [ Symbol ] ) ( implicit ctx: Context ) : Unit
Check that refinement satisfies the following two conditions 1. No part of it refers to a symbol that's defined in the same refinement at a textually later point. 2. All references to the refinement itself via this are followed by selections. Note: It's not yet clear what exactly we want to allow and what we want to rule out. This depends also on firming up the DOT calculus. For the moment we only issue deprecated warnings, not errors.
[+] def checkWellFormed ( sym: Symbol ) ( implicit ctx: Context ) : Unit
Check that symbol's definition is well-formed.
[+] def preCheckKind ( arg: Tree , paramBounds: Type ) ( implicit ctx: Context ) : Tree
Check that kind of arg has the same outline as the kind of paramBounds. E.g. if paramBounds has kind * -> *, arg must have that kind as well, and analogously for all other kinds. This kind checking does not take into account variances or bounds. The more detailed kind checking is done as part of checkBounds in PostTyper. The purpose of preCheckKind is to do a rough test earlier in Typer, in order to prevent scenarios that lead to self application of types. Self application needs to be avoided since it can lead to stack overflows. Test cases are neg/i2771.scala and neg/i2771b.scala. A NoType paramBounds is used as a sign that checking should be suppressed.
[+] def preCheckKinds ( args: List [ Tree ] , paramBoundss: List [ Type ] ) ( implicit ctx: Context ) : List [ Tree ]
|
__label__pos
| 0.565841 |
Meta Stack Exchange is where users like you discuss bugs, features, and support issues that affect the software powering all 158 Stack Exchange communities.
What is meta?
Here's how it works:
1. Any Stack Exchange user can ask a question
2. The community provides support, votes on ideas, and reports bugs
3. Your voice helps shape the way Stack Exchange operates
Possible Duplicate:
What is a “closed” question?
Once I got close link in questions after getting enough rep on Stack Overflow but once I voted a question to close it and the link is gone forever. Now I never seen close link below any question Why?
Edit
Its my mistake I have seen some times the link below some questions during my question may be and i try to click on it and vote for close so I post question here
share|improve this question
marked as duplicate by kiamlaluno, Manishearth, jonsca, jadarnel27, Pops Jul 30 '12 at 15:39
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
So you still have any problem or something is not clear? – Shadow Wizard Jun 21 '12 at 7:25
I got the answer and every thing is clear now thanks to all of you – The iOSDev Jun 21 '12 at 7:27
up vote 14 down vote accepted
At your reputation level, you can only see the vote-to-close dialog on your own questions. You don't get the privilege for all questions until you reach 3,000.
share|improve this answer
Not the answer you're looking for? Browse other questions tagged .
|
__label__pos
| 0.80984 |
[Haskell-cafe] Type constraints for class instances
Krzysztof Skrzętnicki gtener at gmail.com
Thu Mar 20 22:00:52 EDT 2008
Hello everyone,
I'm working on a small module for comparing things incomparable with Ord.
More precisely I want to be able to compare equal infinite lists like [1..].
Obviously
(1) compare [1..] [1..] = _|_
It is perfectly reasonable for Ord to behave this way.
Hovever, it doesn't have to be just this way. Consider this class
class YOrd a where
ycmp :: a -> a -> (a,a)
In a way, it tells a limited version of ordering, since there is no
way to get `==` out of this.
Still it can be useful when Ord fails. Consider this code:
(2) sort [ [1..], [2..], [3..] ]
It is ok, because compare can decide between any elements in finite time.
However, this one
(3) sort [ [1..], [1..] ]
will fail because of (1). Compare is simply unable to tell that two
infinite list are equivalent.
I solved this by producing partial results while comparing lists. If
we compare lists
(1:xs)
(1:ys)
we may not be able to tell xs < ys, but we do can tell that 1 will be
the first element of both of smaller and greater one.
You can see this idea in the code below.
--- cut here ---
{-# OPTIONS_GHC -O2 #-}
module Data.YOrd where
-- Well defined where Eq means equality, not only equivalence
class YOrd a where
ycmp :: a -> a -> (a,a)
instance (YOrd a) => YOrd [a] where
ycmp [] [] = ([],[])
ycmp xs [] = ([],xs)
ycmp [] xs = ([],xs)
ycmp xs'@(x:xs) ys'@(y:ys) = let (sm,gt) = x `ycmp` y in
let (smS,gtS) = xs `ycmp` ys in
(sm:smS, gt:gtS)
ycmpWrap x y = case x `compare` y of
LT -> (x,y)
GT -> (y,x)
EQ -> (x,y) -- biased - but we have to make our minds!
-- ugly, see the problem below
instance YOrd Int where
ycmp = ycmpWrap
instance YOrd Char where
ycmp = ycmpWrap
instance YOrd Integer where
ycmp = ycmpWrap
-- ysort : sort of mergesort
ysort :: (YOrd a) => [a] -> [a]
ysort = head . mergeAll . wrap
wrap :: [a] -> [[a]]
wrap xs = map (:[]) xs
mergeAll :: (YOrd a) => [[a]] -> [[a]]
mergeAll [] = []
mergeAll [x] = [x]
mergeAll (a:b:rest) = mergeAll ((merge a b) : (mergeAll rest))
merge :: (YOrd a) => [a] -> [a] -> [a]
merge [] [] = []
merge xs [] = xs
merge [] xs = xs
merge (x:xs) (y:ys) = let (sm,gt) = x `ycmp` y in
sm : (merge [gt] $ merge xs ys)
--- cut here ---
I'd like to write the following code:
instance (Ord a) => YOrd a where
ycmp x y = case x `compare` y of
LT -> (x,y)
GT -> (y,x)
EQ -> (x,y)
But i get an error "Undecidable instances" for any type [a].
Does anyone know the way to solve this?
Best regards
Christopher Skrzętnicki
More information about the Haskell-Cafe mailing list
|
__label__pos
| 0.879057 |
arrays
Ads
Share on Google+Share on Google+
Bertha Mambala
arrays
1 Answer(s) 5 years ago
Posted in : Java Beginners
write a program that reads in a text typed in by the user and produces a list of disinct words in alphabetical order and how many times each word appears in the passage.
Ads
View Answers
May 29, 2012 at 1:07 PM
The given code accepts the string text from the user and display distinct words in ascending order. The code also displays the frequency of each word.
import java.util.*;
public class CountWordOccurrence {
public static void main(String[] args){
Scanner input=new Scanner(System.in);
System.out.print("Enter string: ");
String str=input.nextLine();
HashMap<String, Integer> map = new HashMap<String, Integer>();
str = str.toLowerCase();
int count = -1;
for (int i = 0; i < str.length(); i++) {
if ((!Character.isLetter(str.charAt(i))) || (i + 1 == str.length())) {
if (i - count > 1) {
if (Character.isLetter(str.charAt(i)))
i++;
String word = str.substring(count + 1, i);
if (map.containsKey(word)) {
map.put(word, map.get(word) + 1);
}
else{
map.put(word, 1);
}
}
count = i;
}
}
List sortedKeys=new ArrayList(map.keySet());
Collections.sort(sortedKeys);
System.out.println("List of words in ascending order: ");
for(int i=0;i<sortedKeys.size();i++){
System.out.println(sortedKeys.get(i).toString());
}
for(Map.Entry entry : map.entrySet()) {
System.out.println(entry.getKey()+" : "+ entry.getValue());
}
}
}
Ads
Related Tutorials/Questions & Answers:
arrays
arrays using arrays in methods Java use arrays in methods import java.util.*; class ArrayExample{ public static int getMaxValue(int[] arr){ int maxValue = arr[0]; for(int i=1;i < arr.length;i
arrays
Store a table with students and information (name, ID, password, crypted password, etc) in a multi-dimensional array "stud" Arrays and Strings: Store a table with students and information (name, ID, password, crypted password
Advertisements
Arrays
Arrays Hi I need help with the following exercises. Exercise 1: Write a Java application in which the user is prompted for the total number of integer values to be stored in an array. Initialize the array with random values
Arrays
called Rebel.java; 2.Create two arrays. One of them must store integer numbers... the numbers in the second array by 3; 7.Print out the contents of both arrays. 8.Swap
Arrays in java
Arrays in java what is param array in java
java arrays
java arrays i need a java program to store student details like id,name,addr,marks,average,total..using arrays..input data using scanner class and by using class, object and constructor
java arrays
java arrays can i know how can we intilize the three dimentional arrays in java? and how can we assign thae values to that array
Java with Arrays
Java with Arrays I was given the assignment to create two parallel arrays; both of size 50 (declares 50 as a constant) One is an array of strings... and store it in the arrays. The input for the problem should be provided in a text
java arrays
java arrays how do you write the code for multipliying 2 double arrays to be in a 3rd double array? here is my code: package employeepay; /** * * @author Owner */ public class Main { /** * @param args the command line
How to create arrays in JavaScript?
How to create arrays in JavaScript? How to create arrays in JavaScript
Compare two Byte Arrays?
Compare two Byte Arrays? Compare two Byte Arrays
Comparing arrays not working correctly?
Comparing arrays not working correctly? Comparing arrays not working correctly
create arrays in JavaScript
create arrays in JavaScript How to create arrays in JavaScript
arrays in java - Java Beginners
arrays in java Hi All, I have two arrays. in this two array some name are same. I want to merge those arrays into single. But while merging I want to delete duplicate entries. How merge those arrays. Thanks, mln15584
java; arrays - Java Beginners
java arrays example How can you create a program, by using arrays and the output would be X. by using char or string.Thank you
Are arrays primitive data types?
Are arrays primitive data types? Hi, Are arrays primitive data types? thanks Hi, In Java, Arrays are objects. Identifier are some simple variable names which is defined as the value container. The type of value
What is javascript Arrays?
What is javascript Arrays? Hi, I am learner of JavaScript. My question is that What is JavaScript Arrays ? How to Define the JavaScript arrays in you program. Please give an example show that i will try my self. Thanks
combine two arrays in php
combine two arrays in php combine two arrays in php $newArray = array_combine($diff, $pages_name['data']); foreach ($newArray as $data) { var_dump($data); exit('zzzzz'); echo $data . '<br>
|
__label__pos
| 0.969352 |
Here's the question you clicked on:
55 members online
• 0 replying
• 0 viewing
agusginanjar
• 3 years ago
a +b = c a x b = c what the value of a, b, and c with value of a &b not 2 & 0
• This Question is Open
1. joemath314159
• 3 years ago
Best Response
You've already chosen the best response.
Medals 1
If you knew what c was, then solving the system\[a+b=c\]\[ab=c\]Is the same as solving the quadratic equation:\[x^2-cx+c=0\]where the solutions to the quadratic are a and b. Using the quadratic formula yields:\[x=\frac{-c\pm \sqrt{c^2-4c}}{2}\] Now we probably want these solutions to be real, so we need to make sure:\[c^2-4c\ge 0\]If c is positive, and not 0, then we divide by c to get:\[c-4\ge 0\Longrightarrow c\ge 4\] So pick any number for c such that c is greater than 4. Say 10. Then:\[a=\frac{-10+\sqrt{60}}{2}\]\[b=\frac{-10-\sqrt{60}}{2}\]does the job.
2. joemath314159
• 3 years ago
Best Response
You've already chosen the best response.
Medals 1
Note there is not one answer to this question. You can pick infinitely many values of c, which will produce a and b accordingly.
3. joemath314159
• 3 years ago
Best Response
You've already chosen the best response.
Medals 1
whoops typo, should be:\[a=\frac{10+\sqrt{60}}{2}\]\[b=\frac{10-\sqrt{60}}{2}\]
4. RadEn
• 3 years ago
Best Response
You've already chosen the best response.
Medals 0
ab = a+b ab - a = b a(b-1) = b a = b/(b-1), with b not 1 just plug b E real but not for b=0,1,and 2 u will get values a and get order pairs : (a1,b1),(a2,b2),.... so, infinitely solutions
5. Not the answer you are looking for?
Search for more explanations.
• Attachments:
Ask your own question
Sign Up
Find more explanations on OpenStudy
Privacy Policy
|
__label__pos
| 0.877267 |
+------------------------------------------------------------------------
|
| Blackmail FLI Editor
|
+------------------------------------------------------------------------
|
| Features (see Explanations for details):
|
| - 1 Bitmap, 8 Screen-RAMs (FLI), Color-RAM
| - Multicolor pixels
|
| Memory-Structure ($3B00-$8000, 17664 Bytes, 70 Blocks):
|
| $3C00 - $3FE7: Color-RAM
| $4000 - $43E7: Screen-RAM
| $4400 - $47E7: Screen-RAM
| $4800 - $4BE7: Screen-RAM
| $4C00 - $4FE7: Screen-RAM
| $5000 - $53E7: Screen-RAM
| $5400 - $57E7: Screen-RAM
| $5800 - $5BE7: Screen-RAM
| $5C00 - $5FE7: Screen-RAM
| $6000 - $7F3F: Bitmap
|
+------------------------------------------------------------------------
|
__label__pos
| 0.949443 |
INTEROFFICE MEMORANDUM DATE: April 2, 1991 FROM: Info. Center Johnson PRODUCT: WordPerfect VERSION: 5.1 RELEASE DATE: All SUBJECT: Mail Merging ASCII Delimited Files Where do these files come from? ASCII delimited files mostly come from database packages such as dBASE III or IV; however, they can also come from spreadsheet programs. To save a dBASE III or IV file in ASCII delimited format, do the following command at the Dot prompt in dBASE (the prompt is actually a small dot). USE (name of the database file) COPY TO (file to be saved in ASCII Delimited format) DELE. What ASCII delimited files look like? ASCII is simply DOS text or a general text format recognized by IBM compatible computers. Delimited means separated. If you were to look at an ASCII delimited file, it would look like normal text with a comma or other characters to separate the fields, and probably a new line to separate a record. The following is an example of an ASCII file without delimiters. John Doe 100 N. 200 W. Provo UT 84057 Tom Smith 100 N. 200 W. Provo UT 84057 Below is an ASCII file with delimiters, and it is called ASCII Delimited. "John","Doe","100 N. 200 W.","Provo","UT","84057" "Tom","Smith","100 N. 200 W.","Provo","UT","84057" ÀÄÄÄþ The comma is the field delimiter A new line (Carriage return and a line feed) indicate a new record. The quotation marks simply indicate that the information was a character field, and the quotations are stripped out since they are not needed. The field delimiter does not have to be a comma; it could be almost any character; however, commas are generally used. Similarly a new line (Carriage return and a line feed) indicates a new record. You can merge directly with an ASCII file without converting it to a secondary merge file if you have WP51. To do this you must be in WordPerfect, press Ctrl-F9, 1 and type the name of the primary file. When WP asks for secondary file, press F5 and change to the directory which contains the ASCII text file, highlight the ASCII text file, and press 1 for Retrieve. WordPerfect is able to detect that this is an ASCII delimited file and a menu appears which asks for the beginning and ending delimiters for the fields and records. In most cases, the beginning delimiter for a field will be a quotation mark ("), and the ending delimiter for a field will be a quotation mark with a comma ", and you should use a single " without the comma if it does not merge properly. There generally will not be a beginning record delimiter, but the ending delimiter will be a [CR] for carriage return which can be obtained by pressing Ctrl-M. When you are sure that you have entered the correct information, press Enter and the merge will be performed. If you desire to change the initial defaults for the delimiter characters, press Shift-F1,4, 1 and enter the delimiter character. This allows you alter initial settings permanently for the beginning and ending delimiter characters used for the fields and records. Memo ID: WP51_4833B
|
__label__pos
| 0.86495 |
Advertisement
1. SEJ
2. ⋅
3. Core Web Vitals
What Is Largest Contentful Paint: An Easy Explanation
Learn about Largest Contentful Paint – what it is, how it's measured, and how to optimize for it to improve your Core Web Vitals score.
Largest Contentful Paint (LCP) is a Google user experience metric that became a ranking factor in 2021.
This guide explains what LCP is and how to achieve the best scores.
What is Largest Contentful Paint?
LCP is a measurement of how long it takes for the main content of a page to download and be ready to be interacted with.
What is measured is the largest image or block of context within the user viewport. Anything that extends beyond the screen does not count.
Typical elements measured are images, video poster images, background images, and block-level text elements like paragraph tags.
Why is LCP Measured?
LCP was chosen as a key metric for the Core Web Vitals score because it accurately measures how fast a webpage can be used.
Additionally, it is easy to measure and optimize for.
Block-level Elements Used to Calculate the LCP Score
Block-level elements used for calculating the Largest Contentful Paint score can be the <main> and <section> elements, as well as the heading, div, form elements.
Any block-level HTML element that contains text elements can be used, as long as it’s the largest one.
Not all elements are used. For example, the SVG and VIDEO elements are not currently used for calculating the Largest Contentful Paint.
LCP is an easy metric to understand because all you have to do is look at your webpage and determine what the largest text block or image is and then optimize it by making it smaller or removing anything that would prevent it from downloading quickly.
Because Google includes most sites in the mobile-first index, it’s best to optimize the mobile viewport first, then the desktop.
Delaying Large Elements Might Not Help
Sometimes a webpage will render in parts. A large featured image might take longer to download than the largest text block-level element.
What happens, in this case, is that a PerformanceEntry is logged for the largest text block-level element.
But when the featured image at the top of the screen loads, if that element takes up more of the user’s screen (their viewport), then another PerformanceEntry object will be reported for that image.
Images Can Be Tricky for LCP Scores
Web publishers commonly upload images at their original size and then use HTML or CSS to resize the image to display at a smaller size.
The original size is what Google refers to as the “intrinsic” size of the image.
If a publisher uploads an image that’s 2048 pixels wide and 1152 pixels in height, that 2048 x 1152 height and width are considered the “intrinsic” size.
Now, if the publisher resizes the 2048 x 1152 pixel image to 640 x 360 pixels, the 640×360 size image is called the visible size.
For the purposes of calculating the image size, Google uses whichever size is smaller between the intrinsic and visible size images.
Note About Image Sizes
It’s possible to achieve a high Largest Contentful Paint score with a large intrinsic size image that is resized with HTML or CSS to be smaller.
But it’s a best practice to make the intrinsic size of the image match the visible size.
The image will download faster and your Largest Contentful Paint score will go up.
How LCP Handles Images Served from Another Domain
Images served from another domain, like from a CDN, are generally not counted in the Largest Contentful Paint calculation.
Publishers who want to have those resources be a part of the calculation need to set what’s called a Timing-Allow-Origin header.
Adding this header to your site can be tricky because if you use a wildcard (*) in the configuration, then it could open your site up to hacking events.
In order to do it properly, you would have to add a domain that’s specific to Google’s crawler in order to whitelist it so that it can see the timing information from your CDN.
So at this point, resources (like images) that are loaded from another domain (like from a CDN) will not be counted as part of the LCP calculation.
Beware These Scoring “Gotchas”
All elements that are in the user’s screen (the viewport) are used to calculate LCP. That means that images that are rendered off-screen and then shift into the layout once they are rendered may not count as part of the Largest Contentful Paint score.
On the opposite end, elements that start out in the user viewport and then get pushed off-screen may be counted as part of the LCP calculation.
How to Get the LCP Score
There are two kinds of scoring tools. The first one is called Field Tools, and the second one is called Lab Tools.
Field tools are actual measurements of a site.
Lab tools give a virtual score based on a simulated crawl using algorithms that approximate Internet conditions that a typical user on a mobile phone might encounter.
How to Optimize for Largest Contentful Paint
There are three main areas to optimize (plus one more for JavaScript Frameworks):
1. Slow servers.
2. Render-blocking JavaScript and CSS.
3. Slow resource load times.
A slow server can be an issue with DDOS levels of hacking and scraper traffic on a shared or VPS host. You may find relief by installing a WordPress plugin like WordFence to find out if you’re experiencing a massive onslaught and then block it.
Other issues could be the misconfiguration of a dedicated server or VPS. A typical issue can be the amount of memory allotted to PHP.
Another issue could be outdated software like an old PHP version or CMS software that is outdated.
The worst-case scenario is a shared server with multiple users that are slowing down your box. In that case, moving to a better host is the answer.
Typically, applying fixes like adding caching, optimizing images, fixing render-blocking CSS and JavaScript, and pre-loading certain assets can help.
Google has a neat tip for dealing with CSS that’s not essential for rendering what the user sees:
“Remove any unused CSS entirely or move it to another stylesheet if used on a separate page of your site.
For any CSS not needed for initial rendering, use loadCSS to load files asynchronously, which leverages rel=”preload” and onload.
<link rel=”preload” href=”stylesheet.css” as=”style” onload=”this.rel=’stylesheet'”>”
Field Tools for LCP Score
Google lists three field tools:
The last one – Chrome User Experience Report – requires a Google account and a Google Cloud Project. The first two are more straightforward.
Lab Tools for LCP Score
Lab measurements are simulated scores.
Google recommends the following tools:
The first two tools are provided by Google. The third tool is provided by a third party.
Citations
How to Optimize for LCP
What is LCP?
Timing Attacks and the Timing-Allow-Origin Header
Featured image credit: Paulo Bobita
Category SEO
ADVERTISEMENT
SEJ STAFF Roger Montti Owner - Martinibuster.com at Martinibuster.com
I have 25 years hands-on experience in SEO and have kept on top of the evolution of search every step ...
Core Web Vitals: A Complete Guide
|
__label__pos
| 0.611675 |
block_core_navigation_get_classic_menu_fallback_blocks
函数
block_core_navigation_get_classic_menu_fallback_blocks ( $classic_nav_menu )
参数
• (object) $classic_nav_menu WP_Term The classic navigation object to convert.
Required:
返回值
• (array) the normalized parsed blocks.
定义位置
相关方法
block_core_navigation_get_classic_menu_fallbackblock_core_navigation_get_fallback_blocksblock_core_navigation_maybe_use_classic_menu_fallbackblock_core_navigation_get_menu_items_at_locationblock_core_navigation_render_submenu_icon
引入
-
弃用
6.3.0
将经典导航转换为区块。
function block_core_navigation_get_classic_menu_fallback_blocks( $classic_nav_menu ) {
_deprecated_function( __FUNCTION__, '6.3.0', 'WP_Navigation_Fallback::get_classic_menu_fallback_blocks' );
// BEGIN: Code that already exists in wp_nav_menu().
$menu_items = wp_get_nav_menu_items( $classic_nav_menu->term_id, array( 'update_post_term_cache' => false ) );
// Set up the $menu_item variables.
_wp_menu_item_classes_by_context( $menu_items );
$sorted_menu_items = array();
foreach ( (array) $menu_items as $menu_item ) {
$sorted_menu_items[ $menu_item->menu_order ] = $menu_item;
}
unset( $menu_items, $menu_item );
// END: Code that already exists in wp_nav_menu().
$menu_items_by_parent_id = array();
foreach ( $sorted_menu_items as $menu_item ) {
$menu_items_by_parent_id[ $menu_item->menu_item_parent ][] = $menu_item;
}
$inner_blocks = block_core_navigation_parse_blocks_from_menu_items(
isset( $menu_items_by_parent_id[0] )
? $menu_items_by_parent_id[0]
: array(),
$menu_items_by_parent_id
);
return serialize_blocks( $inner_blocks );
}
常见问题
FAQs
查看更多 >
|
__label__pos
| 0.918204 |
MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.
Sign up
Here's how it works:
1. Anybody can ask a question
2. Anybody can answer
3. The best answers are voted up and rise to the top
Hi,
I need help to prove that, for $ N = \big\lfloor \frac{1}{2}n\log(n)+cn \big\rfloor $ with $c \in \mathbb R $ and $0 \leq k \leq n: $
$$ \lim_{n\rightarrow +\infty} \dbinom{n}{k} \frac{\dbinom{\binom{n - k}{2}}{N} }{\dbinom{\binom{n}{2} }{N}} = \frac{e^{-2kc}}{k!} $$
share|cite|improve this question
3
Jacques Carette has provided a nice suggestion. However, unless you specify what sort of help you need, I and others are going to think this question unsuitable for MathOverflow. (It may be unsuitable after you provide motivation or explain your difficulty, but you may get more sympathetic treatment. Also, if you have trouble with Jacques answer, you may find it best to ask on math.stackexchange instead.) Gerhard "Ask Me About System Design" Paseman, 2011.12.02 – Gerhard Paseman Dec 2 '11 at 18:42
3
You may also want to give a more precise reference: which Erdős paper? – j.c. Dec 2 '11 at 20:24
Note that the limit is not in general correct if $k$ is a function of $n$. I'll assume you meant us to assume it is constant or very slowly growing.
You don't need a computer. Just remember this one: $$ \binom{M}{t} = \frac{M^t}{t!} \exp\biggl( -\frac{t(t-1)}{2M} + O(t^3/M^2)\biggr), $$ as $M\to\infty$. The variable $t$ can be a function of $M$ provided $t^3/M^2$ is bounded. You can prove this using Stirling's formula, but it is easier to just take the logarithm of both sides and use the Taylor expansion of the logarithm.
Apply this to the three binomials in your problem and simplify. This will also tell you how fast $k$ can increase before the limit changes.
For your second problem $t^3/M^2$ doesn't go to zero, so you need the next term inside the exponential, which is $$ -\frac{t(t-1)(2t-1)}{12M^2} $$ and the error term is then $O(t^4/M^3)$. If you don't care about precise error terms, put this together and infer that whenever $t=o(M^{3/4})$, $$ \binom{M}{t} = \frac{M^t}{t!} \exp\biggl( -\frac{t^2}{2M} -\frac{t^3}{6M^2} + o(1)\biggr). $$
share|cite|improve this answer
Hi,
thank you for your calculation, in fact I wanted to generalize the method to obtain the following limit:
for $ N' = \lfloor n^2 log(n)+cn^2 \rfloor $ with $c \in \mathbb R $ and $0 \leq k \leq n $
$$ \lim_{n\rightarrow +\infty} \dbinom{n}{k} \frac{\dbinom{3 \binom{n-k}{3} }{N'} }{\dbinom{3 \binom{n}{3} }{N'}} $$
but I think it's a little hard without Maple, so if you could give me the value of this limit with your method it would help me a lot.
Friendly.
share|cite|improve this answer
Mathematica can do the limit automatically. Just define the function and use the Limit[ ] command. – David Harris Dec 2 '11 at 20:26
Yes, but I have no symbolic computation program. So if you could give me the result I would be delighted. – Bob Dec 2 '11 at 20:34
@David Harris: as per Brendan McKay's answer, unless Mathematica's answer is piecewise, then it is wrong. In other words, there is a phase transition depending on the exact value of c. – Jacques Carette Dec 3 '11 at 2:21
First note that $\binom{m}{2} = \frac{m(m-1)}{2}$ and use that to get rid of the nested binomials. Also, the floor function will not (here) make any difference, so ignore it.
Then convert all binomials to their $\Gamma$ equivalents, and use Stirling's formulas for each term. The next step is the messiest, as you'll have a lot of arithmetic to perform on the result, which will give you the result.
This is sufficiently mechanical that, using Maple, I can quickly derive that $$ \frac{e^{-2kc}}{k!} + \frac{-\frac{1}{2}e^{-2kc}((4c+k+1)\ln{n}+2kc+4c^2+\ln^2{n}-1+2c+k)}{n (k-1)!}+O(n^{-2})$$
Of course, that second term might not be quite right, since the previously ignored floor function might here contribute, I have not checked that.
share|cite|improve this answer
1
I think you are allowed to ignore the floor function in your computation. Let's introduce some suitable $c_n$ such that $N=\frac{1}{2}n\log(n)+c_n n$. So $c_n=c+O(1/n)$, and since Stirling's formula also holds for $\Gamma$, your computation has to be correct with $c_n$ in place of $c$. Therefore it is also correct with $c$ as it is now -maybe with $O(n^{-2}\ln n)$ in place of $O(n^{-2})$. – Pietro Majer Dec 2 '11 at 18:40
Thanks Pietro. I should have seen it myself... – Jacques Carette Dec 2 '11 at 19:43
Your Answer
discard
By posting your answer, you agree to the privacy policy and terms of service.
Not the answer you're looking for? Browse other questions tagged or ask your own question.
|
__label__pos
| 0.992383 |
Beefy Boxes and Bandwidth Generously Provided by pair Networks
Don't ask to ask, just ask
PerlMonks
Comment on
( #3333=superdoc: print w/replies, xml ) Need Help??
One of the core ideas behind Moose is that of metaprogramming. That is; don't write programs - write programs which write programs.
For example, rather than defining our attributes the old-fashioned way, like:
sub some_attribute { my $self = shift; $self->{some_attribute} = shift if @_; return $self->{some_attribute}; }
We just write:
has some_attribute => (is => 'rw');
The has function is a "program which writes programs". It makes our sub some_attribute for us!
So the solution is to write something that does the same sort of job as has, but has some domain-specific knowledge about grabbing raw data, unpacking it, etc. Here's an example (untested):
{ package Binary::Humax::HmtData; use DateTime; use Moose; has raw_data_block => ( is => 'rw', isa => 'Str', required => 1, ); # Shortcut function for defining attributes # sub _my_has { my ($name, %spec) = @_; my $meta = __PACKAGE__->meta; # Default attribute to 'rw' $spec{is} //= 'rw'; # Set up lazy builder if (my $unpack = delete $spec{unpack}) { $spec{lazy} //= 1; $spec{builder} //= "_build_$name"; if (my $postprocess = delete $spec{postprocess}) { $meta->add_method($spec{builder}, sub { my $self = shift; local $_ = unpack($unpack, $self->raw_data_block); $postprocess->(); }); } else { $meta->add_method($spec{builder}, sub { my $self = shift; return unpack($unpack, $self->raw_data_block); }); } } $meta->add_attribute($name, \%spec); } # Now use that shortcut to define each attribute. # _my_has last_play => (isa => 'Int', unpack => '@5 S'); _my_has chan_num => (isa => 'Int', unpack => '@17 S'); _my_has start_time => ( isa => 'DateTime', unpack => '@5 S', postprocess => sub { DateTime->from_epoch(epoch => $_, time_zone => 'GMT'); }, ); _my_has file_name => (isa => 'Str', unpack => '@33 A512'); # Create an alternative constructor which wraps "new". # sub new_from_file { my ($class, $filename) = @_; open my $fh, '>', $filename; my $slurp = do { local $/ = <$fh> }; return $class->new(r); } } # # USAGE # my $hmt_data = Binary::Humax::HmtData->new_from_file($path_name); my $field = $hmt_data->start_time;
"Secondly, when complete, I plan to upload the module to CPAN. Have I chosen a good name for it, or should it live in a different name space?"
No, it seems like a bad name. You're putting it in "Binary" because it's a binary file format. But presumably end users of your module won't care whether it's a binary file format, a text-based one, or XML-based; they don't care about the file format at all, because they've downloaded your module to abstract those sort of details away, haven't they?
I would have thought something in the "TV" namespace more fitting.
UPDATE:; we can go even "more meta" by replacing our has workalike with an attribute trait. This has the advantage of allowing introspection of each attribute to read back its "unpack" code.
{ package Binary::Humax::HmtData::Trait::Attribute; use Moose::Role; has unpack => (is => 'ro', isa => 'Str'); has postprocess => (is => 'ro', isa => 'CodeRef'); before _process_options => sub { my ($meta, $name, $spec) = @_; if ($spec->{unpack}) { $spec->{lazy} //= 1; $spec->{builder} //= "_build_$name"; $spec->{is} //= 'rw'; } }; after attach_to_class => sub { my $attr = shift; my $class = $attr->associated_class; my $unpack = $attr->unpack or return; if (my $postprocess = $attr->postprocess) { $class->add_method($attr->builder, sub { my $self = shift; local $_ = unpack($unpack, $self->raw_data_block); $postprocess->(); }); } else { $class->add_method($attr->builder, sub { my $self = shift; return unpack($unpack, $self->raw_data_block); }); } }; } { package Binary::Humax::HmtData; use DateTime; use Moose; use constant MAGIC => 'Binary::Humax::HmtData::Trait::Attribute'; has raw_data_block => ( is => 'rw', isa => 'Str', required => 1, ); has last_play => ( traits => [ MAGIC ], isa => 'Int', unpack => '@5 S', ); has chan_num => ( traits => [ MAGIC ], isa => 'Int', unpack => '@17 S', ); has start_time => ( traits => [ MAGIC ], isa => 'DateTime', unpack => '@5 S', postprocess => sub { DateTime->from_epoch(epoch => $_, time_zone => 'GMT'); }, ); has file_name => ( traits => [ MAGIC ], isa => 'Str', unpack => '@33 A512', ); } # Attribute introspection print Binary::Humax::HmtData->meta->get_attribute('start_time')->unpac +k, "\n";
The slight ugliness with this method is that the attribute trait has some knowledge of the class it's being applied to - it knows that the class has a raw_data_block attribute. With a little more work that problem could be eliminated.
perl -E'sub Monkey::do{say$_,for@_,do{($monkey=[caller(0)]->[3])=~s{::}{ }and$monkey}}"Monkey say"->Monkey::do'
In reply to Re: Binary data structure to moose class. by tobyink
in thread Binary data structure to moose class. by chrestomanci
Title:
Use: <p> text here (a paragraph) </p>
and: <code> code here </code>
to format your post; it's "PerlMonks-approved HTML":
• Posts are HTML formatted. Put <p> </p> tags around your paragraphs. Put <code> </code> tags around your code and data!
• Titles consisting of a single word are discouraged, and in most cases are disallowed outright.
• Read Where should I post X? if you're not absolutely sure you're posting in the right place.
• Please read these before you post! —
• Posts may use any of the Perl Monks Approved HTML tags:
a, abbr, b, big, blockquote, br, caption, center, col, colgroup, dd, del, div, dl, dt, em, font, h1, h2, h3, h4, h5, h6, hr, i, ins, li, ol, p, pre, readmore, small, span, spoiler, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, tt, u, ul, wbr
• You may need to use entities for some characters, as follows. (Exception: Within code tags, you can put the characters literally.)
For: Use:
& &
< <
> >
[ [
] ]
• Link using PerlMonks shortcuts! What shortcuts can I use for linking?
• See Writeup Formatting Tips and other pages linked from there for more info.
• Log In?
Username:
Password:
What's my password?
Create A New User
Chatterbox?
and all is quiet...
How do I use this? | Other CB clients
Other Users?
Others having an uproarious good time at the Monastery: (4)
As of 2017-02-26 22:14 GMT
Sections?
Information?
Find Nodes?
Leftovers?
Voting Booth?
Before electricity was invented, what was the Electric Eel called?
Results (376 votes). Check out past polls.
|
__label__pos
| 0.980057 |
Ludic Logo
Edit
Click To Edit
The click to edit pattern provides a way to offer inline editing of all or part of a record without a page refresh. In this example, we create a customer which we'll be able to edit.
Demo
#
Implementation
#
Before we start implementing the endpoint rendering the HTML form, we also want to display the data of the customer. The task can be achieved by implementing the Contact endpoint which renders the data as a description list. The endpoint also requires the contact's attributes which we describe like this:
from typing import Annotated
from ludic.attrs import Attrs
from ludic.catalog.forms import FieldMeta
class ContactAttrs(Attrs):
id: NotRequired[str]
first_name: Annotated[str, FieldMeta(label="First Name")]
last_name: Annotated[str, FieldMeta(label="Last Name")]
email: Annotated[
str, FieldMeta(label="Email", type="email", parser=email_validator)
]
You may have noticed we used the Annotated marker. The reason for that is that we want to automatically create a form based on this specification later. There is also the email_validator parser which can parse and validate email of the customer. The parser could be as simple as this:
from ludic.web.parsers import ValidationError
def email_validator(email: str) -> str:
if len(email.split("@")) != 2:
raise ValidationError("Invalid email")
return email
The contact endpoint handles rendering of the customer data as well as the GET and PUT HTTP methods:
from typing import Self, override
from ludic.catalog.forms import Form, create_fields
from ludic.catalog.layouts import Cluster, Stack
from ludic.catalog.items import Pairs
from ludic.web import Endpoint, LudicApp
from ludic.web.exceptions import NotFoundError
from ludic.web.parsers import Parser
from your_app.attrs import ContactAttrs
from your_app.database import db
app = LudicApp()
@app.endpoint("/contacts/{id}")
class Contact(Endpoint[ContactAttrs]):
@classmethod
async def get(cls, id: str) -> Self:
contact = db.contacts.get(id)
if contact is None:
raise NotFoundError("Contact not found")
return cls(**contact.dict())
@classmethod
async def put(cls, id: str, attrs: Parser[ContactAttrs]) -> Self:
contact = db.contacts.get(id)
if contact is None:
raise NotFoundError("Contact not found")
for key, value in attrs.validate().items():
setattr(contact, key, value)
return cls(**contact.dict())
@override
def render(self) -> Stack:
return Stack(
Pairs(items=self.attrs.items()),
Cluster(
Button(
"Click To Edit",
hx_get=self.url_for(ContactForm),
),
),
hx_target="this",
)
We created a class-based endpoint with the following methods:
• get – handles the GET request which fetches information about the contact from the database, and returns them as the Contact component to be rendered as HTML.
• put – handles the PUT request which contains form data. Since we later use the create_rows method, it is possible to use a parser to automatically convert the form data into a dictionary. First, we check that we have the contact in our database, than we validate the data submitted by the user and update the contact in our database, and finally, we return the updated contact as the Contact component.
• render – handles rendering of the contact data and a button to issue the HTMX swap operation replacing the content with an editable form.
The last remaining piece is the ContactForm component:
@app.endpoint("/contacts/{id}/form/")
class ContactForm(Endpoint[ContactAttrs]):
@classmethod
async def get(cls, id: str) -> Self:
contact = db.contacts.get(id)
if contact is None:
raise NotFoundError("Contact not found")
return cls(**contact.dict())
@override
def render(self) -> Form:
return Form(
*create_fields(self.attrs, spec=ContactAttrs),
Cluster(
ButtonPrimary("Submit"),
ButtonDanger("Cancel", hx_get=self.url_for(Contact)),
),
hx_put=self.url_for(Contact),
hx_target="this",
)
This component only implements two methods:
• get – handles the GET request which renders the form containing the contact data stored in the database.
• render – handles rendering of the contact form, a button to submit the form, and also a button to cancel the edit operation. We use the HTMX swap operation to replace the form with the Contact component on submit or cancel.
Made with Ludic and HTMX and 🐍 • DiscordGitHub
|
__label__pos
| 0.830156 |
Close
14/12/2020
What is manual document management system?
What is manual document management system?
An document management system is a way to automate manual processes. And that makes it a key part of a digital transformation for any organization.
What is document control procedure?
Document control procedures set the framework for how documents are approved, updated or amended, how changes are tracked, how documents are published (internally or externally), and how documents are made obsolete. …
What is meant by document control?
Document control refers to the practice and profession of enforcing document management standards within a given workplace or other definable scope. Document control practices ensure that all of the data concerning any given element of the workplace is accurate and in agreement with each other.
How do you create a control document?
How to start a document control system
1. Step 1: Identify documents and workflows.
2. Step 2: Establish ownership and quality standards.
3. Step 3: Name and classify documents.
4. Step 4: Create revision protocols.
5. Step 5: Manage security and access.
6. Step 6: Classify and archive documents to ensure version control.
What does a document management system do?
Document management is a system or process used to capture, track and store electronic documents such as PDFs, word processing files and digital images of paper-based content.
What is a manual filing system?
Definition. A manual filing system is “a structured set of personal data that are accessible according to certain criteria.”
How do you create a control document in Excel?
The key steps to adding document control to an excel spreadsheet
1. Click on the print / print preview button.
2. Click Page Setup.
3. Select Header Footer tab.
4. Click custom header and add in your information.
5. Click customer footer and add in your information.
6. Click OK (again) when you are done.
7. Close the Print Preview page.
What are the features of document management system?
Top 7 document management features you need today.
• Cloud access. These days, practically everything in business takes place online.
• Intelligent organization.
• An attractive user interface.
• A robust search feature.
• Version control.
• Permissions.
• Universal format support.
How to control documents more effectively?
Staff Designation. Although your business is small,you probably have multiple departments,such as accounting,human resources,customer service and marketing.
• Document Coding. By giving each document an identification code,the document is unique to its purpose and department.
• Electronic Control.
• Paper Control.
• Considerations.
• What is a Standard Operating Procedure Manual?
A standard operating procedures manual is a written document that lists the instructions, step-by-step, on how to complete a job task or how to handle a specific situation when it arises in the workplace. Before developing a standard operating procedures manual,…
What does document control mean to you?
Document control is a process used to ensure that documents and the information they contain are current and valid and that any changes to the document are tracked and reviewed. For this reason, it’s common to use a seven-point document control audit checklist to ensure that the process is working.
What is document management procedure?
The document management process is a system in which documents are organized and stored for future reference. This process can be rudimentary, such as when someone puts receipts in a shoebox with little or no organization.
|
__label__pos
| 0.999998 |
2. implement Required<T>
TypeScript
- accepted / - tried
As the opposite of Partial<T>, Required<T> sets all properties of T to required.
Please implement MyRequired<T> by yourself.
// all properties are optional
type Foo = {
a?: string
b?: number
c?: boolean
}
const a: MyRequired<Foo> = {}
// Error
const b: MyRequired<Foo> = {
a: 'BFE.dev'
}
// Error
const c: MyRequired<Foo> = {
b: 123
}
// Error
const d: MyRequired<Foo> = {
b: 123,
c: true
}
// Error
const e: MyRequired<Foo> = {
a: 'BFE.dev',
b: 123,
c: true
}
// valid
Let's try to solve this problem within 5 minutes.
(30)
|
__label__pos
| 1 |
読者です 読者をやめる 読者になる 読者になる
Daily Creative Coding
元「30 min. Processing」。毎日、Creative Codingします。
円のループ
決定論
/**
* loop of circles
*
* @author aa_debdeb
* @date 2016/12/23
*/
int NUM = 24;
void setup(){
size(640, 640);
noStroke();
}
void draw(){
background(0, 0, 30);
translate(width / 2, height / 2);
for(int i = 0; i < NUM; i++){
float angle = i * TWO_PI / NUM;
float v = pow(abs(sin(angle / 2 + frameCount * 0.03)), 4);
float r = map(v, 0, 1, 10, 20);
fill(lerpColor(color(255, 0, 191), color(191, 255, 0), v));
ellipse((150 + r) * cos(angle), (150 + r) * sin(angle), r * 2, r * 2);
}
}
f:id:aa_debdeb:20161220213308j:plain
|
__label__pos
| 0.930924 |
Modules for expansion services, import and export in MISP http://misp.github.io/misp-modules
You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
164 lines
7.0 KiB
# -*- coding: utf-8 -*-
import json
from assemblyline_client import Client, ClientError
from collections import defaultdict
from pymisp import MISPAttribute, MISPEvent, MISPObject
misperrors = {'error': 'Error'}
mispattributes = {'input': ['link'], 'format': 'misp_standard'}
moduleinfo = {'version': '1', 'author': 'Christian Studer',
'description': 'Query AssemblyLine with a report URL to get the parsed data.',
'module-type': ['expansion']}
moduleconfig = ["apiurl", "user_id", "apikey", "password"]
class AssemblyLineParser():
def __init__(self):
self.misp_event = MISPEvent()
self.results = {}
self.attribute = {'to_ids': True}
self._results_mapping = {'NET_DOMAIN_NAME': 'domain', 'NET_FULL_URI': 'url',
'NET_IP': 'ip-dst'}
self._file_mapping = {'entropy': {'type': 'float', 'object_relation': 'entropy'},
'md5': {'type': 'md5', 'object_relation': 'md5'},
'mime': {'type': 'mime-type', 'object_relation': 'mimetype'},
'sha1': {'type': 'sha1', 'object_relation': 'sha1'},
'sha256': {'type': 'sha256', 'object_relation': 'sha256'},
'size': {'type': 'size-in-bytes', 'object_relation': 'size-in-bytes'},
'ssdeep': {'type': 'ssdeep', 'object_relation': 'ssdeep'}}
def get_submission(self, attribute, client):
sid = attribute['value'].split('=')[-1]
try:
if not client.submission.is_completed(sid):
self.results['error'] = 'Submission not completed, please try again later.'
return
except Exception as e:
self.results['error'] = f'Something went wrong while trying to check if the submission in AssemblyLine is completed: {e.__str__()}'
return
try:
submission = client.submission.full(sid)
except Exception as e:
self.results['error'] = f"Something went wrong while getting the submission from AssemblyLine: {e.__str__()}"
return
self._parse_report(submission)
def finalize_results(self):
if 'error' in self.results:
return self.results
event = json.loads(self.misp_event.to_json())
results = {key: event[key] for key in ('Attribute', 'Object', 'Tag') if (key in event and event[key])}
return {'results': results}
def _create_attribute(self, result, attribute_type):
attribute = MISPAttribute()
attribute.from_dict(type=attribute_type, value=result['value'], **self.attribute)
if result['classification'] != 'UNCLASSIFIED':
attribute.add_tag(result['classification'].lower())
self.misp_event.add_attribute(**attribute)
return {'referenced_uuid': attribute.uuid, 'relationship_type': '-'.join(result['context'].lower().split(' '))}
def _create_file_object(self, file_info):
file_object = MISPObject('file')
filename_attribute = {'type': 'filename'}
filename_attribute.update(self.attribute)
if file_info['classification'] != "UNCLASSIFIED":
tag = {'Tag': [{'name': file_info['classification'].lower()}]}
filename_attribute.update(tag)
for feature, attribute in self._file_mapping.items():
attribute.update(tag)
file_object.add_attribute(value=file_info[feature], **attribute)
return filename_attribute, file_object
for feature, attribute in self._file_mapping.items():
file_object.add_attribute(value=file_info[feature], **attribute)
return filename_attribute, file_object
@staticmethod
def _get_results(submission_results):
results = defaultdict(list)
for k, values in submission_results.items():
h = k.split('.')[0]
for t in values['result']['tags']:
if t['context'] is not None:
results[h].append(t)
return results
def _get_scores(self, file_tree):
scores = {}
for h, f in file_tree.items():
score = f['score']
if score > 0:
scores[h] = {'name': f['name'], 'score': score}
if f['children']:
scores.update(self._get_scores(f['children']))
return scores
def _parse_report(self, submission):
if submission['classification'] != 'UNCLASSIFIED':
self.misp_event.add_tag(submission['classification'].lower())
filtered_results = self._get_results(submission['results'])
scores = self._get_scores(submission['file_tree'])
for h, results in filtered_results.items():
if h in scores:
attribute, file_object = self._create_file_object(submission['file_infos'][h])
print(file_object)
for filename in scores[h]['name']:
file_object.add_attribute('filename', value=filename, **attribute)
for reference in self._parse_results(results):
file_object.add_reference(**reference)
self.misp_event.add_object(**file_object)
def _parse_results(self, results):
references = []
for result in results:
try:
attribute_type = self._results_mapping[result['type']]
except KeyError:
continue
references.append(self._create_attribute(result, attribute_type))
return references
def parse_config(apiurl, user_id, config):
error = {"error": "Please provide your AssemblyLine API key or Password."}
if config.get('apikey'):
try:
return Client(apiurl, apikey=(user_id, config['apikey']))
except ClientError as e:
error['error'] = f'Error while initiating a connection with AssemblyLine: {e.__str__()}'
if config.get('password'):
try:
return Client(apiurl, auth=(user_id, config['password']))
except ClientError as e:
error['error'] = f'Error while initiating a connection with AssemblyLine: {e.__str__()}'
return error
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if not request.get('config'):
return {"error": "Missing configuration."}
if not request['config'].get('apiurl'):
return {"error": "No AssemblyLine server address provided."}
apiurl = request['config']['apiurl']
if not request['config'].get('user_id'):
return {"error": "Please provide your AssemblyLine User ID."}
user_id = request['config']['user_id']
client = parse_config(apiurl, user_id, request['config'])
if isinstance(client, dict):
return client
assemblyline_parser = AssemblyLineParser()
assemblyline_parser.get_submission(request['attribute'], client)
return assemblyline_parser.finalize_results()
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
|
__label__pos
| 0.994804 |
OpenSesame
Rapunzel Code Editor
DataMatrix
Support forum
Python Tutorials
MindProbe
Python videos
datamatrix.operations
A set of operations to apply to columns and DataMatrix objects.
function auto_type(dm)
Requires fastnumbers
Converts all columns of type MixedColumn to IntColumn if all values are integer numbers, or FloatColumn if all values are non-integer numbers.
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 'a'
dm.B = 1
dm.C = 1.1
dm_new = ops.auto_type(dm)
print('dm_new.A: %s' % type(dm_new.A))
print('dm_new.B: %s' % type(dm_new.B))
print('dm_new.C: %s' % type(dm_new.C))
Output:
dm_new.A: <class 'datamatrix._datamatrix._mixedcolumn.MixedColumn'>
dm_new.B: <class 'datamatrix._datamatrix._numericcolumn.IntColumn'>
dm_new.C: <class 'datamatrix._datamatrix._numericcolumn.FloatColumn'>
Arguments:
• dm -- No description
• Type: DataMatrix
Returns:
No description
• Type: DataMatrix
function bin_split(col, bins)
Splits a DataMatrix into bins; that is, the DataMatrix is first sorted by a column, and then split into equal-size (or roughly equal-size) bins.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 1, 0, 3, 2, 4
dm.B = 'a', 'b', 'c', 'd', 'e'
for bin, dm in enumerate(ops.bin_split(dm.A, bins=3)):
print('bin %d' % bin)
print(dm)
Output:
bin 0
+---+---+---+
| # | A | B |
+---+---+---+
| 1 | 0 | b |
+---+---+---+
bin 1
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 1 | a |
| 3 | 2 | d |
+---+---+---+
bin 2
+---+---+---+
| # | A | B |
+---+---+---+
| 2 | 3 | c |
| 4 | 4 | e |
+---+---+---+
Arguments:
• col -- The column to split by.
• Type: BaseColumn
• bins -- The number of bins.
• Type: int
Returns:
A generator that iterates over the bins.
function fullfactorial(dm, ignore=u'')
Requires numpy
Creates a new DataMatrix that uses a specified DataMatrix as the base of a full-factorial design. That is, each value of every row is combined with each value from every other row. For example:
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=2)
dm.A = 'x', 'y'
dm.B = 3, 4
dm = ops.fullfactorial(dm)
print(dm)
Output:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | x | 3 |
| 1 | y | 3 |
| 2 | x | 4 |
| 3 | y | 4 |
+---+---+---+
Arguments:
• dm -- The source DataMatrix.
• Type: DataMatrix
Keywords:
• ignore -- A value that should be ignored.
• Default: ''
function group(dm, by)
Requires numpy
Groups the DataMatrix by unique values in a set of grouping columns. Grouped columns are stored as SeriesColumns. The columns that are grouped should contain numeric values. The order in which groups appear in the grouped DataMatrix is unpredictable.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=4)
dm.A = 'x', 'x', 'y', 'y'
dm.B = 0, 1, 2, 3
print('Original:')
print(dm)
dm = ops.group(dm, by=dm.A)
print('Grouped by A:')
print(dm)
Output:
Original:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | x | 0 |
| 1 | x | 1 |
| 2 | y | 2 |
| 3 | y | 3 |
+---+---+---+
Grouped by A:
+---+---+---------+
| # | A | B |
+---+---+---------+
| 0 | x | [0. 1.] |
| 1 | y | [2. 3.] |
+---+---+---------+
Arguments:
• dm -- The DataMatrix to group.
• Type: DataMatrix
• by -- A column or list of columns to group by.
• Type: BaseColumn, list
Returns:
A grouped DataMatrix.
• Type: DataMatrix
function keep_only(dm, *cols)
Removes all columns from the DataMatrix, except those listed in cols.
Version note: As of 0.11.0, the preferred way to select a subset of columns is using the dm = dm[('col1', 'col2')] notation.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 'a', 'b', 'c', 'd', 'e'
dm.B = range(5)
dm.C = range(5, 10)
dm_new = ops.keep_only(dm, dm.A, dm.C)
print(dm_new)
Output:
+---+---+---+
| # | A | C |
+---+---+---+
| 0 | a | 5 |
| 1 | b | 6 |
| 2 | c | 7 |
| 3 | d | 8 |
| 4 | e | 9 |
+---+---+---+
Arguments:
• dm -- No description
• Type: DataMatrix
Argument list:
• *cols: A list of column names, or column objects.
function random_sample(obj, k)
New in v0.11.0
Takes a random sample of k rows from a DataMatrix or column. The order of the rows in the returned DataMatrix is random.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 'a', 'b', 'c', 'd', 'e'
dm = ops.random_sample(dm, k=3)
print(dm)
Arguments:
• obj -- No description
• Type: DataMatrix, BaseColumn
• k -- No description
• Type: int
Returns:
A random sample from a DataMatrix or column.
• Type: DataMatrix, BaseColumn
function replace(col, mappings={})
Replaces values in a column by other values.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=3)
dm.old = 0, 1, 2
dm.new = ops.replace(dm.old, {0 : 'a', 2 : 'c'})
print(dm_new)
Output:
+---+---+---+
| # | A | C |
+---+---+---+
| 0 | a | 5 |
| 1 | b | 6 |
| 2 | c | 7 |
| 3 | d | 8 |
| 4 | e | 9 |
+---+---+---+
Arguments:
• col -- The column to weight by.
• Type: BaseColumn
Keywords:
• mappings -- A dict where old values are keys and new values are values.
• Type: dict
• Default: {}
function shuffle(obj)
Shuffles a DataMatrix or a column. If a DataMatrix is shuffled, the order of the rows is shuffled, but values that were in the same row will stay in the same row.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 'a', 'b', 'c', 'd', 'e'
dm.B = ops.shuffle(dm.A)
print(dm)
Output:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | a | d |
| 1 | b | c |
| 2 | c | a |
| 3 | d | b |
| 4 | e | e |
+---+---+---+
Arguments:
• obj -- No description
• Type: DataMatrix, BaseColumn
Returns:
The shuffled DataMatrix or column.
• Type: DataMatrix, BaseColumn
function shuffle_horiz(*obj)
Shuffles a DataMatrix, or several columns from a DataMatrix, horizontally. That is, the values are shuffled between columns from the same row.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.A = 'a', 'b', 'c', 'd', 'e'
dm.B = range(5)
dm = ops.shuffle_horiz(dm.A, dm.B)
print(dm)
Output:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 0 | a |
| 1 | b | 1 |
| 2 | c | 2 |
| 3 | d | 3 |
| 4 | 4 | e |
+---+---+---+
Argument list:
• *desc: A list of BaseColumns, or a single DataMatrix.
• *obj: No description.
Returns:
The shuffled DataMatrix.
• Type: DataMatrix
function sort(obj, by=None)
Sorts a column or DataMatrix. In the case of a DataMatrix, a column must be specified to determine the sort order. In the case of a column, this needs to be specified if the column should be sorted by another column.
The sort order is as follows:
• -INF
• int and float values in increasing order
• INF
• str values in alphabetical order, where uppercase letters come first
• None
• NAN
You can also sort columns (but not DataMatrix objects) using the built-in sorted() function. However, when sorting different mixed types, this may lead to Exceptions or (in the case of NAN values) unpredictable results.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=3)
dm.A = 2, 0, 1
dm.B = 'a', 'b', 'c'
dm = ops.sort(dm, by=dm.A)
print(dm)
Output:
+---+---+---+
| # | A | B |
+---+---+---+
| 1 | 0 | b |
| 2 | 1 | c |
| 0 | 2 | a |
+---+---+---+
Arguments:
• obj -- No description
• Type: DataMatrix, BaseColumn
Keywords:
• by -- The sort key, that is, the column that is used for sorting the DataMatrix, or the other column.
• Type: BaseColumn
• Default: None
Returns:
The sorted DataMatrix, or the sorted column.
• Type: DataMatrix, BaseColumn
function split(col, *values)
Splits a DataMatrix by unique values in a column.
Version note: As of 0.12.0, split() accepts multiple columns as shown below.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=4)
dm.A = 0, 0, 1, 1
dm.B = 'a', 'b', 'c', 'd'
# If no values are specified, a (value, DataMatrix) iterator is
# returned.
print('Splitting by a single column')
for A, sdm in ops.split(dm.A):
print('sdm.A = %s' % A)
print(sdm)
# You can also split by multiple columns at the same time.
print('Splitting by two columns')
for A, B, sdm in ops.split(dm.A, dm.B):
print('sdm.A = %s, sdm.B = %s' % (A, B))
# If values are specific an iterator over DataMatrix objects is
# returned.
print('Splitting by values')
dm_a, dm_c = ops.split(dm.B, 'a', 'c')
print('dm.B == "a"')
print(dm_a)
print('dm.B == "c"')
print(dm_c)
Output:
Splitting by a single column
sdm.A = 0
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 0 | a |
| 1 | 0 | b |
+---+---+---+
sdm.A = 1
+---+---+---+
| # | A | B |
+---+---+---+
| 2 | 1 | c |
| 3 | 1 | d |
+---+---+---+
Splitting by two columns
sdm.A = 0, sdm.B = a
sdm.A = 0, sdm.B = b
sdm.A = 1, sdm.B = c
sdm.A = 1, sdm.B = d
Splitting by values
dm.B == "a"
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 0 | a |
+---+---+---+
dm.B == "c"
+---+---+---+
| # | A | B |
+---+---+---+
| 2 | 1 | c |
+---+---+---+
Arguments:
• col -- The column to split by.
• Type: BaseColumn
Argument list:
• *values: Splits the DataMatrix based on these values. If this is provided, an iterator over DataMatrix objects is returned, rather than an iterator over (value, DataMatrix) tuples.
Returns:
A iterator over (value, DataMatrix) tuples if no values are provided; an iterator over DataMatrix objects if values are provided.
• Type: Iterator
function weight(col)
Weights a DataMatrix by a column. That is, each row from a DataMatrix is repeated as many times as the value in the weighting column.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=3)
dm.A = 1, 2, 0
dm.B = 'x', 'y', 'z'
print('Original:')
print(dm)
dm = ops.weight(dm.A)
print('Weighted by A:')
print(dm)
Output:
Original:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 1 | x |
| 1 | 2 | y |
| 2 | 0 | z |
+---+---+---+
Weighted by A:
+---+---+---+
| # | A | B |
+---+---+---+
| 0 | 1 | x |
| 1 | 2 | y |
| 2 | 2 | y |
+---+---+---+
Arguments:
• col -- The column to weight by.
• Type: BaseColumn
Returns:
No description
• Type: DataMatrix
function z(col)
Transforms a column into z scores.
Example:
from datamatrix import DataMatrix, operations as ops
dm = DataMatrix(length=5)
dm.col = range(5)
dm.z = ops.z(dm.col)
print(dm)
Output:
+---+-----+---------------------+
| # | col | z |
+---+-----+---------------------+
| 0 | 0 | -1.2649110640673518 |
| 1 | 1 | -0.6324555320336759 |
| 2 | 2 | 0.0 |
| 3 | 3 | 0.6324555320336759 |
| 4 | 4 | 1.2649110640673518 |
+---+-----+---------------------+
Arguments:
• col -- The column to transform.
• Type: BaseColumn
Returns:
No description
• Type: BaseColumn
|
__label__pos
| 0.880167 |
Where Can I Deploy Caligrafy?
9 minutes read
Caligrafy can be deployed in a variety of settings and contexts to enhance calligraphy and handwriting skills. Some potential deployment options include:
1. Educational Institutions: Caligrafy can be integrated into schools and universities to teach calligraphy as an art form or as part of writing curriculum. It can be used in classrooms, art studios, or dedicated calligraphy workshops.
2. Art and Craft Stores: Caligrafy can be deployed in stores that specialize in art supplies, offering a dedicated section for calligraphy tools and accessories. Customers can browse and choose from a wide range of pens, inks, papers, and other related items.
3. Online Marketplaces: Caligrafy can be made available on e-commerce platforms, enabling individuals to purchase calligraphy materials online. This allows users to conveniently access the items they need and have them delivered to their doorstep.
4. Calligraphy Studios: Dedicated calligraphy studios can deploy Caligrafy to offer workshops, training sessions, and other instructional programs to individuals interested in learning or refining their calligraphy skills. These studios can be standalone establishments or part of larger art communities.
5. Workplaces and Corporate Events: Companies organizing team-building activities, office events, or corporate training sessions may choose to deploy Caligrafy as a unique and engaging experience for employees. Calligraphy workshops can foster creativity, improve handwriting, and provide employees with a relaxing artistic outlet.
6. Libraries and Bookshops: Caligrafy can be introduced in libraries and bookshops to promote calligraphy as a creative endeavor and to arrange events such as calligraphy demonstrations, book signings, or calligraphy-themed book clubs. This allows enthusiasts to connect and explore their shared interest in calligraphy.
7. Cultural Festivals and Events: Caligrafy can be deployed at cultural gatherings, art fairs, or festivals dedicated to promoting different forms of art. This provides attendees with an opportunity to learn and practice calligraphy while immersing themselves in a vibrant artistic atmosphere.
Overall, Caligrafy can be deployed in numerous settings and cater to various audiences, depending on the goal of promoting calligraphy as an art form, improving handwriting skills, or providing a creative outlet for individuals to express themselves through beautiful writing.
Best Cloud Hosting Providers of September 2024
1
AWS
Rating is 5 out of 5
AWS
2
DigitalOcean
Rating is 4.9 out of 5
DigitalOcean
3
Vultr
Rating is 4.8 out of 5
Vultr
4
Cloudways
Rating is 4.6 out of 5
Cloudways
How to deploy Caligrafy on a local development environment?
To deploy Caligrafy on a local development environment, you can follow these steps:
1. Install the necessary dependencies: Node.js and npm: Make sure you have Node.js and npm installed on your computer. You can download them from the official Node.js website.
2. Clone the Caligrafy repository: git clone https://github.com/theapache64/caligrafy.git
3. Navigate to the project directory: cd caligrafy
4. Install project dependencies: npm install
5. Configure the environment variables: Create a .env file in the root directory of the project and configure the following variables: API_URL: The base URL of the Caligrafy API. For a local development environment, you can set it to http://localhost:8000/api/v1. FRONTEND_URL: The URL where you'll be running the Caligrafy frontend. For a local development environment, you can set it to http://localhost:3000.
6. Start the development server: npm start
7. Access Caligrafy on your local development environment: Open your preferred web browser and visit http://localhost:3000 to access the Caligrafy frontend.
That's it! You now have Caligrafy deployed on your local development environment.
What is the installation process for deploying Caligrafy?
To deploy Caligrafy, you can follow these steps:
1. Ensure that your server meets the system requirements for running Caligrafy. It typically requires a Linux-based server with PHP, MySQL, and Apache/Nginx installed.
2. Download the latest version of Caligrafy from the official website or the repository.
3. Extract the downloaded package to a directory on your server. You can use a command-line tool like tar or a GUI tool like FileZilla for this.
4. After extracting the package, navigate to the Caligrafy directory and review the configuration files. You will find a file named .env.example. Rename it to .env and update the necessary configuration values such as database credentials, app URL, and other settings.
5. Create a new database for Caligrafy on your MySQL server. You can use a tool like phpMyAdmin or the MySQL command-line client to do this.
6. Import the database schema for Caligrafy into the newly created database. The SQL file for the schema can be found in the Caligrafy package under the database directory.
7. Set the appropriate file permissions for the Caligrafy directory to ensure that the web server can read and write necessary files. Typically, you need to run commands like chmod -R 755 storage, chmod -R 755 bootstrap/cache, and chmod 777 public/uploads.
8. Once the files are in place and the database is configured, you can access Caligrafy by visiting the URL configured in your .env file. The installation wizard should guide you through the final steps of setting up your application.
9. Follow the on-screen instructions provided by the installation wizard, which typically involve configuring the admin account, email settings, and any additional customizations.
10. After completing the installation, make sure to remove or secure the installation directory to prevent unauthorized access. This can usually be done by deleting the install directory or by using server-level configurations to deny access.
11. Finally, test the Caligrafy deployment by logging in and ensuring that all features and modules are functioning correctly.
What is the recommended hosting provider for deploying Caligrafy?
Caligrafy is a web-based application, so any hosting provider that supports PHP and MySQL should work. Some popular hosting providers that are often recommended for deploying PHP applications include:
1. Bluehost
2. DreamHost
3. SiteGround
4. A2 Hosting
5. HostGator
These providers offer reliable hosting services with good customer support and easy integration with PHP applications. However, it ultimately depends on your specific requirements and budget. It is always recommended to carefully assess the features, pricing, and customer reviews of different hosting providers before making a decision.
What is the necessary configuration file to deploy Caligrafy?
The necessary configuration file to deploy Caligrafy is typically a Docker Compose file, named "docker-compose.yml". This file specifies the services, networks, volumes, and other configurations required to launch Caligrafy in a containerized environment using Docker or Docker Compose.
Here's an example of a basic Docker Compose file for deploying Caligrafy:
1
2
3
4
5
6
7
8
9
10
11
12
version: '3'
services:
caligrafy:
image: caligrafy:latest
ports:
- 8000:8000
volumes:
- ./config:/app/config
environment:
- DATABASE_URL=your-database-url
- SECRET_KEY=your-secret-key
- DEBUG=False
In this example, the Docker Compose file specifies a single service named "caligrafy". It pulls the "caligrafy:latest" image and exposes container port 8000, which is mapped to host port 8000. The "volumes" section mounts a local "config" directory to the "/app/config" directory inside the container. This allows you to customize the Caligrafy configurations by adding files in the local "config" directory.
Additionally, the "environment" section defines environment variables required by Caligrafy, such as the database URL, secret key, and debug mode.
Note that the specific configuration requirements may vary depending on your setup and deployment environment. It's recommended to refer to the Caligrafy documentation or any specific deployment instructions provided by the Caligrafy project for more details on the necessary configuration.
What is the difference between a local and remote deployment of Caligrafy?
Caligrafy is a software tool developed by OpenAI that offers handwriting synthesis capabilities. The main difference between a local deployment and a remote deployment of Caligrafy lies in the location where the software is executed.
1. Local deployment: In a local deployment, Caligrafy is installed and executed on a user's local machine or infrastructure. This means that all processing and computation happen on the user's device itself, without requiring a connection to external services or servers. Local deployment provides more control over data privacy and security since the data used by the software remains within the user's environment.
2. Remote deployment: In a remote deployment, Caligrafy is hosted and executed on a remote server or cloud infrastructure. Users interact with the software through an internet connection, sending input requests to the remote server, which processes the requests and returns the generated handwriting back to the user. Remote deployment allows for easier access, flexibility, and scalability since users can utilize the software from any device with an internet connection. It also offloads the computational load to the remote server, making it suitable for resource-intensive tasks.
Both local and remote deployments have their own advantages and considerations. Local deployment provides direct control over data and processing, while remote deployment offers convenience and scalability. The choice between the two depends on factors such as data privacy requirements, resource availability, and the need for accessibility.
Facebook Twitter LinkedIn Whatsapp Pocket
Related Posts:
Caligrafy is a powerful text analysis tool that allows you to extract valuable insights from textual data. Installing Caligrafy on Cloudways is a straightforward process that can be completed in a few simple steps.Start by logging in to your Cloudways account....
To deploy Ghost on cloud hosting, you need to follow these steps:Choose a cloud hosting provider: Before deploying Ghost, you need to select a cloud hosting provider that suits your requirements. Some popular options include AWS (Amazon Web Services), Google C...
To deploy a Golang application to a server, you can follow the steps outlined below:Choose a server: Begin by selecting a server where you wish to deploy your Golang application. This could be a cloud-based server, a Virtual Private Server (VPS), or a physical...
|
__label__pos
| 0.973021 |
File Objects in Java
The File object represents the actual file/directory on the disk. Here are the list of constructors to create File Object in Java −
Sr.No.Method & Description
1File(File parent, String child)This constructor creates a new File instance from a parent abstract pathname and a child pathname string.
2File(String pathname)This constructor creates a new File instance by converting the given pathname string into an abstract pathname.
3File(String parent, String child)This constructor creates a new File instance from a parent pathname string and a child pathname string.
4File(URI uri)This constructor creates a new File instance by converting the given file: URI into an abstract pathname.
Assuming an object is present in the given location, the first argument to the command line will be considered as the path and the below code will be executed −
Example
import java.io.File;
public class Demo{
public static void main(String[] args){
String file_name =args[0];
File my_file = new File(file_name);
System.out.println("File name is :"+my_file.getName());
System.out.println("The path to the file is: "+my_file.getPath());
System.out.println("The absolute path to the file is:" +my_file.getAbsolutePath());
System.out.println("The parent directory is :"+my_file.getParent());
if(my_file.exists()){
System.out.println("Is the file readable"+my_file.canRead());
System.out.println("The size of the file in bytes is "+my_file.length());
}
}
}
Output
The details about the file will be displayed here.
A class named Demo contains the main function, and a string is defined, that holds the first argument passed in the command line. The details of the file are printed on the screen, this includes name of the file, file path, the absolute path of the file, and the parent directory of the file.
Updated on: 17-Aug-2020
1K+ Views
Kickstart Your Career
Get certified by completing the course
Get Started
Advertisements
|
__label__pos
| 0.918478 |
PINGDOM_CANARY_STRING
developing a professional software engineer
Are You Ready to Commit? Developing a Professional Software Engineer Workflow
Reading time: about 11 min
Posted by: Matt Dawson
Aspiring programmers often ask a question like, "What can I learn in X amount of time that will make me a star programmer?", where X is way too little time to develop star programming skills. There are many diverse skills needed to become a truly professional programmer. It seems of late that the focus is all about learning the hottest language, knowing lots of algorithms and design patterns, and understanding the latest frameworks.
Those are all great and useful things to invest time in. However, one area that seems to receive little attention from up-and-coming programmers is developing a workflow that leads to long-term professionalism, no matter what language or framework they are using.
I like to think of myself as a Professional Software Engineer. However, nothing makes me feel more unprofessional than when I create bugs, break the build, or cause other people extra work when it could have been avoided. In software development, we can't avoid every problem, but we can avoid many of them. With plenty of unavoidable problems lurking around, there is no sense in wasting time on the ones we can avoid. I have personally caused almost every class of avoidable problem and brought upon myself plenty of well-deserved shame. Early in my career, I thought, "Hey, no big deal, nobody is perfect." Of course I would try to learn from my mistakes, but I found that I was making some of the same mistakes over and over. At one point I had a conversation with my boss in which he challenged me to do better. After some reflection, I realized that I could improve.
From that time until now, I have refined a software engineer workflow that helps me ensure I don't cause unnecessary problems and also increases the quality of the code I produce. I've also mentored and managed other programmers and tried to help them develop their own workflows. I hope that by sharing an outline of my workflow, you can develop your own workflow that will increase your value as an engineer and help you become a true professional. It's worth mentioning that while this list is fairly long, the reality is that these steps have become part of how I work. When it comes time to commit a change, most of these items have already been considered and done. However, having a formal workflow acts as a forcing function and ensures that I've done everything I intended to do before I submit my work.
1. Does it compile?
This may seem like a stupid thing to have in my workflow, but trust me, it's not. I've seen even extremely simple changes (my own and other people's) cause a build to fail and cause work to grind to a halt until fixed. You bring shame on yourself and your family if you break the master build because you failed to compile your changes before committing. You may begin to notice sideways glances as your colleagues pass you in the lunchroom--you’ll know why. You may say to yourself, “It’s just a minor change to fix a teeny tiny defect found in a code review, so I don’t have to worry about compiling, right?” WRONG! Don't be tempted to commit without compiling. If you are working in a language that isn't compiled, then load the updated code and make sure that the parser is happy.
2. Have I stepped through my changes?
Again, this may seem simple and obvious, but you'd be surprised how many times programmers skip this step and problems are found later on that would not have been overlooked if the code had been stepped through with the programmer watching. There may be exceptional cases when you can't step through your change, but keep these to exceptions and don't make excuses. You probably won't be able to step through every nook and cranny of your code, but you can step through the common code flows and make sure that the code is working the way you thought it would. If you are tempted not to take the time to do this, you will probably have an inordinate amount of bugs crop up later. Bugs that are found later are harder to fix because you are not as fresh with the code and you'll have to take time to re-immerse yourself. You may even find that other people have worked around your bugs, not understanding what the real problem is. Unwinding layers of workarounds once the code is fixed can be more complex than fixing the broken code would have been in the first place. It's worth taking the time to step through your code.
3. Have I run the automated tests?
You have automated tests... don't you? If you don't have automated tests, you should seriously think about implementing some. Start small, and add gradually. Automated tests can save you a lot of time in the long run and at least ensure that when you make changes, you haven't broken some base level of functionality. There is plenty of information available about automated testing--if you don’t know how to do it, take the time to learn. Bottom line: if you have automated tests available to you, take advantage of them by running them before you commit. It will save you time in the long run, save you from having egg on your face if you've broken something, and make you appear more professional to your colleagues. Of course, if you find a problem as a result of running your automated tests, don't overlook it--fix it before you continue with the workflow.
4. Have I created unit tests for my new code?
This one dovetails closely with the previous and probably doesn't need much additional explanation. If you are using a test-driven development approach, you will have done this as a matter of course. In any case, when you figure out what your own workflow should be, don't neglect to include creating unit tests. This will add some upfront time, and yes, it'll take longer initially, but it will pay dividends in the long run, especially if you and everyone else run them before committing. Your code will be more robust from the get-go, and in the future, many bugs will be found and fixed before they are committed -- which means no one will have to be slowed down by having to find a bug, figure out how to reproduce it, log it, figure out who to assign it to, etc.
5. Have I considered all "platforms" my change will affect?
If you work in a shop where you have exactly one target platform, then you can skip this step. However, for a large portion of programmers, there are multiple platforms involved. The platforms might be different web browsers, different operating systems, or different types of hardware. If you are in this situation, it's worth spending a moment to consider if your change could cause a problem on a different platform than the one you are developing on. In many cases, it won't and you can move on quickly, but by giving some thought before committing, you may detect problems and save yourself and others some hassle. A few things to think about are memory availability, CPU speed, and API differences. You probably already know which of these is likely to affect you in your job--just formalize your process a bit and make sure that you think about these things every time you commit. Also, you may want to take some time to research the differences between your various target platforms so you can be better informed. The more you know about your platforms, the more you can take this into account as you code. This will be an easy checkbox to mark off as you ready yourself to commit.
6. Have I removed any debug code or settings I have added?
This one bites me all the time. Oftentimes while I'm working, I'll add debug code or change some hard-coded setting that is intended only to help me troubleshoot or test my changes. Of course, I do not intend to commit these changes, but I've done it many times. I've found that if I mark these types of changes clearly when I add them, then I can find and remove them more easily. Typically I'll add a comment like "//DEBUG" in any area I need to clean up before I commit, and then it's easy to find and clear out any residual debugging code that should not be committed.
7. Have I considered the scope of my changes fully?
This is important especially if you are operating in code that you are less familiar with. A fairly common occurrence for me is to narrow in on a bug in unfamiliar code and figure out a small change that will fix it, make the change, test that the bug is fixed, and then commit (of course only after following all the other steps in my workflow). The problem that can arise is that even a small change can cause big problems, especially in unfamiliar code. It's a good practice to take a step back and look at the bigger picture. You may find that the code you are changing is used in a wider variety of ways than you knew about. From there you can decide whether you know enough to continue or if you need to get additional eyes on the change before you commit.
8. Are my changes robust?
Take a moment to consider what conditions could cause your code to fail. Does it make sense to validate parameters? Are there any security concerns? What corner cases are you not handling? Address those before committing. There is an art to knowing how much "robustness" to add to a piece of code. If you are unsure if it's worth spending more time to make your code more robust, you may want to seek advice from other engineers you respect. Over time you'll develop an intuition to know how far to take it.
9. Have I had my changes code reviewed?
Get someone else to review your changes and make sure the changes look correct. Find the best person you can. The best person is someone who is familiar with the area you are changing and understands the type of code you are writing. By doing this, you'll likely find things that you didn't think of, even if you are rigorous about following all the other steps in your workflow. Also, you'll probably learn new things and thus become a better engineer as a result.
10. Have I considered QA?
Hopefully your QA process is integrated with your development process and your testers are always aware of new things that are going into your application. If not, it may be worth looping in a QA person to alert them to the changes you've made and to discuss when they'll be available to test and how to go about testing. If possible, you may even want to have a QA person give your new feature a trial run before committing it. You'll likely get good feedback and will probably discover something you should fix before committing. Inexperienced programmers tend to harbor enmity towards QA. Professional programmers realize the value of QA and take advantage of the help they can lend to the development process. As you visit each step, if you end up making further code changes to address issues that come up, don't forget to go back and reconsider whether it’s appropriate to revisit previous steps. Revisiting the steps a second or third time as necessary will help you make sure that what you finally commit is as high quality as possible. Don’t be discouraged if you need to do this. Second and third passes are usually a lot quicker than your first pass through the workflow. You'll get better at it in time, and you'll internalize these steps into your coding process, which will make the commit workflow smoother and quicker.
flowchart
This blog post serves as a reminder to my current and future self as much as it is meant to help you, the reader. I often find myself in situations where I feel driven to act quickly to complete an assignment. My experience tells me that when I rush through and skip steps, it often comes back to haunt me. Whatever language or framework you are using, no matter how many algorithms and design patterns you know, having a formal workflow will help you be recognized for your code quality and professionalism and will help you advance in your career.
Start diagramming with Lucidchart today—try it for free!
Sign up free
Popular now
managed computer vision services
A Feature Comparison of Managed Computer Vision Services
About Lucidchart
Lucidchart is the intelligent diagramming application that empowers teams to clarify complexity, align their insights, and build the future—faster. With this intuitive, cloud-based solution, everyone can work visually and collaborate in real time while building flowcharts, mockups, UML diagrams, and more.
The most popular online Visio alternative, Lucidchart is utilized in over 180 countries by more than 25 million users, from sales managers mapping out target organizations to IT directors visualizing their network infrastructure.
Related posts:
How Lucidites Use Lucidchart: Diagrams for Engineering
engineering diagrams
How to Write an Effective Bug Report That Actually Gets Resolved (and Why Everyone Should)
How to Write an Effective Bug Report Lucidchart
English
PrivacyLegal
© 2021 Lucid Software Inc.
|
__label__pos
| 0.572092 |
Преобразование файлов doc / docx в pdf в Python
Можно использовать std::regex.
В зависимости от размера вашего файла и доступной вам памяти, можно прочитать его либо по строкам, либо целиком в файле std::string.
Чтобы прочитать файл , вы можете использовать:
std::ifstream t("file.txt");
std::string sin((std::istreambuf_iterator(t)),
std::istreambuf_iterator());
, после чего вы можете сопоставить это, которое на самом деле настраивается для ваших нужд.
std::regex word_regex(",\\s]+");
auto what =
std::sregex_iterator(sin.begin(), sin.end(), word_regex);
auto wend = std::sregex_iterator();
std::vector v;
for (;what!=wend ; wend) {
std::smatch match = *what;
v.push_back(match.str());
}
1
задан Vivek 5 March 2019 в 12:05
поделиться
1 ответ
Вот вам решение VBA (я не знаю, как это сделать с помощью Python).
Если вам нужно преобразовать несколько файлов Word в другие форматы, такие как TXT, RTF, HTML или PDF, запустите приведенный ниже скрипт.
Option Explicit On
Sub ChangeDocsToTxtOrRTFOrHTML()
'with export to PDF in Word 2007
Dim fs As Object
Dim oFolder As Object
Dim tFolder As Object
Dim oFile As Object
Dim strDocName As String
Dim intPos As Integer
Dim locFolder As String
Dim fileType As String
On Error Resume Next
locFolder = InputBox("Enter the folder path to DOCs", "File Conversion", "C:\Users\your_path_here\")
Select Case Application.Version
Case Is < 12
Do
fileType = UCase(InputBox("Change DOC to TXT, RTF, HTML", "File Conversion", "TXT"))
Loop Until (fileType = "TXT" Or fileType = "RTF" Or fileType = "HTML")
Case Is >= 12
Do
fileType = UCase(InputBox("Change DOC to TXT, RTF, HTML or PDF(2007+ only)", "File Conversion", "TXT"))
Loop Until (fileType = "TXT" Or fileType = "RTF" Or fileType = "HTML" Or fileType = "PDF")
End Select
Application.ScreenUpdating = False
Set fs = CreateObject("Scripting.FileSystemObject")
Set oFolder = fs.GetFolder(locFolder)
Set tFolder = fs.CreateFolder(locFolder & "Converted")
Set tFolder = fs.GetFolder(locFolder & "Converted")
For Each oFile In oFolder.Files
Dim d As Document
Set d = Application.Documents.Open(oFile.Path)
strDocName = ActiveDocument.Name
intPos = InStrRev(strDocName, ".")
strDocName = Left(strDocName, intPos - 1)
ChangeFileOpenDirectory tFolder
Select Case fileType
Case Is = "TXT"
strDocName = strDocName & ".txt"
ActiveDocument.SaveAs FileName:=strDocName, FileFormat:=wdFormatText
Case Is = "RTF"
strDocName = strDocName & ".rtf"
ActiveDocument.SaveAs FileName:=strDocName, FileFormat:=wdFormatRTF
Case Is = "HTML"
strDocName = strDocName & ".html"
ActiveDocument.SaveAs FileName:=strDocName, FileFormat:=wdFormatFilteredHTML
Case Is = "PDF"
strDocName = strDocName & ".pdf"
ActiveDocument.ExportAsFixedFormat OutputFileName:=strDocName, ExportFormat:=wdExportFormatPDF
End Select
d.Close
ChangeFileOpenDirectory oFolder
Next oFile
Application.ScreenUpdating = True
End Sub
0
ответ дан ryguy72 5 March 2019 в 12:05
поделиться
Другие вопросы по тегам:
Похожие вопросы:
|
__label__pos
| 0.726295 |
Command Line Interface
Adds the ability to specify which file to open from the command line to our text editor.
Introduction
We now have a working text editor that can open and save files. We might, however, want to extend its utility by enabling users to more quickly and efficiently use it to edit files. In this tutorial we will make the editor act more like a desktop application by enabling it to open files from command line arguments or even using Open with from within Dolphin.
Code and Explanation
mainwindow.h
Here we have done nothing but add a new openFileFromUrl function which takes a QUrl. Again, we use a QUrl instead of a QString so that we can also work with remote files as if they were local.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <KXmlGuiWindow>
class KTextEdit;
class KJob;
class MainWindow : public KXmlGuiWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = nullptr);
void openFileFromUrl(const QUrl &inputFileName);
private:
void setupActions();
void saveFileToDisk(const QString &outputFileName);
private Q_SLOTS:
void newFile();
void openFile();
void saveFile();
void saveFileAs();
void downloadFinished(KJob *job);
private:
KTextEdit *textArea;
QString fileName;
};
#endif // MAINWINDOW_H
mainwindow.cpp
There’s no new code here, only rearranging. Everything from void openFile() has been moved into void openFileFromUrl(const QUrl &inputFileName) except the call to QFileDialog::getOpenFileUrl().
This way, we can call openFile() if we want to display a dialog, or we can call openFileFromUrl(const QUrl &) if we know the name of the file already. Which will be the case when we feed the file name through the command line.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
#include <QApplication>
#include <QAction>
#include <QSaveFile>
#include <QFileDialog>
#include <QTextStream>
#include <QByteArray>
#include <KTextEdit>
#include <KLocalizedString>
#include <KActionCollection>
#include <KStandardAction>
#include <KMessageBox>
#include <KIO/Job>
#include "mainwindow.h"
MainWindow::MainWindow(QWidget *parent) : KXmlGuiWindow(parent), fileName(QString())
{
textArea = new KTextEdit();
setCentralWidget(textArea);
setupActions();
}
void MainWindow::setupActions()
{
QAction *clearAction = new QAction(this);
clearAction->setText(i18n("&Clear"));
clearAction->setIcon(QIcon::fromTheme("document-new"));
actionCollection()->setDefaultShortcut(clearAction, Qt::CTRL + Qt::Key_W);
actionCollection()->addAction("clear", clearAction);
connect(clearAction, &QAction::triggered, textArea, &KTextEdit::clear);
KStandardAction::quit(qApp, &QCoreApplication::quit, actionCollection());
KStandardAction::open(this, &MainWindow::openFile, actionCollection());
KStandardAction::save(this, &MainWindow::saveFile, actionCollection());
KStandardAction::saveAs(this, &MainWindow::saveFileAs, actionCollection());
KStandardAction::openNew(this, &MainWindow::newFile, actionCollection());
setupGUI(Default, "texteditorui.rc");
}
void MainWindow::newFile()
{
fileName.clear();
textArea->clear();
}
void MainWindow::saveFileToDisk(const QString &outputFileName)
{
if (!outputFileName.isNull()) {
QSaveFile file(outputFileName);
file.open(QIODevice::WriteOnly);
QByteArray outputByteArray;
outputByteArray.append(textArea->toPlainText().toUtf8());
file.write(outputByteArray);
file.commit();
fileName = outputFileName;
}
}
void MainWindow::saveFileAs()
{
saveFileToDisk(QFileDialog::getSaveFileName(this, i18n("Save File As")));
}
void MainWindow::saveFile()
{
if (!fileName.isEmpty()) {
saveFileToDisk(fileName);
} else {
saveFileAs();
}
}
void MainWindow::openFile()
{
openFileFromUrl(QFileDialog::getOpenFileUrl(this, i18n("Open File")));
}
void MainWindow::openFileFromUrl(const QUrl &inputFileName)
{
if (!inputFileName.isEmpty()) {
KIO::Job *job = KIO::storedGet(inputFileName);
fileName = inputFileName.toLocalFile();
connect(job, &KIO::Job::result, this, &MainWindow::downloadFinished);
job->exec();
}
}
void MainWindow::downloadFinished(KJob *job)
{
if (job->error()) {
KMessageBox::error(this, job->errorString());
fileName.clear();
return;
}
const KIO::StoredTransferJob *storedJob = qobject_cast<KIO::StoredTransferJob *>(job);
if (storedJob) {
textArea->setPlainText(QTextStream(storedJob->data(), QIODevice::ReadOnly).readAll());
}
}
main.cpp
This is where all the QCommandLineParser magic happens. In previous examples, we only used the class to feed QApplication the necessary data for using flags like --version or --author. Now we actually get to use it to process command line arguments.
First, we tell QCommandLineParser that we want to add a new positional arguments. In a nutshell, these are arguments that are not options. -h or --version are options, file is an argument.
parser.addPositionalArgument(QStringLiteral("file"), i18n("Document to open"));
Later on, we start processing positional arguments, but only if there is one. Otherwise, we proceed as usual. In our case we can only open one file at a time, so only the first file is of interest to us. We call the openFileFromUrl() function and feed it the URL of the file we want to open, whether it is a local file like “$HOME/foo” or a remote one like “ftp.mydomain.com/bar”. We use the overloaded form of QUrl::fromUserInput() in order to set the current path. This is needed in order to work with relative paths like “../baz”.
if (parser.positionalArguments().count() > 0) {
window->openFileFromUrl(QUrl::fromUserInput(parser.positionalArguments().at(0), QDir::currentPath()));
}
These are the changes:
41
42
43
44
45
46
47
48
49
50
51
52
53
QCommandLineParser parser;
aboutData.setupCommandLine(&parser);
parser.addPositionalArgument(QStringLiteral("file"), i18n("Document to open"));
parser.process(app);
aboutData.processCommandLine(&parser);
MainWindow *window = new MainWindow();
window->show();
if (parser.positionalArguments().count() > 0) {
window->openFileFromUrl(QUrl::fromUserInput(parser.positionalArguments().at(0), QDir::currentPath()));
}
|
__label__pos
| 0.865962 |
Note
The documentation you're currently reading is for version 3.0.1. Click here to view documentation for the latest stable version.
Inquiries
Inquiries allow you to pause a workflow to wait for additional information. This is done by using the core.ask action. The idea is to allow you to “ask a question” in the middle of a workflow. This could be a question like “do I have approval to continue?” or “what is the second factor I should provide to this authentication service?”
These use cases (and others) require the ability to pause a workflow mid-execution and wait for additional information. Inquiries make this possible. This document explains how to use them.
core.ask
The best way to get started using Inquiries is to check out the core action - core.ask - and start using it in your workflows. This action is built on the inquirer runner type, which performs the bulk of the logic required to pause workflows and wait for a response.
~$ st2 action get core.ask
+-------------+----------------------------------------------------------+
| Property | Value |
+-------------+----------------------------------------------------------+
| id | 59a8c27732ed3553ceb2dec4 |
| uid | action:core:ask |
| ref | core.ask |
| pack | core |
| name | ask |
| description | Action for initiating an Inquiry (usually in a workflow) |
| enabled | True |
| entry_point | |
| runner_type | inquirer |
| parameters | |
| notify | |
| tags | |
+-------------+----------------------------------------------------------+
The inquirer runner imposes a number of parameters that are, in turn, required by the core.ask action:
Parameter Description
schema A JSON schema that will be used to validate the response data. A basic schema will be provided by default, or you can provide one here. Only valid responses will cause the action to succeed, and the workflow to continue.
ttl Time (in minutes) until an unacknowledged Inquiry is garbage-collected. Set to 0 to disable garbage collection for this Inquiry. NOTE - Inquiry garbage collection is not enabled by default, so this field does nothing unless it is turned on. See Garbage Collection for Inquiries for more info.
roles A list of RBAC roles that are permitted to respond to the action. Defaults to empty list, which permits all roles. This requires enterprise features
users A list of users that are permitted to respond to the action. Defaults to empty list, which permits all users.
route An arbitrary string that can be used to filter different Inquiries inside rules. This can be helpful for deciding who to notify of an incoming Inquiry. See Notifying Users of Inquiries using Rules for more info.
Using core.ask in a Workflow
While you can use this action on its own (i.e. with st2 run), the real value comes from using it in a Workflow.
The core.ask action supports a number of parameters, but the most important one by far is the schema parameter. This parameter defines exactly what kind of responses will satisfy the Inquiry, and allow the workflow to continue. When users respond to this Inquiry, their response must come in the form of a JSON payload that will satisfy this schema. We cover responses in Responding to an Inquiry below - the st2 client makes this pretty easy.
Now we’ll use this action in an example workflow. The following example shows a simple ActionChain with two tasks. task1 executes the core.ask action and passes in a few parameters:
chain:
- name: task1
ref: core.ask
params:
route: developers
schema:
type: object
properties:
secondfactor:
type: string
description: Please enter second factor for authenticating to "foo" service
required: True
on-success: "task2"
- name: task2
ref: core.local
params:
cmd: echo "We can now authenticate to "foo" service with {{ task1.result.response.secondfactor }}"
Note that we’re using a Jinja snippet in task2 to access and make use of the value that we’re asking for. In this example we’re simply printing this to the screen, but the <task>.result.response dictionary will contain all of the values that satisfy our schema. More on this later.
We can run this workflow to see its execution:
~$ st2 run examples.chain-test-inquiry
.
id: 59d1ecb632ed353f1f340898
action.ref: examples.chain-test-inquiry
parameters: None
status: paused
result_task: task1
result:
roles: []
route: developers
schema:
properties:
secondfactor:
description: Please enter second factor for authenticating to "foo" service
required: true
type: string
type: object
ttl: 1440
users: []
start_timestamp: 2017-10-02T07:37:26.854217Z
end_timestamp: None
+--------------------------+---------+-------+----------+-------------------------------+
| id | status | task | action | start_timestamp |
+--------------------------+---------+-------+----------+-------------------------------+
| 59d1ecb732ed353ec4aa9a5a | pending | task1 | core.ask | Mon, 02 Oct 2017 07:37:27 UTC |
+--------------------------+---------+-------+----------+-------------------------------+
As you can see, the status of our ActionChain is paused. Note that task2 hasn’t even been scheduled, because the use of the core.ask action prevented further tasks from running. You’ll also notice that the status for task1 is pending. This indicates to us that this particular Inquiry has not yet received a valid response, and is currently blocking the Workflow execution.
You can also use core.ask to ask a question within Orquesta workflows:
version: 1.0
description: A basic workflow that demonstrates inquiry.
tasks:
start:
action: core.echo message="Automation started."
next:
- when: <% succeeded() %>
do: get_approval
get_approval:
action: core.ask
input:
schema:
type: object
properties:
approved:
type: boolean
description: "Continue?"
required: True
next:
- when: <% succeeded() %>
do: finish
- when: <% failed() %>
do: stop
finish:
action: core.echo message="Automation completed."
stop:
action: core.echo message="Automation stopped."
next:
- do: fail
When encountering an Inquiry, StackStorm will send a request to Orquesta to pause execution of a workflow, just like we saw previously with ActionChains:
~$ st2 run examples.orquesta-ask-basic
.
id: 59a9c99032ed3553fb738c83
action.ref: examples.orquesta-ask-basic
parameters: None
status: paused
start_timestamp: 2017-09-01T20:56:48.630380Z
end_timestamp: None
+--------------------------+---------+-------+----------+-------------------------------+
| id | status | task | action | start_timestamp |
+--------------------------+---------+-------+----------+-------------------------------+
| 59a9c99132ed3553fb738c86 | pending | task1 | core.ask | Fri, 01 Sep 2017 20:56:49 UTC |
+--------------------------+---------+-------+----------+-------------------------------+
Note
At the time of this writing, the Inquiry ID is the same as the action execution ID that raised it. So if you’re curious which workflow a given Inquiry is part of, use the same ID with the st2 execution get command.
Notifying Users of Inquiries using Rules
When a new Inquiry is raised, a dedicated trigger - core.st2.generic.inquiry - is used. This trigger can be consumed in Rules, and you can use an action or a workflow to provide notification to the relevant party. For instance, using Slack:
---
name: "notify_inquiry"
pack: "examples"
description: Notify relevant users of an Inquiry action
enabled: false
trigger:
type: core.st2.generic.inquiry
action:
ref: slack.post_message
parameters:
channel: "#{{ trigger.route }}"
message: 'Inquiry {{trigger.id}} is awaiting an approval action'
Note how this Rule uses the route field to determine to which Slack channel the notification should be sent. You could also use this in the Rule criteria as well, and set up different notification actions depending on the value of route.
Responding to an Inquiry
In order to resume a Workflow that’s been paused by an Inquiry, a response must be provided to that Inquiry, and the response must come in the form of JSON data that validates against the schema in use by that particular Inquiry instance.
In order to respond to an Inquiry, we need its ID. We would already have this if we wrote a Rule like shown in the previous section, but we could also use the st2 inquiry list command to view all outstanding inquiries:
~$ st2 inquiry list
+--------------------------+-------+-------+------------+------+
| id | roles | users | route | ttl |
+--------------------------+-------+-------+------------+------+
| 59d1ecb732ed353ec4aa9a5a | | | developers | 1440 |
+--------------------------+-------+-------+------------+------+
Like most other resources in StackStorm, we can use the get subcommand to retrieve details about this Inquiry, using its ID provided in the previous output:
~$ st2 inquiry get 59d1ecb732ed353ec4aa9a5a
+----------+--------------------------------------------------------------+
| Property | Value |
+----------+--------------------------------------------------------------+
| id | 59d1ecb732ed353ec4aa9a5a |
| roles | |
| users | |
| route | developers |
| ttl | 1440 |
| schema | { |
| | "type": "object", |
| | "properties": { |
| | "secondfactor": { |
| | "required": true, |
| | "type": "string", |
| | "description": "Please enter second factor for |
| | authenticating to "foo" service" |
| | } |
| | } |
| | } |
+----------+--------------------------------------------------------------+
In this view, we see the schema in use requires a single key: secondfactor, whose value must be a string.
Note
You can omit the schema parameter when using core.ask, and a basic schema will be used as default - only requiring a single boolean value to continue the workflow. In this example, we’ve provided our own schema that allows us to use the retrieved value in a later task of the workflow. This allows you to “inject” data into a workflow mid-execution, rather than rely solely on parameters.
Fortunately, the st2 client makes it easy to provide a valid response; when you run the command st2 inquiry respond <inquiry id>, it will step through each of these values, prompting you with the provided description. You simply respond to each prompt:
~$ st2 inquiry respond 59d1ecb732ed353ec4aa9a5a
secondfactor: bar
Please enter second factor for authenticating to "foo" service
Response accepted. Successful response data to follow...
+----------+---------------------------+
| Property | Value |
+----------+---------------------------+
| id | 59d1ecb732ed353ec4aa9a5a |
| response | { |
| | "secondfactor": "bar" |
| | } |
+----------+---------------------------+
It’s very important that each property in the response schema has a proper description, as shown in the default example, as this is what prompts the user for required values when it’s time to respond.
Since the st2 client has a handle on the schema being used for an Inquiry, it can guide you to provide the right datatypes for each attribute, and won’t continue until you do. For instance, if our schema required a boolean value, an integer would be rejected client-side:
~$ st2 inquiry respond 59ab26af32ed35752062d2dc
continue (boolean): 123
Does not look like boolean. Pick from [false, no, nope, nah, n, 1, 0, y, yes, true]
Should we continue?
However, not every response can be done interactively. You may even want to script some or all of your Inquiry responses, and may be using tools like jq to craft your own JSON payload for a response and wish to simply provide this to the CLI. The -r flag can be used for this:
~$ st2 inquiry respond -r '{"secondfactor": "bar"}' 59d1ecb732ed353ec4aa9a5a
Response accepted. Successful response data to follow...
+----------+---------------------------+
| Property | Value |
+----------+---------------------------+
| id | 59d1ecb732ed353ec4aa9a5a |
| response | { |
| | "secondfactor": "bar" |
| | } |
+----------+---------------------------+
Note that this effectively bypasses any client-side validation, so it’s possible to send a JSON payload that doesn’t validate against the schema. However, the API is the ultimate authority on validating an Inquiry response, so in this case, you’ll still get an error in return:
~$ st2 inquiry respond -r '{"secondfactor": 123}' 59d1ecb732ed353ec4aa9a5a
ERROR: 400 Client Error: Bad Request
MESSAGE: Response did not pass schema validation. for url: http://127.0.0.1:9101/exp/inquiries/59ab26af32ed35752062d2dc
Once an acceptable response is provided, the workflow resumes:
~$ st2 execution get 59d1ecb632ed353f1f340898
id: 59d1ecb632ed353f1f340898
action.ref: examples.chain-test-inquiry
parameters: None
status: succeeded (468s elapsed)
result_task: task2
result:
failed: false
return_code: 0
stderr: ''
stdout: We can now authenticate to foo service with bar
succeeded: true
start_timestamp: 2017-10-02T07:37:26.854217Z
end_timestamp: 2017-10-02T07:45:14.123405Z
+--------------------------+------------------------+-------+------------+-------------------------------+
| id | status | task | action | start_timestamp |
+--------------------------+------------------------+-------+------------+-------------------------------+
| 59d1ecb732ed353ec4aa9a5a | succeeded (0s elapsed) | task1 | core.ask | Mon, 02 Oct 2017 07:37:27 UTC |
| 59d1ee8932ed353ec4aa9a5d | succeeded (1s elapsed) | task2 | core.local | Mon, 02 Oct 2017 07:45:12 UTC |
+--------------------------+------------------------+-------+------------+-------------------------------+
Note that the stdout for task2 (and subsequently, this ActionChain) is “We can now authenticate to foo service with bar”. If you recall, this was because we were using a Jinja snippet to print the value of secondfactor in our response. We just printed the phrase to the screen in this example, but you can just as easily use this to pass a value into another action in your workflow.
The st2 pack also now contains an inquiry.respond action, which may be useful for responding to inquiries within another workflow:
~$ st2 inquiry get 5a1f4411c4da5f4486b09364
+----------+--------------------------------------------------------------+
| Property | Value |
+----------+--------------------------------------------------------------+
| id | 5a1f4411c4da5f4486b09364 |
| roles | |
| users | |
| route | developers |
| ttl | 1440 |
| schema | { |
| | "type": "object", |
| | "properties": { |
| | "secondfactor": { |
| | "required": true, |
| | "type": "string", |
| | "description": "Please enter second factor for |
| | authenticating to "foo" service" |
| | } |
| | } |
| | } |
+----------+--------------------------------------------------------------+
vagrant@st2vagrant:~$ st2 run st2.inquiry.respond id=5a1f4411c4da5f4486b09364 response='{"secondfactor": "foo"}'
.
id: 5a1f444ec4da5f4486b09366
status: succeeded
parameters:
id: 5a1f4411c4da5f4486b09364
response:
secondfactor: '********'
result:
exit_code: 0
result: null
stderr: ''
stdout: ''
Note
You’ll notice that the value for the key secondfactor is masked within the response body in the execution output for this action. The st2.inquiry.respond action doesn’t actually know the inquiry response schema at all - it is merely a thin layer on top of the EWC client. As a result, it doesn’t know which fields are marked with the secret attribute. To avoid potentially leaking secrets, all field values are masked in this way for the output of this action, regardless of whether or not the schema has declared them as secrets.
The st2 pack also contains an action alias for responding to Inquiries via ChatOps. Using this alias, you can respond to an Inquiry within Slack, as an example:
!st2 respond to inquiry 5a1f4860c4da5f4486b093bf with {“secondfactor”: “supersecret”}
Securing Inquiries
Inquiries work a little differently from other system resources with it comes to granting permissions to them via RBAC. The users and roles parameters for the core.ask action allow you to control who can respond to a specific inquiry, right in the workflow. With this granularity being offered in parameters, RBAC for Inquiries is a bit simpler, focusing broadly on who has access to Inquiries in general, leaving specific access control to the action parameters.
For example, rather than specifying a particular Inquiry when constructing a role, all Inquiry UIDs should be specified as inquiry:. Whatever permissions are granted in the role are granted to all inquiries:
---
name: "inquiry_role_respond"
description: "Role which grants inquiry powers"
permission_grants:
- resource_uid: "inquiry:"
permission_types:
- "inquiry_respond"
Inquiries also honor execution permissions for the workflow they were generated from. For instance, if user inherit has action_execute permissions on the workflow examples.orquesta-ask-basic, they don’t need to be explicitly granted inquiry_respond permissions - this is done automatically.
The following is an example role that only grants permissions to execute a workflow that contains a core.ask action, but doesn’t explicitly grant inquiry_respond permissions. However, any user that’s been assigned to this role will still be permitted to respond.
---
name: "inquiry_role_inherit"
description: "Role which only grants action powers - will inherit inquiry_respond"
permission_grants:
# Grant to run the workflow
- resource_uid: "action:examples:orquesta-ask-basic"
permission_types:
- "action_execute"
- "action_view"
# Grant to run the core.ask action
- resource_uid: "action:core:ask"
permission_types:
- "action_execute"
- "action_view"
# Grant to list runners (allows us to test this with `st2 run`)
- resource_uid: "runner_type:orquesta"
permission_types:
- "runner_type_list"
To lock down a specific Inquiry to a set of users or RBAC roles (the latter of which is only available with enterprise features), the users and roles parameters should be used. These offer additional restriction on a per-Inquiry basis, but they don’t remove any restrictions imposed on the aforementioned RBAC settings, if any. These parameter-based restrictions are cumulative with any existing RBAC restrictions.
The users parameter is a list of users that are permitted to respond to this specific instance of an Inquiry. Similarly, roles controls which RBAC roles (assuming enterprise features) are allowed to respond to this specific Inquiry. The default value for both of these parameters is an empty list, which permits all. The following ActionChain invokes a core.local action, passing a list into the users parameter that specifies only st2responduser is able to respond:
chain:
- name: task1
ref: core.ask
params:
route: developers
users:
- st2responduser
schema:
type: object
properties:
secondfactor:
type: string
description: Please enter second factor for authenticating to "foo" service
required: True
on-success: "task2"
- name: task2
ref: core.local
params:
cmd: echo "We can now authenticate to "foo" service with {{ task1.result.response.secondfactor }}"
All other users attempting to respond will be rejected, even if they are granted inquiry_respond RBAC permissions.
Garbage Collection for Inquiries
As alluded to in Purging Old Operational Data, the st2garbagecollector service is also responsible for cleaning up old Inquiries. This is done by comparing the ttl parameter of an Inquiry with its start time. The ttl field is the number of minutes since the start time the Inquiry will be allowed to receive responses, before it is cleaned up.
Unlike garbage collection for trigger-instances, or action executions, Inquiries are not deleted when they’re “cleaned up”. Rather, they’re marked as “timed out”. This allows workflows to make different decisions based on whether or not an Inquiry was responded to successfully, or if the TTL expired waiting for a response.
To configure garbage collection for Inquiries, you first need to enable this globally. Unlike trigger-instances and action executions, /etc/st2/st2.conf only requires a single boolean parameter to enable Inquiry garbage colllection:
[garbagecollector]
# By default, this value is False
purge_inquiries = True
Once done, each Inquiry has its own ttl configured via parameters. The default is 1440 - 24 hours. However, this can be easily overridden for a inquiry by specifying the ttl in a parameter for the core.ask action, like in the following Orquesta workflow:
version: 1.0
description: A basic workflow that demonstrates inquiry.
tasks:
start:
action: core.echo message="Automation started."
next:
- when: <% succeeded() %>
do: get_approval
get_approval:
action: core.ask
input:
ttl: 60
route: developers
next:
- when: <% succeeded() %>
do: finish
- when: <% failed() %>
do: stop
finish:
action: core.echo message="Automation completed."
stop:
action: core.echo message="Automation stopped."
next:
- do: fail
Note
Even if Inquiry garbage collection is enabled globally in the st2 config, you can use a TTL value of 0 to disable garbage collection for a specific Inquiry.
Once this option has been enabled, and the st2garbagecollector service is started, it will begin periodically looking for Inquiries that have been in a pending state beyond their configured ttl. If we didn’t respond to the above inquiry within 60 minutes, then get_approval would be marked “timeout”, and the workflow would go the stop task.
|
__label__pos
| 0.54265 |
Dependency Injection в Angular: советы
• 30 сентября, 15:54
• 1676
• 0
Dependency Injection (DI) - одна из важнейших концепций в Angular. Это шаблон проектирования, который упрощает создание веб-приложений и ограничивает тесную связь.
Dependency Injection в Angular: советы
Что именно предусматривает DI:
1. обмен функциональными возможностями между различными компонентами приложения;
2. упрощение юнит-тестов;
3. уменьшение потребности создавать экземпляры класса;
4. облегчения понимания зависимостей определенного класса.
Кроме получения данных, механизм Dependency Injection в Angular позволяет уменьшить связанность компонентов в приложения. В этом материале мы разберемся, как он это делает.
Если вы новичок в Angular или не знаете о концепции Dependency Injection, ознакомьтесь с официальной документацией .
Создание переменных среды
Если вы уже имели дело с Angular, вы, вероятно, знакомы с файлами environment.ts. Из них можно получить информацию о среде, в которой запущено приложение.
Если приложение запущено в среде разработки, мы хотим, чтобы сервисы, которые отвечают за подгрузки данных в приложении, посылали запросы на http://localhost:3000/api. Если же приложение запущено на временном сервере для тестировщиков, мы направляем запросы https://qa.local.com/api. Все это управляется Angular во время процесса сборки с использованием различных файлов среды.
Например, у нас могут быть два файла environment.ts и environment.qa.ts в папке environments, а когда мы запускаем команду ng build --config qa, Angular CLI заменит файл environment.ts на environment.qa.ts, и приложение будет соответственно запущено в режиме для тестировщиков.
Но при чем здесь Dependency Injection?
Посмотрите на такой компонент:
import { Component, OnInit } from '@angular/core';
import { environment } from 'src/environments/environment';
@Component({
selector: 'app-some',
templateUrl: './some.component.html',
styleUrls: ['./some.component.css']
})
export class SomeComponent implements OnInit {
constructor() { }
ngOnInit() {
if (!environment.production) {
console.log('In development environment') {}
}
}
}
Здесь мы только импортировали файл и использовали его содержание направления.
А если мы хотим заинжектить некоторые другие значения для переменных среды тестирования? К тому же у нас есть сервис, который также использует переменные среды:
import { environment } from 'src/environments/environment';
@Injectable()
export class SomeService {
constructor(
private http: HttpClient,
) { }
getData(): Observable<any> {
return this.http.get(environment.baseUrl + '/api/method');
}
}
На первый взгляд все нормально, но на самом деле это не лучший способ использовать переменные среды.
Представьте ситуацию, что наш сервис является частью приложения, который состоит из нескольких Angular-проектов. Поэтому у нас есть папка projects, которая содержит различные приложения (веб-компоненты, например). Можем ли мы переиспользовать этот сервис в другом проектах?
Теоретически, можем: просто импортируем сервис в другой модуль Angular, используя массив providers. Однако получаем некоторые проблемы: различные проекты могут выполняться в различных средах. Мы не можем просто импортировать одну из них, но нам надо убедиться, что сервис может быть перевыполнен как можно большим количеством компонентов. Здесь нам на помощь приходит DI.
Существует много способов реализовать нужный нам функционал, мы начнем с самого простого - Injection Tokens.
Injection Tokens- концепция Angular, которая позволяет объявлять независимые уникальные токени, чтобы инжектить значение в другие классы по декоратором Inject. Более подробно по ссылке .
Нам нужно просто предсказать значение для нашей среды в модули:
export const ENV = new InjectionToken('ENV');
@NgModule({
declarations: [
AppComponent,
SomeComponent
],
imports: [
BrowserModule
],
providers: [
{provide: ENV, useValue: environment}
],
bootstrap: [
AppComponent
]
})
export class AppModule { }
Как видите, мы определили значение для нашей переменной среды с помощью Injection Token. И теперь использую его в нашем сервисе:
import { Injectable, Inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { ENV } from '../app.module';
@Injectable()
export class SomeService {
constructor(
private http: HttpClient,
@Inject(ENV) private environment,
) { }
getData(): Observable<any> {
return this.http.get(this.environment.baseUrl + '');
}
}
Обратите внимание, что мы больше не импортируем файл environment.ts. Вместо этого мы берем токен ENVи позволяем приложению самостоятельно решить, какое значение передать в сервис, в зависимости от проекта, где он используется.
Но мы все еще не нашли лучшее решение. Мы не предусмотрели тип для частного поля environment.
Стоит все же заботиться о типах, чтобы наш код был менее уязвимым к неожиданным ошибкам. Angular может использовать типизацию, чтобы улучшить DI. Фактически, мы можем полностью избавиться декоратора Inject и InjectionToken. Для этого мы напишем класс-оболочку, чтобы описать интерфейс нашей переменной среды, а уже потом используем ее в коде. Пример такого класса:
export class Environment {
production: boolean;
baseUrl: string;
// some other fields maybe
}
В этом классе мы описали нашу среду. Однако мы также можем использовать класс InjectionToken для ввода переменной среды. Просто используем useValue как провайдер:
@NgModule({
declarations: [
AppComponent,
SomeComponent
],
imports: [
BrowserModule
],
providers: [
{provide: Environment, useValue: environment}
],
bootstrap: [
AppComponent
]
})
export class AppModule { }
И в нашем сервисе:
@Injectable()
export class SomeService {
constructor(
private http: HttpClient,
private environment: Environment,
) { }
getData(): Observable<any> {
return this.http.get(this.environment.baseUrl + '');
}
}
Такое решение дает нам не просто более чистый и привычный механизм DI, но и статическую типизацию.
Однако до сих пор есть небольшая проблема. Рассмотрим такой фрагмент:
@Injectable()
export class SomeService {
constructor(
private environment: Environment,
) { }
someMethod() {
this.environment.baseUrl = 'something else';
}
}
Здесь мы меняем значение одной из переменных среды. Поскольку модули Angular убеждаются, что внутри каждый компонент получает тот самый экземпляр зависимости, то такой код:
export class SomeComponent implements OnInit {
constructor(
private someService: SomeService,
private environment: Environment,
) { }
ngOnInit() {
this.someService.someMethod();
console.log(this.environment.baseUrl);
}
}
... выведет строку:
Поэтому мы «случайно» изменили переменную среды. Исправить такую проблему достаточно просто - сделать поля нашего класса readonly:
export class Environment {
readonly production: boolean;
readonly baseUrl: string;
// some other fields maybe
}
Так мы получили защищен механизм DI.
Используем различные сервисы в зависимости от среды
Мы уже знаем, как использовать переменные среды через DI, но как насчет переключения между сервисами в различных средах?
Представим ситуацию: нам необходима некоторая статистика о сбоях / использовании нашего приложения. Однако механизм логирования отличается в зависимости от используемой среды: если эта среда разработки, нам надо логировать только ошибку или предупреждение в консоль; в случае среды тестирования, нам необходимо вызвать API, которое сгруппирует наши ошибки в Excel-файл; в продакшене мы хотели бы иметь отдельный файл с логами на бэкенд, поэтому мы вызываем другой API. Лучшим решением будет реализовать сервис Logger, который будет обрабатывать такой функционал. Такой вид наш сервис будет иметь в коде:
@Injectable()
export class LoggerService {
constructor(
private environment: Environment,
private http: HttpClient,
) { }
logError(text: string): void {
switch (this.environment.name) {
case 'development': {
console.error(text);
break;
}
case 'qa': {
this.http.post(this.environment.baseUrl + '/api/reports', {text})
.subscribe(/*handle http errors here */);
break;
}
case 'production': {
this.http.post(this.environment.baseUrl + '/api/logs/errors', {text})
.subscribe(/* handle http errors here */);
}
}
}
}
Все достаточно понятно: получаем сообщения об ошибках, проверяем среду, выполняем соответствующие действия. Однако такое решение имеет определенные недостатки:
• Мы делаем проверки каждый раз при вызове метода logError. По сути, это лишено смысла, ведь после того, как приложение збилдилось, значение enviroment.name никогда не меняется. switch-выражение всегда будет работать одинаково - не важно, сколько раз мы вызвали метод.
• Реализация самого метода достаточно неуклюжая: на первый взгляд не очень понятно, что происходит.
• А если нам надо логировать больше различной информации? Для этого нужно делать проверки в каждом методе?
Какие есть альтернативы в нашей реализации? Мы могли бы написать отдельные LoggerServices для каждого сценария, а инжектить только один - в зависимости от условия с помощью factory:
export class LoggerService {
logError(text: string): void { }
// maybe other methods like logWarning, or info
}
@Injectable()
class DevelopLoggerService implements LoggerService {
logError(text: string) {
console.error(text);
}
}
@Injectable()
class QALoggerService implements LoggerService {
constructor(
private http: HttpClient,
private environment: Environment,
) {}
logError(text: string) {
this.http.post(this.environment.baseUrl + '/api/reports', {text})
}
}
@Injectable()
class ProdLoggerService implements LoggerService {
constructor(
private http: HttpClient,
private environment: Environment,
) {}
logError(text: string) {
this.http.post(this.environment.baseUrl + '/api/logs/errors', {text})
}
}
Пройдемся тем, что мы реализовали:
• Мы оставляем от LoggerService только объявления его API, без реализации. Использовать его как injection token, а также, чтобы указать интерфейс для TS.
• Создаем отдельные классы для каждой среды, убеждаемся, что реализовали LoggerService для каждого, чтобы у нас был одинаковый API.
• Каждый класс имеет те же методы, однако с разной логикой. Нет нужды проверять среду.
• Как теперь сообщить нашим компонентам, какую версию LoggerService мы используем? Здесь на помощь приходят factories.
factory- чистая функция, которая получает зависимости в качестве аргументов и возвращает значение для токена. Посмотрим, как использовать наш LoggerService:
export function loggerFactory(environment: Environment, http: HttpClient): LoggerService {
switch (environment.name) {
case 'develop': {
return new DevelopLoggerService();
}
case 'qa': {
return new QALoggerService(http, environment);
}
case 'prod': {
return new ProdLoggerService(http, environment);
}
}
}
Такая функция будет вызывать один из сервисов во время выполнения программы, в зависимости от переменной среды.
Конечно, на этом наша реализация не заканчивается: нам до сих пор нужно сообщить Angular об использовании factory и передать все необходимые зависимости через массив deps.
@NgModule({
providers: [
{
provide: LoggerService,
useFactory: loggerFactory,
deps: [HttpClient, Environment], // we tell Angular to provide this dependencies to the factory as arguments
},
{provide: Environment, useValue: environment}
],
// other metadata
})
export class AppModule { }
С таким решением не нужно что-то менять в нашем приложении: все компоненты, которые использовали LoggerService, продолжат это делать, как и с предыдущей реализацией:
export class SomeComponent implements OnInit {
constructor(
private logger: LoggerService,
) { }
ngOnInit() {
try {
// do something that may throw an error
} catch (error) {
this.logger.logError(error.message); // no need to change anything - works the same wy as previously
}
}
}
Создание глобальных одиночек (singletons)
Angular убеждается, что внутри заданного модуля все компоненты получают одинаковые экземпляры зависимостей. Например, если мы используем сервис в AppModule, затем объявляем SomeCompoоnent в модуле, а затем вводим туда SomeService, а также в AnotherComponent, объявленный в том же модуле, то SomeComponent и OtherComponent получат тот же экземпляр SomeService.
Однако в различных модулях все по-разному: каждый модуль имеет собственный инжектор зависимостей и будет передавать различные экземпляры того же сервиса для различных модулей.
А если мы хотим тот же экземпляр для каждого компонента, сервиса или чего-либо, что использует наши зависимости? Тогда нам точно нужен singleton.
Мы можем реализовать singleton, использовав useFactory. Сначала нам надо будет реализовать статический метод getInstance для нашего класса, а затем вызвать его с factory.
Допустим, нам надо реализовать простое хранилище данных во время выполнения программы, вроде базового redux. Код реализации метода getInstance:
export class StoreService {
// maybe store methods like dispatch, subscribe or others
private static instance: StoreService = null;
static getInstance(): StoreService {
if (!StoreService.instance) {
StoreService.instance = new StoreService();
}
return StoreService.instance;
}
}
Теперь мы должны сообщить Angular о нашем методе getInstance
export function storeFactory(): StoreService {
return StoreService.getInstance();
}
@NgModule({
providers: [
{provide: StoreService, useFactory: storeFactory}
],
// other metadata
})
export class AppModule { }
На этом все. Теперь мы можем просто инжектить StoreService в любой компонент и быть уверены, что имеем дело с тем же экземпляром. Мы можем передавать зависимости в factory, как обычно.
Общие советы по DI
1. Всегда инжектите значение в ваш компонент - никогда не полагайтесь на глобальные переменные, переменные из других файлов и тому подобное. Стоит помнить, если метод вашего класса ссылается на свойства, которые не принадлежат этому классу, такое значение, скорее всего, инжектиться как зависимость (как мы делали с переменными среды).
2. Никогда не используйте строчные токены для DI. В Angular есть возможность передать строку в декоратор Inject для поиска зависимостей. Однако вы всегда можете допустить ошибку, даже если у вас есть IntelliSense. Лучше использовать InjectionToken.
3. Помните, что экземпляры сервисов распространяются между компонентами на уровне модуля. Если какие-либо свойства такого сервиса не будут меняться внешне, их можно обозначить как readonly.
Теги: angular
0 комментариев
Сортировка:
Добавить комментарий
IT Новости
Смотреть все
|
__label__pos
| 0.981595 |
Sign In Register
How can we help you today?
Start a new topic
Making a simple thing like Global Chat....
Hello, I'm making a Global Chat and as the support suggested in other topic, trying to make a runtime collection of online players and sending the chat message to every one of them.
I have this cloud code attached to GS_PLAYER_CONNECT
var json = {"pid":Spark.getPlayer().getPlayerId()};
var collection = Spark.runtimeCollection("online");
collection.save(json);
and this to GS_PLAYER_DISCONNECT
var collection = Spark.runtimeCollection("online");
collection.remove({"pid": Spark.getPlayer().getPlayerId()});
Testing on Test Harness it looks like it's working but does this code guarantee the list doesn't get messed up?
And sending the same chat message to every player, is it really a good thing to do?
I mean.., isn't it better to create a runtime collection that stores chat messages and let players retrieves messages if there're new messages? To do that performantly, you would need to store chat index to somewhere outside database and I thought I could use property to store the index but it seems properties can't be modified programmatically.
ok.. the code on GS_PLAYER_CONNECT is now this
var pid = Spark.getPlayer().getPlayerId();
var json = {"pid":pid};
var collection = Spark.runtimeCollection("online");
collection.update(json,json,true,false);
1 person likes this
And the cloud code on my chat event.
var chatString = Spark.getData().Text;
var dName = Spark.getPlayer().getDisplayName();
var json = {"displayName":dName, "Message":chatString};
var msg = Spark.message(null);
var collection = Spark.runtimeCollection("online");
var ids = collection.distinct("pid")
msg.setPlayerIds(ids);
msg.setMessageData(json);
msg.send();
Hope this saves someones's headache.
1 person likes this
Well sorry, forget about the "good thing to do?" lines and below. Maybe I'm being a paranoid.
And collection.save(json); should be collection.insert(json);
Everyone googling how to do global chat, this is it.
I have tried the other crazy batch job systems and making this weird global team system that many people have cobbled together.
This is the best solution. I have no idea why global message isn't a default function but that's ok.
Thank you so much elin cat
Login to post a comment
|
__label__pos
| 0.799836 |
恢复mysql删除数据
时间: 2023-08-23 13:15:54 浏览: 56
恢复MySQL删除的数据可以通过使用binlog文件进行回滚操作。Binlog是MySQL的二进制日志文件,记录了数据库的所有修改操作。通过指定特定的binlog文件、起始位置和结束位置,可以将删除的数据恢复回来。 如果使用mysqlbinlog命令,可以将binlog文件中的删除操作导出为SQL语句,并通过管道传递给mysql命令执行恢复操作。命令如下: ```shellmysqlbinlog /var/lib/mysql/mysql-bin.000004 --start-position1974 --stop-position2257 | mysql -uroot -p123456``` 以上命令将指定的binlog文件范围内的删除操作导入到MySQL数据库中,实现数据的恢复。
相关问题
mysql删除数据怎么恢复
如果你使用的是InnoDB存储引擎,可以尝试使用mysqlbinlog命令来查看binlog文件并找回被删除的数据。 1.首先查看mysql的数据目录下是否存在binlog日志文件,使用`show variables like 'log_bin%';`命令可以查看是否开启了binlog日志功能,以及binlog日志的文件名和路径。 2.使用`mysqlbinlog [binlog日志文件名] > [输出文件名]`命令将binlog日志文件转换成可读文件,输出文件名可以指定为任意一个文本文件。 3.在输出文件中搜索删除数据的语句,找到相关的语句后,将其逆向执行即可恢复被删除的数据。 需要注意的是,使用binlog日志恢复数据并不是百分之百成功的,因为如果binlog日志文件被清除或者被覆盖,那么恢复数据的可能性就非常小了。因此,建议在日常使用数据库时,加强对数据的备份和恢复策略,以确保数据的安全和完整性。
mysql怎么恢复删除的数据
MySQL支持使用两种方法恢复删除的数据:使用备份或使用日志文件。 1. 使用备份 如果你有备份数据库,可以恢复删除的数据。恢复数据库时,可以使用以下命令: ``` mysql -u username -p database_name < backup_file.sql ``` 其中,`username` 是数据库的用户名,`database_name` 是要恢复的数据库名称,`backup_file.sql` 是备份文件名称。 2. 使用日志文件 如果没有备份,可以使用MySQL二进制日志文件进行恢复。以下是恢复过程: 1) 确定删除数据的时间点。 2) 找到最近的备份文件和二进制日志文件。 3) 从备份文件中恢复数据库。 ``` mysql -u username -p database_name < backup_file.sql ``` 4) 通过二进制日志文件找到删除数据的语句。 ``` mysqlbinlog binlog_file | grep delete ``` 5) 复制删除数据的语句并执行。 6) 检查恢复的数据是否正确。 以上是恢复MySQL删除数据的两种方法。建议定期备份数据库以避免数据丢失。
相关推荐
最新推荐
recommend-type
MySQL中truncate误操作后的数据恢复案例
在MySQL数据库管理中,误操作是一个常见的问题,尤其是像`TRUNCATE TABLE`这样的命令,它会立即删除表中的所有数据且无法通过`ROLLBACK`撤销。本案例将详细讲解如何在MySQL中,特别是在Percona Server 5.6.16环境下...
recommend-type
Mysql数据库误删恢复方法.docx
总之,恢复MySQL数据库需要谨慎操作,尤其是在涉及文件系统级别的操作时。始终确保有最新的数据备份,以防止不可预见的问题。如果在恢复过程中遇到困难,及时查阅MySQL官方文档或寻求专业支持。
recommend-type
MySQL删除有外键约束的表数据方法介绍
本文将详细介绍在MySQL中如何安全地删除这些受限的数据。 首先,当你尝试删除一个有外键约束的表或者表中的数据时,如果这个表被其他表引用,MySQL会抛出错误[Err] 1451,提示“Cannot delete or update a parent ...
recommend-type
Linux上通过binlog文件恢复mysql数据库详细步骤
本文将详细阐述如何通过binlog文件恢复MySQL数据库的步骤。 首先,理解binlog的含义与作用至关重要。MySQL的binlog是一种记录所有改变数据库状态的非归档日志,包括插入、更新和删除操作。它不仅用于主从同步,确保...
recommend-type
MySQL误操作后快速恢复数据的方法
binlog2sql是一款开源的MySQL binlog解析工具,它能够帮助我们解析binlog,生成误操作前后的SQL语句,从而实现快速的数据恢复。 要使用binlog2sql,首要条件是MySQL服务器已经开启了binlog,并配置了相关的参数,...
recommend-type
Node.js实战:快速入门,全面解析
"Node.js即学即用是一本面向JavaScript和编程有一定基础的读者的入门书籍,旨在教授如何利用Node.js构建可扩展的互联网应用程序。本书详尽介绍了Node.js提供的API,同时深入探讨了服务器端事件驱动开发的关键概念,如并发连接处理、非阻塞I/O以及事件驱动编程。内容覆盖了对多种数据库和数据存储工具的支持,提供了Node.js API的实际使用示例。" 在Node.js的世界里,事件驱动模型是其核心特性之一。这种模型使得Node.js能够高效地处理大量并发连接,通过非阻塞I/O操作来提高性能。在本书中,读者将学习如何利用Node.js的异步编程能力来创建高性能的网络应用,这是Node.js在处理高并发场景时的一大优势。 Node.js的API涵盖了网络通信、文件系统操作、流处理等多个方面。例如,`http`模块用于创建HTTP服务器,`fs`模块提供了对文件系统的读写功能,而`stream`模块则支持数据的高效传输。书中会通过实例来展示如何使用这些API,帮助读者快速上手。 对于数据库和数据存储,Node.js有丰富的库支持,如MongoDB的`mongodb`模块、MySQL的`mysql`模块等。书中会讲解如何在Node.js应用中集成这些数据库,进行数据的增删改查操作,以及如何优化数据访问性能。 此外,本书还会介绍Node.js中的模块系统,包括内置模块和第三方模块的安装与使用,如使用`npm`(Node Package Manager)管理依赖。这使得开发者可以轻松地复用社区中的各种工具和库,加速开发进程。 《Node.js即学即用》是一本全面的实战指南,不仅适合初学者快速掌握Node.js的基础知识,也适合有一定经验的开发者深入理解Node.js的高级特性和最佳实践。通过阅读本书,读者不仅可以学习到Node.js的技术细节,还能了解到如何构建实际的、可扩展的网络应用。
recommend-type
管理建模和仿真的文件
管理Boualem Benatallah引用此版本:布阿利姆·贝纳塔拉。管理建模和仿真。约瑟夫-傅立叶大学-格勒诺布尔第一大学,1996年。法语。NNT:电话:00345357HAL ID:电话:00345357https://theses.hal.science/tel-003453572008年12月9日提交HAL是一个多学科的开放存取档案馆,用于存放和传播科学研究论文,无论它们是否被公开。论文可以来自法国或国外的教学和研究机构,也可以来自公共或私人研究中心。L’archive ouverte pluridisciplinaire
recommend-type
nginx配置中access_log指令的深入分析:日志记录和分析网站流量,提升网站运营效率
 # 1. nginx access_log指令概述** nginx 的 `access_log` 指令用于记录服务器处理客户端请求的信息。它可以生成日志文件,其中包含有关请求的详细信息,例如请求方法、请求 URI、响应状态代码和请求时间。这些日志对于分析网站流量、故障排除和性能优化至关重要。 `access_log` 指令的基本语法如下:
recommend-type
opencvsharp连接工业相机
OpenCVSharp是一个.NET版本的OpenCV库,它提供了一种方便的方式来在C#和Mono项目中使用OpenCV的功能。如果你想要连接工业相机并使用OpenCVSharp处理图像数据,可以按照以下步骤操作: 1. 安装OpenCVSharp:首先,你需要从GitHub或NuGet包管理器下载OpenCVSharp库,并将其添加到你的项目引用中。 2. 配置硬件支持:确保你的工业相机已安装了适当的驱动程序,并且与计算机有物理连接或通过网络相连。对于一些常见的工业相机接口,如USB、GigE Vision或V4L2,OpenCV通常能够识别它们。 3. 初始化设备:使用OpenCVS
recommend-type
张智教授详解Java入门资源:J2SE与J2ME/J2EE应用
本PPT教程由主讲教师张智精心制作,专为Java初学者设计,旨在快速提升学习者的Java编程入门能力,以应对各类考试需求。教程内容涵盖了Java的基础知识和实用技巧,从语言的历史背景和发展到核心特性。 1. **Java简介**: - Java起源于1990年由James Gosling领导的小组,原名Oak,目标是为家用电器编程,后来在1995年更名为Java。Java是一种平台无关、面向对象的语言,其特点包括:平台无关性,通过JVM实现跨平台;面向对象,强调代码重用;简单健壮,降低出错风险;解释性,源代码编译成字节码执行;分布式,支持网络通信;安全,防止非法操作;多线程,支持并发处理;动态性和可升级性;以及高性能。 2. **Java平台版本**: - Java有三个主要版本: - 微型版(J2ME):针对移动设备和嵌入式设备,如手机或IoT设备。 - 标准版(J2SE,Java SE):适用于桌面和服务器开发,涵盖了日常应用开发。 - 企业版(J2EE,Java EE):为企业级应用和Web应用设计,如企业级服务器和Web服务。 3. **Java环境配置**: - 要开始Java编程,首先需要下载Java JDK,如Java 8。然后配置Java环境变量,例如设置JAVA_HOME指向JDK安装路径,CLASSPATH用于指定类库搜索路径,以及添加JDK bin和jre bin到PATH中,以便执行Java命令。 4. **常用IDE工具**: - Eclipse是一款推荐使用的Java IDE,它提供了集成开发环境,便于代码编写、调试和测试。下载Eclipse后,通常直接解压安装即可。 整个教程围绕Java的核心概念展开,从基础语法讲解到实践项目,适合初学者系统地学习和巩固Java知识,无论是为了学术研究还是职业发展,都能提供有效的学习资源。通过本资源,初学者能够快速掌握Java编程,并为进一步深入学习和实战项目打下坚实基础。
|
__label__pos
| 0.608906 |
Question: Is WeChat safe to use?
The app doesnt provide anything that might look like some kind of protection. It lacks end-to-end encryption and has many security holes. Apart from that, it is used by the Chinese authorities for the purpose of censorship and mass surveillance on a global scale. So, by all means, it isnt safe.
Is WeChat Chinese spyware?
TikTok, WeChat and thousands of other apps from China look harmless but are, in fact, malware, experts say. The apps cleverly disguise their origin.
Is WeChat a Chinese?
This is the first time WeChat, which operates as a super app in China, has had to take a step of this kind in more than a decade. In addition to offering a messaging service, Weixin also allows users to make online payments and access a range of financial services. (In other markets, its a different story.
Can WeChat be hacked?
Partially, hackers manage to gain access to users accounts due to weak WeChat passwords they use. Even though WeChat accounts get hacked quite often, it does not mean that you are doomed if you use this service. You just need to take several security measures to prevent WeChat hacking. Keep on reading!
Does WeChat steal photos?
Like most social media apps, the WeChat app on iPhone and Android has full permission to activate microphones and cameras, track your location, access your address book and photos, and copy all of this data at any time to their servers.
Does WeChat install spyware?
WeChat appears to be spying on foreigners chats to fuel censorship algorithm. In a study, researchers found that WeChat is looking in on chats from foreigners without disclosing that its doing so. The company has been known to monitor all of the chats of Chinese users as they come through.
Are WeChat messages monitored?
Chinas do-everything app, WeChat, has become one of the most powerful tools in Beijings arsenal for monitoring the public, censoring speech and punishing people who voice discontent with the government.
Does WeChat steal my data?
As its privacy policy makes clear, it will always comply with any requests from state authorities for information about data found on its apps. More worryingly, as WeChat does not deploy end-to-end encryption, corporate information that has been shared in a group could be stolen by state-sponsored actors.
How can you tell if someone is active on WeChat?
You cannot tell if someone is online in WeChat. Notifications are not something WeChat wanted to get involved with and the online status is just one of them. There is no alert or marker to show whether someone is online in WeChat. If you want to know if someone is there, youre going to have to ask them.
Why does WeChat keep blocking me?
Your account may have been blocked for one or more of the following reasons: 1. You have downloaded WeChat from an unofficial channel. Youre using an Android emulator (such as Bluestack, Andy, Youwave) or other unofficial plugins to run WeChat.
Contact us
Find us at the office
Kanarek- Prusa street no. 1, 91754 Niamey, Niger
Give us a ring
Saivon Onweller
+48 362 334 509
Mon - Fri, 7:00-18:00
Tell us about you
|
__label__pos
| 0.994492 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.